# Impact of Geolocation Data on Augmented Reality Usability: A Comparative User Test
[PERSON] 1, [PERSON] 1, [PERSON] 3, [PERSON] 1, [PERSON] 2, [PERSON] 1

1 Media Engineering Institute (MEI), School of Engineering and Management Vaud, HES-SO, Yverdon-les-Bains, Switzerland - (julien.mercier, nicolas.chabloz, gregory.dozot, olivier.ertz, daniel.rappo)@heig-vd.ch
2 Lab-STICC, UMR 6285, CNRS, Universite Bretagne Sud, Vannes, France - [EMAIL_ADDRESS]
3 University of Teacher Education, HES-SO, Lausanne, Switzerland - [PERSON]
## Abstract
While the use of location-based augmented reality (AR) for education has demonstrated benefits on participants' motivation, engagement, and physical activity, geolocation data inaccuracy causes augmented objects to jitter or drift, which degrades user experience. We developed a free and open source web AR application and conducted a comparative user test (n = 54) to assess the impact of geolocation data on usability, exploration, and focus. A control group explored biodiversity in nature using the system with the device's embedded GNSS data, and an experimental group used an external module providing RTK data. During the test, eye tracking data, geolocated traces, and in-app user-triggered events were recorded. Participants answered usability questionnaires (SUS, UEQ, HARUS). We found that the geolocation data the RTK group was exposed to was, on average, less accurate than that of the control group. The RTK group reported lower usability scores on all scales, 5 out of 9 of which were significantly lower, indicating that inaccurate data negatively predicts usability. The GNSS group walked more than the RTK group, indicating a partial effect on exploration. We found no significant effect on interaction time with the screen, indicating no specific relation between data accuracy and focus. While RTK data did not allow us to improve the usability of location-based AR interfaces, the results allow us to assess our system's overall usability as excellent and to define optimal operating conditions for future use with pupils.
## 1 Introduction
This study is part of the ongoing _BiodivAR_ project, which aims to assess the potential benefits of using augmented reality (AR) for outdoor education on biodiversity. In AR interfaces, digital objects are overlaid on top of the user's field of view in real-time, through the screen of a mobile device or a head-mounted display. When used sensibly in an educational setting, AR may convey the impression of an enriched environment and make the material more attractive, thus motivating students to learn ([PERSON], 2020, [PERSON] et al., 2022). The most reported positive effects of AR in education are learning gains and motivation ([PERSON] et al., 2014). Our research focuses on _location-based_ AR in particular, where the position of augmented objects is computed from their geographic coordinates relative to the user's location as estimated by the mobile device's GNSS. With this technology, augmented objects can be built remotely from any given geodata, as opposed to marker-based AR, which requires physical markers to be placed on target locations. Location-based AR especially promotes learning in context ([PERSON] et al., 2021, [PERSON] et al., 2014) and ecological engagement ([PERSON] et al., 2010), and causes users to experience a positive interdependence with nature ([PERSON] et al., 2011), which fosters improved immersion and learning. Last but not least, location-based AR shows positive effects on the physical activity of users across genders, ages, weight statuses, and prior activity levels ([PERSON] et al., 2017). However, location-based AR requires steady and continuously accurate data to operate, and while GNSS technology has improved in the past decades, it has been more of an evolution than a revolution.
Usability issues have been reported by a number of studies ([PERSON] et al., 2014, [PERSON] et al., 2009, [PERSON] and [PERSON], 2013, [PERSON] et al., 2011, [PERSON] et al., 2012), most of which blame the inaccuracy of mobile devices' embedded GNSS sensors. Some studies considered that these recurring problems made AR distracting and frustrating and eventually favored marker-based AR, which is more advanced and offers better user experience ([PERSON] and [PERSON], 2013, [PERSON] et al., 2018).
## 2 Background
A first proof-of-concept was developed in 2017, featuring a series of geolocated points of interest (POIs) on biodiversity. A test with ten-year-old pupils confirmed the relevance of using AR to support educational field trips ([PERSON] et al., 2018) while also revealing usability challenges:
1. The system should allow non-expert users to create AR experiences ([PERSON] et al., 2015);
2. Users should be able to publish observations rather than being restricted to a passive viewing role;
3. The instability of augmented objects deteriorates usability. Participants spent 88.5 % of the time looking at the tablet rather than at the surrounding nature. This imbalance could be partly related to inaccurate geolocation data: participants were observed spending considerable time reorienting themselves ([PERSON] et al., 2018).
In order to address these identified issues, we developed _BiodivAR1_, a free and open source (GNU GPLv3.0) web application using a user-centered design process ([PERSON] et al., 2023).
It was built with the web framework A-Frame2, for which we also created a custom library3 for the creation of WebXR location-based objects. We used the Leaflet4 library for the interactive maps. _BiodivAR_ enables the creation and visualization of geolocated POIs in AR (see Figure 1) and provides a cartographic authoring tool for the collaborative management of AR environments (see Figure 2). Environments can be shared publicly with or without editing privileges. The application allows anyone without technological know-how to create AR environments by importing/exporting geospatial data and to style POIs by attaching media to them. Media can be location-triggered (visible/audible) according to various distance thresholds set by the author.
Footnote 5: [https://www.ardusimple.com/product/rk-handheld-surveyor-kit/](https://www.ardusimple.com/product/rk-handheld-surveyor-kit/)
## 3 Research Goals
The overall purpose of our research is to assess the potential benefits of using this application in the context of biodiversity education. Before introducing the tool to pupils, it seemed important to ensure its usability. This comparative user test will allow us to define and guarantee the best possible conditions of use for a younger audience. The goals of this study can be summarized as follows:
1. Assess the overall usability of the AR application.
2. Assess the impact of geolocation data accuracy on usability, exploration, and focus.
3. Gather user feedback for future improvements4. Footnote 4: The qualitative feedback was not included in this paper, as we focused on the quantitative data and group comparison.
The literature review and the observations made during the first iteration led us to propose the following hypothesis: Inaccurate geolocation data negatively affects usability. Additionally, we are looking to investigate the impact that geolocation data accuracy may have on exploration and focus in location-based AR, about which we have not been able to find any literature. The resulting research questions are:
Q1: Does geolocation data accuracy predict usability scores?
Q2: Is geolocation data accuracy related to exploration?
Q3: Is geolocation data accuracy related to focus?
## 4 Materials and Methods
### Experimental design
The present study aims to measure and compare the usability of a location-based AR application used in combination with different geolocation data sources. Using our authoring tool, we created an AR environment with POIs on biodiversity in the surroundings of the School of Engineering and Management Vaud in Yverdon-les-Bains (Switzerland). After a brief introduction to the tool, all participants freely explored the AR environment for 15 minutes using a Samsung Galaxy Tab Active3 tablet with a SIM card for cellular data. As shown in Figure 3, the comparative user test (n = 54) comprises two groups:
**GNSS**: the control group received geolocation data from the GNSS sensor embedded in the mobile device;
**RTK**: the experimental group received geolocation data from an external ArduSimple RTK kit5.
Footnote 6: Exploration is represented by the distance walked, the number of POIs visited, and the number of times the 2D map was opened.
### Participants
The sample includes 54 participants (21 men, 33 women), with a mean age of M = 25.72 (SD = 4.80). They are students and staff of the School of Engineering and Management Vaud, and each signed an informed consent form for the use of the collected data. Login credentials (identifier + password)
Figure 1: _BiodivAR_'s AR interface: a) view of two POIs from a distance; b) the 2D map is opened in split view; c) after entering the radius of a POI, contextual data on the adjacent plant specimen is triggered. [https://biodivar.heig-vd.ch/](https://biodivar.heig-vd.ch/)
Figure 3: Experimental design of the comparative user test.
Figure 2: _BiodivAR_'s cartographic authoring tool for the collaborative management of AR environments.
were created for each participant to record their data separately and facilitate comparison. Among them, 47 agreed to wear eye-tracking glasses, of which 41 successfully recorded data. Participants were randomly assigned to each group. The control group's (GNSS) mean age is M = 27.5 (SD = 6.09), and it includes 12 men and 15 women. The experimental group's (RTK) mean age is M = 24.2 (SD = 2.22), and it includes 9 men and 18 women. The first participant had to be excluded from the final results because they experienced numerous crashes due to a bug that was fixed for subsequent participants; the treatment they received was therefore too different to compare.
### Data collection and processing
The four main concepts our study seeks to connect are "geolocation data accuracy", "usability", "exploration", and "focus". The measurable observations we chose to represent those concepts are listed in Table 1. In our experiment, the two groups (or treatments) operationalize the concept of "geolocation data accuracy". This concept is represented by two variables: _quality_ and _continuity_. The accuracy attribute is provided by the Geolocation API along with the horizontal location data as latitude and longitude9. It denotes the accuracy level of the latitude and longitude coordinates in meters. We use the average accuracy participants were exposed to while in AR mode as the indicator for quality. However, in the specific context of location-based AR, sudden changes in data accuracy heavily impact the display of augmented objects in the interface. One indicator for continuity in the data is thus the amount of outliers, i.e. points that are visibly out of a user's trajectory (as shown in Figure 4). An additional indicator for continuity is the standard deviation of the data accuracy the participants of each group were exposed to. The concept of "usability" is represented by a series of nine variables whose indicators are the scales of the three questionnaires (SUS, HARUS, UEQ): _overall usability_, _ease of handling_, _ease of understanding_, _attractiveness_, _user-friendliness_, _efficiency_, _dependability_, _motivation_, and _innovativeness_. The concept of "exploration" is represented by three variables: _quantity_, _diversity_, and _ease_. The distance walked is the indicator of the quantity of exploration. The amount of POIs visited is the indicator of the diversity of exploration. Heavy use of the 2D map may indicate that participants required assistance in navigating; the amount of times the 2D map was opened is thus the indicator of the ease users had exploring.
Finally, the concept of "focus" is represented by a _screen interaction_ variable, whose indicator is the amount of time participants spent interacting with the tablet screen _versus_ with the real world.
Footnote 9: [https://w3c.github.io/geolocation-api](https://w3c.github.io/geolocation-api)
#### 4.3.1 Geolocation data accuracy
During the test, participants' geographical coordinates were logged at 1 Hz. Each log entry also contains an attribute for location accuracy, the user ID, and a timestamp. The resulting user trajectories can be visualized in the application (see Figure 4) and downloaded as GeoJSON files for further analysis. The color of the trajectory changes when the AR session is stopped and resumed. We downloaded the data and calculated the mean location accuracy each participant was exposed to. As shown in Figure 4, the trajectories (in particular those of the RTK group) contained outliers, which were removed manually using the free and open source software QGIS to get a more accurate estimate of the actual distance travelled (as an indicator of our "exploration quantity" variable, see 4.3.3). By calculating the difference in the number of points before and after this manual processing, the outliers were counted for each participant. Once the data was cleaned, we calculated the total distance walked by each participant. Because the duration of each participant's test varied (min = 9 min 14 s, max = 24 min 11 s), the data was normalized to a duration of 15 minutes. This allowed us to calculate:
1. The average geolocation data accuracy
2. The amount of outliers in the data
3. The standard deviation of the geolocation data accuracy
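The outliers were removed manually in QGIS; for illustration only, the same pipeline (filtering implausible jumps, summing segment lengths, normalizing to 15 minutes) could be sketched programmatically. The track format, the 3 m/s walking-speed cutoff, and the function names below are our own assumptions, not part of the study's actual processing:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two WGS84 points, in meters."""
    r = 6371000.0  # mean Earth radius
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def walked_distance_m(track, max_speed_ms=3.0):
    """Sum segment lengths of a 1 Hz track [(lat, lon, t_seconds), ...],
    skipping segments whose implied speed exceeds max_speed_ms
    (a crude automated stand-in for manual outlier removal)."""
    total = 0.0
    for (la1, lo1, t1), (la2, lo2, t2) in zip(track, track[1:]):
        d = haversine_m(la1, lo1, la2, lo2)
        dt = max(t2 - t1, 1e-9)
        if d / dt <= max_speed_ms:
            total += d
    return total

def normalize_to_15_min(distance_m, duration_s):
    """Scale a distance to a nominal 15-minute session."""
    return distance_m * (15 * 60) / duration_s
```

A speed threshold is one plausible automated filter; a production pipeline would likely also use the reported accuracy attribute to down-weight low-quality fixes.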
#### 4.3.2 Usability
Immediately after the test, participants answered an online survey containing demographic questions (age, gender), an open question for qualitative feedback, and three usability questionnaires:
* SUS (System Usability Scale) is a generic, technology-independent 10-item questionnaire on a 5-point Likert scale, frequently used for the generic evaluation of a system ([PERSON], 1996). The Cronbach's alpha of the SUS questionnaire is 0.79, showing appropriate internal consistency. In accordance with the instructions of the scale's authors, the SUS score is calculated as follows: 1 point is subtracted from each odd-numbered (positively phrased) item's score, and each even-numbered (negatively phrased) item's score is subtracted from 5. The processed scores are added together and multiplied by 2.5 to get an individual user's score on a scale of 100. While a comparison between two scores is self-explanatory, we used an adjective scale ([PERSON], 2009) to qualify the results individually.
| Concept | Variable | Indicator |
|---|---|---|
| Geolocation data accuracy | Quality | Average geolocation data accuracy |
| | Continuity | Amount of outliers |
| | | Standard deviation of data accuracy |
| Usability | Overall usability | SUS score |
| | Ease of handling | HARUS (manipulability) score |
| | Ease of understanding | HARUS (comprehensibility) score |
| | Attractiveness | UEQ (attractiveness) score |
| | User-friendliness | UEQ (perspicuity) score |
| | Efficiency | UEQ (efficiency) score |
| | Dependability | UEQ (dependability) score |
| | Motivation | UEQ (stimulation) score |
| | Innovativeness | UEQ (novelty) score |
| Exploration | Quantity | Distance walked |
| | Diversity | Amount of POIs visited |
| | Ease | Amount of times 2D map was opened |
| Focus | Screen interaction | Interaction time with tablet screen |

Table 1: Operationalization table.
Figure 4: a) A trajectory from the GNSS group. The short light green line is at an impossible location (on top of a tall building), indicating outliers. b) A trajectory from the RTK group. The star-shaped spikes indicate the presence of many outliers.
* HARUS (Handheld Augmented Reality Usability Scale) is a mobile AR-specific 16-item questionnaire ([PERSON] et al., 2014) on a 7-point Likert scale that focuses on handheld devices and emphasizes perceptual and ergonomic issues. The Cronbach's alpha of the HARUS questionnaire is 0.798, showing appropriate internal consistency. It has two components: _manipulability_, the ease of handling the AR system, and _comprehensibility_, the ease of reading the information presented on screen. In accordance with the instructions of the scale's authors, the HARUS scores are calculated as follows: each odd-numbered (negatively phrased) item's score is subtracted from 7, and 1 point is subtracted from each even-numbered (positively phrased) item's score. The processed scores for items 1 to 8 are added together, divided by 48, and multiplied by 100 to get the individual "manipulability" score on a scale of 100. Similarly, the processed scores for items 9 to 16 are added together, divided by 48, and multiplied by 100 to get the individual "comprehensibility" score on a scale of 100. HARUS was designed so that its scores are commensurable with SUS scores.
* UEQ (User Experience Questionnaire) is a 26-item questionnaire in the form of semantic differentials: each item is scored on a 7-point scale (from -3 to +3, with 0 as neutral) with two terms of opposite meaning at each extreme (e.g. annoying/enjoyable). It provides a comprehensive measure of user experience ([PERSON] et al., 2008). It includes six scales, covering classical usability aspects such as _efficiency_ (can users solve their tasks without unnecessary effort?), _perspicuity_ (is it easy to learn how to use the application?), and _dependability_ (does the user feel in control of the interaction?), as well as broader user experience aspects such as _attractiveness_ (do users like the application?), _novelty_ (is the application innovative and creative?), and _stimulation_ (is it exciting and motivating to use the application?). UEQ is routinely used to statistically compare two versions of a system to check which one offers the better user experience; the evaluations of both systems or versions are compared on the basis of the scale means for each UEQ scale. _Attractiveness_ is calculated by averaging the scores of items 1, 12, 14, 16, 24, and 25. _Perspicuity_ is calculated by averaging the scores of items 2, 4, 13, and 21. _Efficiency_ is calculated by averaging the scores of items 9, 20, 22, and 23. _Dependability_ is calculated by averaging the scores of items 8, 11, 17, and 19. _Stimulation_ is calculated by averaging the scores of items 5, 6, 7, and 18. _Novelty_ is calculated by averaging the scores of items 3, 10, 15, and 26. Values range between -3 (horribly bad) and +3 (extremely good), but in general only values in a restricted range will be observed: the calculation of means over a panel of participants makes it extremely unlikely to observe values above +2 or below -2, as specified in the UEQ handbook ([PERSON], 2015).
As per their interpretation, values between -0.8 and 0.8 correspond to a neutral evaluation of the corresponding scale, and values greater than 0.8 represent a positive evaluation.
These questionnaires provided scores for the nine scales reported in Table 1 as indicators of our usability variables.
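As a sketch, the three scoring procedures described above can be expressed in code; the item numbering and formulas follow the descriptions in this section, while the function names are illustrative:

```python
def sus_score(responses):
    """SUS: 10 items on a 1-5 scale. Odd items are phrased positively
    (contribution = response - 1), even items negatively (5 - response).
    The sum is multiplied by 2.5 to yield a 0-100 score."""
    assert len(responses) == 10
    total = sum((r - 1) if i % 2 == 1 else (5 - r)
                for i, r in enumerate(responses, start=1))
    return total * 2.5

def harus_scores(responses):
    """HARUS: 16 items on a 1-7 scale. Odd items are phrased negatively
    (7 - response), even items positively (response - 1). Items 1-8 form
    manipulability, items 9-16 comprehensibility, each scaled to 0-100."""
    assert len(responses) == 16
    adj = [(7 - r) if i % 2 == 1 else (r - 1)
           for i, r in enumerate(responses, start=1)]
    return sum(adj[:8]) / 48 * 100, sum(adj[8:]) / 48 * 100

# Item-to-scale mapping as listed in the text (1-indexed item numbers).
UEQ_SCALES = {
    "attractiveness": [1, 12, 14, 16, 24, 25],
    "perspicuity":    [2, 4, 13, 21],
    "efficiency":     [9, 20, 22, 23],
    "dependability":  [8, 11, 17, 19],
    "stimulation":    [5, 6, 7, 18],
    "novelty":        [3, 10, 15, 26],
}

def ueq_scale_means(responses):
    """UEQ: 26 item scores on a -3..+3 scale; returns the mean per scale."""
    assert len(responses) == 26
    return {scale: sum(responses[i - 1] for i in items) / len(items)
            for scale, items in UEQ_SCALES.items()}
```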
#### 4.3.3 Exploration
During the test, various in-app, user-triggered events were recorded by the application. These included: when the AR session was initiated or exited, when the 2D map was opened or closed, and when the triggering radius of a POI was entered or exited. Each log also contains the coordinates the action took place at, the user ID and a timestamp. The resulting users' action log can be visualized in the application and downloaded as GeoJSON files. Events are represented with red circles on the 2D map (see Figure 4). We downloaded the data and calculated the number of POIs each participant visited as well as how many times they opened the 2D map. These values (POIs visited, 2D map opened) were normalized for a test duration of 15 minutes. This allowed us to calculate:
1. The amount of POIs visited
2. The amount of times the 2D map was opened
The distance walked by each participant was calculated from the geolocation data (see 4.3.1).
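For illustration, deriving the two normalized indicators from such an event log might look like the following minimal sketch; the event names `poi_enter` and `map_open` are hypothetical, as the application's actual log schema may differ:

```python
def exploration_indicators(events, duration_s):
    """events: list of dicts like {"event": "poi_enter" | "map_open" | ...}
    (field and event names are illustrative, not the app's real schema).
    Returns (POIs visited, map openings), normalized to a 15-minute session."""
    pois = sum(1 for e in events if e["event"] == "poi_enter")
    maps = sum(1 for e in events if e["event"] == "map_open")
    factor = (15 * 60) / duration_s
    return pois * factor, maps * factor
```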
#### 4.3.4 Focus
The goal of using eye tracking glasses in our study is to determine for how long participants were looking at or away from the tablet screen. 47 out of 54 participants were able, and agreed, to wear eye trackers (Tobii Pro Glasses 3), which recorded their gaze for the duration of the test. The 7 remaining participants either chose not to or could not because they wear prescription glasses. Despite rigorous implementation, 6 recordings did not work as expected and no files were saved. The 41 remaining recordings were imported into Tobii's analysis software. Unfortunately, its tools do not support the tracking of moving areas of interest (i.e. the surface of the tablet). We exported the videos with the overlaid gaze point and extracted 10 frames per second, resulting in a dataset of 380K images, an instance of which is shown in Figure 5. We attempted to classify the data with OpenCV pattern recognition, but the variability prevented us from obtaining any results. We resolved to train a deep learning multiclass image classifier by fine-tuning a pretrained vision transformer (ViT) model on our dataset ([PERSON] et al., 2020). We first manually labeled a random selection of 10K frames with "in" or "out" labels corresponding to whether the gaze point was in or out of the tablet screen (see Figure 5). After training for only one epoch using Google Colaboratory and obtaining a satisfactory validation accuracy of 95%, we ran inference on the whole dataset, which provided a label for every frame10. The labels were encoded in order to calculate the ratio of time each user spent looking at the tablet screen _versus_ outside of it, at the real world.
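Once every frame carries an "in"/"out" label, the screen-interaction indicator reduces to a simple proportion; a minimal sketch (the string label encoding is our assumption):

```python
def screen_interaction_ratio(frame_labels):
    """frame_labels: per-frame classifier output, "in" (gaze on the tablet
    screen) or "out" (gaze on the real world), sampled at 10 fps.
    Returns the fraction of frames spent looking at the screen."""
    if not frame_labels:
        return 0.0
    return sum(1 for label in frame_labels if label == "in") / len(frame_labels)
```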
Figure 5: Eye tracking data sample. The user’s gaze is located within the tablet screen area.
## 5 Results
### Data analysis
Statistical analyses were performed with the free and open platform Jamovi (The Jamovi project, 2022). In the following subsections, we report descriptive statistics (M, SD) and compare our groups (GNSS _versus_ RTK) using independent Student _t_-tests to assess the extent to which the groups differ on our variables of interest. In cases where the homogeneity of variances assumption is not met, we used a Welch _t_-test, which is more robust11.
Footnote 11: The data is available here: [https://zenodo.org/record/7845707](https://zenodo.org/record/7845707).
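For reference, the Welch statistic and its degrees of freedom can be computed directly from two samples; the sketch below uses only the Python standard library and is not the authors' Jamovi workflow:

```python
from statistics import mean, variance

def welch_t(a, b):
    """Welch's t statistic and Welch-Satterthwaite degrees of freedom for
    two independent samples with possibly unequal variances."""
    va, vb = variance(a), variance(b)  # sample variances (n - 1 denominator)
    na, nb = len(a), len(b)
    se2 = va / na + vb / nb            # squared standard error of the difference
    t = (mean(a) - mean(b)) / se2 ** 0.5
    df = se2 ** 2 / ((va / na) ** 2 / (na - 1) + (vb / nb) ** 2 / (nb - 1))
    return t, df
```

The p value would then be obtained from the t distribution with `df` degrees of freedom, e.g. via `scipy.stats.t.sf`.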
### Geolocation data accuracy
#### 5.2.1 Average geolocation data accuracy
As shown in Figure 6, the mean accuracy for the GNSS group is M = 11.0 (SD = 15.3), and M = 33.6 (SD = 24.8) for the RTK group. The value is in meters, meaning the data the GNSS group was exposed to was accurate within an 11-meter radius, whereas the RTK group got data accurate within a 33.6-meter radius. A Welch _t_-test was used. The results show a significant difference between the two groups (t(43.5) = -3.99, p < .001).
#### 5.2.2 Outliers
As shown in Figure 7, the GNSS group trajectories contained M = 7.2 (SD = 7.55) outliers, and those of the RTK group M = 46.8 (SD = 40.1). A Welch _t_-test was used. The results show a significant difference between the two groups (t(27.9) = -5.04, p < .001).
#### 5.2.3 Standard deviation of geolocation data accuracy
As shown in Figure 8, the data participants from the GNSS group were exposed to had a standard deviation of M = 32.0 (SD = 77.7), and that of the RTK group M = 168.3 (SD = 120.1). A Welch _t_-test was used. The results show a significant difference between the two groups (t(44.7) = -4.93, p < .001).
### Usability
The means of each group for all nine scales from the three usability questionnaires are reported in Table 2 along with _t_-test's p values for significance assessment.
#### 5.3.1 SUS
As shown in Figure 9, the mean SUS score for the GNSS group is M = 81.7 (SD = 9.74). The mean SUS score for the RTK group is M = 74.4 (SD = 12). The results show a significant difference between the two groups (t(51) = 2.45, p = 0.018).
#### 5.3.2 HARUS
On the _manipulability_ scale (indicating ease of handling the AR system), the mean score for the GNSS group is M = 76.7 (SD = 13) and that of the RTK group is M = 68.1 (SD = 16.1), as shown in Figure 10. The results show a significant difference between the two groups (t(51) = 2.13, p = 0.038). On the _comprehensibility_ scale (indicating ease of understanding information presented in the AR interface), the mean score for the GNSS group is M = 78.3 (SD = 11.3), whereas that of the RTK group is M = 74.9 (SD = 12.9). The results _do not_ show any significant difference between the two groups (t(51) = 1.01, p = 0.318).
| Scale | GNSS | RTK | _t_-test |
|---|---|---|---|
| SUS | M = 81.7, SD = 9.74 | M = 74.4, SD = 12.0 | t(51) = 2.45, p = 0.018 |
| HARUS (manipulability) | M = 76.7, SD = 13.0 | M = 68.1, SD = 16.1 | t(51) = 2.13, p = 0.038 |
| HARUS (comprehensibility) | M = 78.3, SD = 11.3 | M = 74.9, SD = 12.9 | t(51) = 1.01, p = 0.318 |
| UEQ (attractiveness) | M = 1.72, SD = 0.70 | M = 1.10, SD = 0.98 | t(51) = 2.65, p = 0.011 |
| UEQ (perspicuity) | M = 2.02, SD = 0.64 | M = 1.45, SD = 0.92 | t(46.7) = 2.61, p = 0.012 |
| UEQ (efficiency) | M = 1.24, SD = 0.85 | M = 0.85, SD = 0.94 | t(51) = 1.58, p = 0.121 |
| UEQ (dependability) | M = 1.17, SD = 0.68 | M = 1.02, SD = 0.62 | t(51) = 0.87, p = 0.390 |
| UEQ (stimulation) | M = 1.84, SD = 0.84 | M = 1.31, SD = 1.11 | t(51) = 1.93, p = 0.059 |
| UEQ (novelty) | M = 1.80, SD = 0.85 | M = 1.21, SD = 0.89 | t(51) = 2.45, p = 0.018 |

Table 2: Usability results by group and _t_-tests.
Figure 8: Standard deviation geolocation data accuracy by group.
Figure 6: Geolocation data accuracy by group.
Figure 7: Amount of outliers by group.
#### 5.3.3 UEQ
As shown in Figure 11, on the _attractiveness_ scale, the mean score for the GNSS group is M = 1.72 (SD = 0.7) and that of the RTK group is M = 1.1 (SD = 0.98). The results show a significant difference (t(51) = 2.65, p = 0.011). On the _perspicuity_ scale, the mean score for the GNSS group is 2.02 (SD = 0.64) and that of the RTK group is 1.45 (SD = 0.92). A Welch _t_-test was used. The results show a significant difference between the two groups (t(46.7) = 2.61, p = 0.012). On the _efficiency_ scale, the mean score for the GNSS group is 1.24 (SD = 0.85) and that of the RTK group is 0.85 (SD = 0.94). The results _do not_ show any significant difference (t(51) = 1.58, p = 0.121). On the _dependability_ scale, the mean score for the GNSS group is 1.17 (SD = 0.68) and that of the RTK group is 1.02 (SD = 0.62). The results _do not_ show any significant difference (t(51) = 0.87, p = 0.39). On the _stimulation_ scale, the mean score for the GNSS group is 1.84 (SD = 0.84) and that of the RTK group is 1.31 (SD = 1.11). The results _do not_ show any significant difference (t(51) = 1.93, p = 0.059). On the _novelty_ scale, the mean score for the GNSS group is 1.8 (SD = 0.85) and that of the RTK group is 1.21 (SD = 0.89). The results show a significant difference (t(51) = 2.45, p = 0.018).
### Exploration
#### 5.4.1 Distance walked
As shown in Figure 12, the GNSS group walked an average distance of M = 586.15 (SD = 96.24) meters, whereas the RTK group walked an average distance of M = 525.94 (SD = 71.9) meters. The results show a significant difference (t(51) = 2.59, p = 0.013).
#### 5.4.2 POIs visited
The GNSS group visited an average of M = 21.09 (SD = 4.02) POIs, whereas the RTK group visited an average of M = 19.29 (SD = 5.87). The results _do not_ show any significant difference (t(51) = 1.30, p = 0.199).
#### 5.4.3 Map opened
The GNSS group opened the 2D map M = 2.83 (SD = 2.24) times on average, whereas the RTK group opened it M = 1.91 (SD = 2.41) times. The results _do not_ show any significant difference (t(51) = 1.44, p = 0.157).
### Focus
The GNSS group spent an average of M = 73.3% (SD = 9.81) of the time looking at the tablet screen, and the RTK group an average of M = 69.2% (SD = 12.4). The results _do not_ show any significant difference (t(51) = 1.16, p = 0.251).
## 6 Conclusions
The purpose of the study was to assess the impact of geolocation data on the usability of our location-based AR system. To test our hypotheses, we exposed the participants to different geolocation data sources with significantly different accuracies. While we expected RTK data to be more accurate and that it
Figure 11: UEQ scores by group.
Figure 12: Distance walked by group.
Figure 10: HARUS scores by group.
Figure 9: SUS scores by group.
would enable us to improve usability, the analysis shows that it was significantly less accurate and less continuous than the GNSS data. This appears to be due to the fact that the embedded GNSS sensor applies filters that preprocess the data and remove most outliers. In contrast, RTK data purposefully remains "raw", which is valuable for an advanced user. RTK data is very accurate when used on an isolated basis (i.e. at 2D map scale), but not particularly suitable for real-time continuous usage (where location is measured several times per second) at a 1:1, three-dimensional scale, at least without filters applied to it. Despite this contingency, both the quality and continuity of the geolocation data the two groups were exposed to were significantly different, which is the essential premise for testing our hypothesis and addressing our research questions. Regarding our main research question, results reveal that the GNSS group, who used the AR application in combination with more accurate and continuous data, reported higher scores on all usability scales, of which five out of nine were statistically significant. This supports our initial hypothesis that poor data accuracy negatively impacts the usability of a location-based AR system. Future studies should however investigate whether RTK data with proper outlier processing may actually improve usability. Our results further highlight that the GNSS group walked more than the RTK group, revealing that the accuracy of geolocation data was partially related to exploration, at least for the quantity indicator. However, due to the manual removal of the outliers (which were significantly more frequent in the RTK group) from the trajectories, the data could be biased. It would be necessary to record a trajectory with both modalities, remove the outliers, and verify that the measurements do not differ significantly to ensure that there is no bias.
The comparison on the exploration diversity indicator (number of POIs visited) showed no significant difference. Additionally, although the difference was not significant, the GNSS group opened the 2D map more often than the RTK group on average, suggesting the RTK group may have had more ease exploring. Our results further highlight that there was no significant difference between the ratios of time participants from each group spent interacting with the tablet screen, which indicates no particular relation between the accuracy of geolocation data and focus.
Although the two experiments cannot be properly compared, because the tests took place 5 years apart under different conditions, we note that participants spent 69.2%-73.3% of the time looking at the tablet screen, which seems to be meaningful longitudinal progress from the measurement made on our 2017 proof-of-concept, where participants interacted with the screen 88.5% of the time ([PERSON] et al., 2018). While we are not aware of a method to determine the ideal proportion, this measure remains an interesting indicator of the importance of the tablet in this type of activity. In a wide review of mobile learning projects, technology was found to dominate the experience in a problematic way in 70% (28/38) of the cases ([PERSON] et al., 2006). While using RTK data did not allow us to positively impact the usability of our system, our study nevertheless demonstrated the impact of varying geolocation data accuracy on usability and exploration. The immediate benefit of performing this comparative study is that it lets us define the most suitable conditions of use before offering our system to a young audience, and ensure an adequate overall level of usability. The overall score reported by the GNSS group allows us to qualify the application's usability as "excellent" according to the SUS adjective scale (Bangor, 2009).
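The SUS scores discussed above follow Brooke's (1996) standard scoring procedure: odd-numbered (positively worded) items contribute their score minus 1, even-numbered (negatively worded) items contribute 5 minus their score, and the sum is scaled by 2.5 to a 0-100 range. A minimal sketch (the response vectors are invented for illustration, not participant data):

```python
def sus_score(responses):
    """Compute the System Usability Scale score (0-100) from ten
    Likert responses (1-5). Odd-numbered items are positively worded,
    even-numbered items negatively worded (Brooke, 1996)."""
    assert len(responses) == 10 and all(1 <= r <= 5 for r in responses)
    total = sum((r - 1) if i % 2 == 0 else (5 - r)
                for i, r in enumerate(responses))
    return total * 2.5

# Invented example: a fairly positive set of answers.
print(sus_score([5, 2, 4, 1, 5, 2, 4, 2, 4, 1]))  # 85.0
```

On the adjective scale cited above (Bangor, 2009), a score in this range falls in the "excellent" band.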
## 7 Acknowledgements
The authors thank [PERSON] for his help with the organization of the tests and the eye tracking data collection. Study participation was voluntary, and written informed consent to publish this paper was obtained from all participants involved in the study. Participants were informed that they could withdraw from the study at any point. The data presented in this study is openly available on Zenodo at https://zenodo.org/record/7845707. This research was funded by the Swiss National Science Foundation (SNSF) as part of the NRP 77 "Digital Transformation" (project number 407740_187313) and by the University of Applied Sciences and Arts Western Switzerland (HES-SO): Programme strategique "Transition numerique et enjeux societaux". The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.
## References
* [PERSON] et al. (2011) [PERSON], [PERSON], [PERSON], [PERSON] t., 2011. The concept of flow in collaborative game-based learning. _Computers in Human Behavior_, 27(3), 1185-1194. doi.org/10.1016/j.chb.2010.12.013.
* [PERSON] et al. (2022) [PERSON], [PERSON], [PERSON], 2022. A Review of Extended Reality (XR) Technologies in the Future of Human Education: Current Trend and Future Opportunity. _Journal of Human Centered Technology_, 1(2), 81-96. doi.org/10.11113/humentech.v1n2.27.
* [PERSON] et al. (2021) [PERSON], [PERSON], [PERSON], [PERSON], 2021. Mobile Augmented Reality and Outdoor Education. _Built Environment_, 47(2), 223-242. doi.org/10.2148/benv.47.2.223.
* [PERSON] et al. (2014) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], 2014. Augmented Reality Trends in Education: A Systematic Review of Research and Applications. _Journal of Educational Technology & Society_, 17(4), 133-149. jstor.org/stable/jeductechosci.17.4.133.
* Bangor et al. (2009) Bangor, A. et al., 2009. Determining What Individual SUS Scores Mean: Adding an Adjective Rating Scale. _JUX - The Journal of User Experience_, 4(3), 114-123. uxpajournal.org/determining-what-individual-sus-scores-mean-adding-an-adjective-rating-scale/.
* [PERSON] et al. (2010) [PERSON], [PERSON], [PERSON], [PERSON], 2010. Promoting the Use of Outdoor Learning Spaces by K-12 Inspective Science Teachers Through an Outdoor Professional Development Experience. [PERSON], [PERSON], [PERSON] (eds), _The Inclusion of Environmental Education in Science Teacher Education_, Springer Netherlands, Dordrecht, 97-110.
* [PERSON] and [PERSON] (2013) [PERSON], [PERSON] [PERSON], 2013. A mixed methods assessment of students' flow experiences during a mobile augmented reality science game. _Journal of Computer Assisted Learning_, 29(6), 505-517. doi.org/10.1111/jcal.12008.
* [PERSON] (1996) [PERSON], [PERSON], 1996. SUS: A 'Quick and Dirty' Usability Scale. _Usability Evaluation In Industry_, CRC Press.
* [PERSON] et al. (2014) [PERSON], [PERSON], [PERSON], [PERSON], 2014. An Augmented Reality-based Mobile Learning System to Improve Students' Learning Achievements and Motivations in Natural Science Inquiry Activities. _Journal of Educational Technology & Society_, 17(4), 352-365. jstor.org/stable/jeductechosci.17.4.352.
* [PERSON] et al. (2015) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON] [PERSON], 2015. Preparing augmented reality learning content should be easy: UNED ARLE-an authoring tool for augmented reality learning environments. _Computer Applications in Engineering Education_, 23(5), 778-789. doi.org/10.1002/cae.21650.
* [PERSON] et al. (2018) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], 2018. Enhancing cultural tourism by a mixed reality application for outdoor navigation and information browsing using immersive devices. _IOP Conference Series: Materials Science and Engineering_, 364, 012048. doi.org/10.1088/1757-899X/364/1/012048.
* [PERSON] et al. (2020) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], 2020. An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale. doi.org/10.48550/ARXIV.2010.11929.
* [PERSON] et al. (2009) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], 2009. Affordances and Limitations of Immersive Participatory Augmented Reality Simulations for Teaching and Learning. _Journal of Science Education and Technology_, 18(1), 7-22. doi.org/10.1007/s10956-008-9119-1.
* [PERSON] (2020) [PERSON], 2020. _Augmented reality in education: a new technology for teaching and learning_. Springer International Publishing.
* [PERSON] et al. (2006) [PERSON], [PERSON], [PERSON], 2006. _The Focus Problem in Mobile Learning_. IEEE, Athens.
* [PERSON] et al. (2018) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], 2018. _Augmented reality technologies for biodiversity education--a case study_. 12-15 June 2018.
* [PERSON] et al. (2008) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], 2008. _Construction and Evaluation of a User Experience Questionnaire_. Lecture Notes in Computer Science, Springer, Berlin, Heidelberg.
* [PERSON] et al. (2012) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], 2012. _CityWeAR: A mobile outdoor AR application for city visualization_.
* [PERSON] et al. (2023) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], 2023. BiodivAR: A Cartographic Authoring Tool for the Visualization of Geolocated Media in Augmented Reality. _ISPRS International Journal of Geo-Information_, 12(2), 61. doi.org/10.3390/ijgi12020061.
* [PERSON] et al. (2011) [PERSON], [PERSON], [PERSON], [PERSON], 2011. Research Note: The Results of Formatively Evaluating an Augmented Reality Curriculum Based on Modified Design Principles. _International Journal of Gaming and Computer-Mediated Simulations (IJGCMS)_, 3(2), 57-66. doi.org/10.4018/jgems.2011040104.
* [PERSON] et al. (2017) [PERSON], [PERSON], [PERSON], 2017. An adoption framework for mobile augmented reality games: The case of Pokemon Go. _Computers in Human Behavior_, 76, 276-286. doi.org/10.1016/j.chb.2017.07.030.
* [PERSON] and [PERSON] (2013) [PERSON], [PERSON], 2013. Off the paved paths: Exploring nature with a mobile augmented reality learning tool. _Journal of Mobile Human Computer Interaction_, 5(2), 21-49. doi.org/10.4018/jmhci.2013040102.
* [PERSON] et al. (2014) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], 2014. _A usability scale for handheld augmented reality_. VRST '14, Association for Computing Machinery, New York, NY, USA.
* [PERSON] (2015) [PERSON], 2015. _User Experience Questionnaire Handbook_.
* The jamovi project (2022) The jamovi project, 2022. jamovi Software, Version 2.3. jamovi.org.
isprs | IMPACT OF GEOLOCATION DATA ON AUGMENTED REALITY USABILITY: A COMPARATIVE USER TEST | J. Mercier, N. Chabloz, G. Dozot, C. Audrin, O. Ertz, E. Bocher, D. Rappo | https://doi.org/10.5194/isprs-archives-xlviii-4-w7-2023-133-2023 | 2023 | CC-BY
Application of Surface Deformation Monitoring in Mining Area by the Fusion of Insar and Laser Scan Data
[PERSON]
Corresponding author
[PERSON]
Corresponding author
[PERSON]
Corresponding author School of Environment Science and Spatial Informatics, China University of Mining and Technology, Xuzhou 221116, China [EMAIL_ADDRESS]
[PERSON]
School of Environment Science and Spatial Informatics, China University of Mining and Technology, Xuzhou 221116, China [EMAIL_ADDRESS]
###### Abstract
Differential Synthetic Aperture Radar Interferometry (D-InSAR), as a new earth observation technique, has become an important tool for monitoring ground movements caused by underground coal mining. However, the low resolution and accuracy of the Digital Elevation Model (DEM) increase the error of InSAR line-of-sight (LOS) surface deformation measurements. In this paper, a pair of Radarsat-2 images and a pair of TerraSAR-X images are processed with SRTM, GDEM, and LiDAR DEMs respectively to reveal the subsidence basin, and the results are compared with each other. The comparison illustrates that the accuracy of D-InSAR results is improved by a DEM of higher accuracy and resolution.
D-InSAR, laser scan data, SRTM, GDEM, deformation, data fusion
Footnote †: Corresponding author
## 1 Introduction
Ground surface movements commonly cause disturbance and damage to buildings and the environment around subsidence areas. Knowledge and prediction of the temporal and spatial evolution of the movements are essential to delineate the most affected areas, to understand the mechanisms involved, and to establish countermeasures to prevent damage. Underground mining activities always cause subsidence of the ground surface due to the advance of the excavation fronts and the progressive closure or collapse of the mineral extraction galleries. The magnitude of the displacements depends on different parameters, such as the depth of the mining galleries and the time elapsed since the onset and/or the abandonment of the excavation ([PERSON], 1995; [PERSON], 1995). The evolution of the subsidence of a point has been described by many authors through total stations, levels, global positioning systems, etc.
In recent years, a new earth observation technique named Differential Synthetic Aperture Radar Interferometry (D-InSAR) has become an important tool for monitoring temporal and spatial ground movements. This method has clear advantages over classical monitoring methods, the first being its high spatial coverage: classical techniques measure ground displacements at a few discrete points, while D-InSAR provides a more complete pattern of the displacement field with measurements over a wide area. Another advantage of the technique is the existence of a historical database of SAR images, started more than several decades ago, which enables the study of past situations ([PERSON] 2013). D-InSAR is a technique that uses the phase difference between two SAR images acquired before and after the event with different look angles, together with a topographic signal simulated from a DEM as a correction, to reveal the surface subsidence between the acquisition times of the two images. These techniques have been successfully applied to detect and measure ground subsidence in areas subjected to underground mining exploitation ([PERSON] et al., 2008; [PERSON] et al., 2009). Traditionally, the 90 m resolution SRTM (Shuttle Radar Topography Mission) DEM is used for this correction. The interferometric phase can be decomposed as

\[\varphi = \varphi_{topo} + \varphi_{disp} + \varphi_{atmo} + \varphi_{noise} \tag{1}\]

where \(\varphi_{topo}\) = the topographic phase term, \(\varphi_{disp}\) = the phase term due to surface displacement, \(\varphi_{atmo}\) = the atmospheric phase term, and \(\varphi_{noise}\) = the noise term due to variability in scattering from the pixel, thermal noise, and coregistration errors.

How to obtain \(\varphi_{disp}\) is the problem to be resolved by the D-InSAR technique, and \(\varphi_{topo}\) is the main factor to be removed.
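The differential step described above, removing the DEM-derived topographic phase and converting the residual phase to LOS displacement, can be sketched as follows. This is a minimal illustration, not the ENVI/Sarscape pipeline used in the paper; the wavelength value, toy phase grids, and sign convention are our assumptions:

```python
import numpy as np

WAVELENGTH_C_BAND = 0.0555  # approx. Radarsat-2 C-band wavelength, meters

def los_displacement(phi_int, phi_topo_sim, wavelength=WAVELENGTH_C_BAND):
    """Subtract the DEM-simulated topographic phase from the
    interferometric phase, wrap the residual to (-pi, pi], and convert
    it to line-of-sight displacement in meters. The 4*pi factor comes
    from the two-way travel path of the radar signal."""
    phi_disp = np.angle(np.exp(1j * (phi_int - phi_topo_sim)))
    return -wavelength / (4 * np.pi) * phi_disp

# Toy 2x2 phase grids (radians); values invented for illustration.
phi_int = np.array([[0.8, 0.2], [0.1, 2.5]])
phi_topo = np.array([[0.5, 0.2], [0.1, 0.3]])
d = los_displacement(phi_int, phi_topo)  # meters, per pixel
```

Where the residual phase is zero (topography fully explains the interferogram), the derived displacement is zero; any remaining phase maps linearly to millimeter-scale LOS motion.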
### SRTM data
The NASA Shuttle Radar Topography Mission (SRTM) has provided digital elevation data (DEMs) for over 80% of the globe ([PERSON], [PERSON], 2008). The SRTM digital elevation data is a major breakthrough in digital mapping of the world and provides a major advance in the accessibility of high-quality elevation data for large portions of the tropics and other areas of the developing world. The SRTM data is available as 3 arc-second (approx. 90 m resolution) DEMs. A 1 arc-second data product was also produced, but is not available for all countries. The vertical error of the DEMs is reported to be less than 16 m. As shown in Figure 3(a), the SRTM has been resampled to 3 meters per pixel.
### GDEM data
On June 29, 2009, NASA and the Ministry of Economy, Trade, and Industry (METI) of Japan released a Global Digital Elevation Model (GDEM) to users worldwide at no charge as a contribution to the Global Earth Observing System of Systems (Aster, 2009). This GDEM was found to have an overall accuracy of around 20 m at the 95 percent (%) confidence level. NASA and METI released a second version of the ASTER GDEM (GDEM 2) in mid-October 2011. GDEM 2 is a 1 arc-second elevation grid divided and distributed as 1\({}^{\circ}\) x 1\({}^{\circ}\) tiles; it has an overall accuracy of around 17 m at the 95% confidence level and a horizontal resolution on the order of 75 m. The GDEM over the study area, which has been resampled to 3 meters per pixel, is shown in Figure 3(b).
### LiDAR data
Light Detection and Ranging (LiDAR) Digital Elevation Model (DEM) data over the caving face is mosaicked from 13 filtered point-cloud images acquired with a Leica ScanStation C10. The vertical accuracy is about 0.2 m and the horizontal accuracy 0.5 m. To match the resolution of TerraSAR-X and Radarsat-2, the LiDAR DEM is resampled to about 3 meters by 3 meters per pixel. The LiDAR DEM is shown in Figure 3(c).
### SAR data
Two TerraSAR-X images (X-band) acquired on April 18, 2015 and April 29, 2015, with an 11-day interval, and two Radarsat-2 images (C-band) acquired on April 4, 2015 and April 28, 2015, with a 24-day interval, are used in this study. The baseline of the TerraSAR-X pair, whose resolution is 3.3 meters in azimuth and 2.6 meters in range, is about 53 meters; the baseline of the Radarsat-2 pair, whose resolution is 2.9 meters in azimuth and 2.6 meters in range, is 98 meters.
## 3 Results and Discussion
The pair of Radarsat-2 images with a 24-day time interval, acquired on April 4, 2015 and April 28, 2015, and the pair of TerraSAR-X images with an 11-day time interval, acquired on April 18, 2015 and April 29, 2015, are processed with the different DEMs (SRTM, GDEM, and LiDAR DEM) using the Sarscape module of the Environment for Visualizing Images (ENVI) software. The processing chain is shown in Figure 1 and the results in Figure 2.
The results generated from the Radarsat-2 pair (24-day interval) with the different DEMs are shown in Figure 2(a), (b), (c). The maximum surface subsidence is about -0.06 meter in Figure 2(a), -0.061 meter in Figure 2(b), and -0.066 meter in Figure 2(c). Figure 2(d), (e), (f) show the results of the TerraSAR-X pair (11-day interval) with the different DEMs. The maximum surface subsidence is about -0.043 meter in Figure 2(d), about -0.043 meter in Figure 2(e), and -0.044 meter in Figure 2(f). The subsidence basin caused by coal mining is clearly visible in the top right corner of Figure 2, whether generated from Radarsat-2 or TerraSAR-X.
Comparing Figure 2(a), (b), and (c), the error of the InSAR line-of-sight (LOS) surface deformation measurement is smallest in Figure 2(c). This is because the LiDAR DEM is more accurate than SRTM and GDEM. In Figure 3, it is clear that the maximum elevation of GDEM is higher than that of SRTM and the LiDAR DEM; the quality of GDEM in this study area may be the worst of the three. From the results generated from Radarsat-2 and TerraSAR-X, it is found that the uplift areas are larger in TerraSAR-X than in Radarsat-2, because the noise caused by surface vegetation is more pronounced in X-band than in C-band.
Figure 1: Processing of D-InSAR in Sarscape module
As mentioned in [PERSON] ([PERSON], X., 2012), the accuracy of the InSAR line-of-sight (LOS) surface deformation measurement depends on the accuracy of the DEM. The error \(\Delta r\) is defined by

\[\Delta r = \frac{B_{\perp}}{\rho\sin\theta}\,\partial H \tag{2}\]

where \(\Delta r\) = the error of the InSAR line-of-sight (LOS) surface deformation measurement

\(B_{\perp}\) = the perpendicular baseline

\(\rho\) = the range from the antenna on the satellite to the target on the earth surface

\(\theta\) = the incidence angle
Figure 3: Different DEM figures with resolution of 3 meters (a) SRTM (b) GDEM (c) LiDAR DEM
Figure 2: Displacement figures of Radarast-2 and TerraSAR with different DEM
\(\partial H\) = the elevation error of the DEM
In this study, the relations between \(\partial H\) and \(\Delta r\) for TerraSAR-X, with a baseline of about 53 meters, and Radarsat-2, with a baseline of about 98 meters, are shown in Figure 4(a) and Figure 4(b) respectively.
The error of the InSAR line-of-sight (LOS) surface deformation measurement increases with the elevation error of the DEM. Taking Radarsat-2 as an example, when the DEM error is about 10 meters, the LOS error is about 1.5 millimeters with the baseline of about 98 meters in this study. In Figure 2, the LOS error caused by the DEM error is more serious for Radarsat-2 than for TerraSAR-X, because the baseline of Radarsat-2 is 98 meters while that of TerraSAR-X is only 53 meters, and the LOS error is more sensitive to a long baseline than to a short one.
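The ~1.5 mm figure quoted above can be reproduced directly from Eq. (2). The slant range and incidence angle below are illustrative order-of-magnitude assumptions for a spaceborne C-band sensor, not values reported in the paper:

```python
import math

def los_error(b_perp, slant_range, incidence_deg, dem_error):
    """Eq. (2): LOS deformation error induced by a DEM elevation error."""
    return b_perp / (slant_range * math.sin(math.radians(incidence_deg))) * dem_error

# Assumed geometry: slant range ~900 km, incidence ~45 deg (illustrative).
dr = los_error(b_perp=98.0, slant_range=900e3, incidence_deg=45.0, dem_error=10.0)
print(f"{dr * 1000:.2f} mm")  # prints "1.54 mm", close to the ~1.5 mm quoted above
```

Since \(\Delta r\) is linear in \(B_{\perp}\), halving the baseline (e.g., the ~53 m TerraSAR-X pair) roughly halves the DEM-induced LOS error, consistent with the discussion above.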
## Acknowledgements
The authors would like to acknowledge Mr [PERSON], the third surveying and mapping institute, Hebei Bureau of Geoinformation, for providing the laser scan data. This work was supported by the "Geographical Conditions of Service Oriented the Surface Subsidence Monitoring caused by Resources Exploitation Program (No. 201412016)" from the National Administration of Surveying, Mapping and Geoinformation, China.
## References
* [PERSON] (1995) [PERSON], 1995. Effects of Mining Subsidence Observed by Time-lapse Seismic Reflection Profiling. University of Durham.
* [PERSON] (2009) [PERSON], 2009.G. D. E. M. Validation Team. ASTER Global DEM.
* [PERSON] (1995) [PERSON], 1995. Subsidence studies in Indian coalfields by a semi-empirical approach. Proceedings of the Fifth International Symposium on Land Subsidence, The Hague, pp. 127-133.
* [PERSON] (2013) [PERSON], 2013.Large-scale deformation monitoring in mining area by D-InSAR and 3D laser scanning technology integration. International Journal of Mining Science and Technology, 23(4), 555-561.
* [PERSON] (2011) [PERSON], 2011. Land subsidence monitoring by D-InSAR technique, Mining Science and Technology (China), 21(6), 869-872.
* [PERSON] (2009) [PERSON], 2009. Monitoring residual mining subsidence of the Nord/Pas-de-Calais coal basin from differential and Persistent Scatterer Interferometry (Northern France). J. Appl. Geophys. 69(1), 24-34.
* [PERSON] (2012) [PERSON], 2012. Earth observation data processing method by InSAR and the comprehensive measurement. Science Press, China.
* [PERSON] (2008) [PERSON], 2008. Hole-filled SRTM for the globe, Version 4. Available from the CGIAR-CSI SRTM 90m Database (http://srtm.csi.cgiar.org).
* [PERSON] (2008) [PERSON], 2008.Application of DINSAR and GIS for underground mine subsidence monitoring. Int. Arch Photogramm Remote Sens. Spot. Inf. Sci. 37, 251-256.
* [PERSON] (2014) [PERSON], 2014.Analysis of the evolution of ground movements in a low densely urban area by means of D-InSAR technique. Engineering Geology, 170, 52-65.
Figure 4: The relation betweenthe accuracy of InSAR line-of-sight(LOS) surface deformation measurement andthe error value of geography for (a) TerraSAR and (b) Radarsat-2.
isprs | Application of Surface Deformation Monitoring in Mining Area by the Fusion of InSAR and Laser Scan Data | J. L. Huang, K. Z. Deng, H. D. Fan, J. K. Yang | https://doi.org/10.5194/isprsarchives-xl-7-w4-41-2015 | 2015 | CC-BY
|
# Typology of Historical Houses in Muzzaffarid Era: Case Study of Ardakan City, Yazd, Iran
[PERSON]
Corresponding author
###### Abstract
The Mozaffarids established the [PERSON] dynasty in Yazd, Iran. This era witnessed a development in the architectural and decorative features of Yazd buildings. Ardakan, in particular, enjoyed a period of prosperity in the 14th century, which led to a flourishing growth of architectural production. The present article uses a descriptive-analytical and historical-comparative method to investigate the typology of 12 historical houses of Ardakan city in the Muzzaffarid era. Based on a literature review and field studies, four of these houses have been studied in detail in terms of architectural and decorative features and construction methods. The results of the study show that Mozaffarid houses in Ardakan have definite and distinguishable patterns and follow a general rule. A main Iwan as an outstanding feature of Mozaffarid houses, as well as a central courtyard and a Soffeh in front of the Iwan, are repeated in all houses, and the other parts are formed around them. The change in the location of the main Iwan to the northern or southern part of the central courtyard, and the presence or absence of a garden, produce significant differences in the organization and quality of the spaces. Mozaffarid houses in Ardakan can be described as two main types, each of which can be divided into two subcategories based on the Iwan position. Knowledge of the typological characteristics of this historical architecture needs to be gathered to preserve the built heritage, and a comprehensive document is essential for the preservation and conservation of the houses.
Footnote †: Corresponding author
## 1 Introduction
Historical vernacular housing has always been designed with respect to nature, incorporating and reflecting the local lifestyle and cultural conditions as well as being a direct expression of the state of construction know-how, the availability of local construction materials, and the local climate ([PERSON], 1976). Today, historical settlements and their rather homogeneous housing typologies can still be found and studied in the preserved contexts and buildings of Iran. Iran is a country rich in vernacular architecture. Despite the losses due to frequent earthquakes and large-scale planning projects, historical towns still contain thousands of houses. Until recently, there have been few attempts to record Iranian vernacular buildings, and even fewer to analyze or explain their architecture. The houses predating the [PERSON] era are the least known in comparison to those of other eras. The present study therefore investigates the historical houses of the Muzzaffarid era, one of the most significant eras of Yazd history. Ardakan is one of the oldest cities of Iran, containing many old houses in its historical context. Its historical houses, having remained largely undocumented, are the most important samples representing the lifestyle of the past. In Ardakan County, some elegant samples of Muzzaffarid houses have been identified that need detailed investigation. This article aims to investigate the typology of Ardakan historical houses in the Mozaffarid era. In this regard, the spatial organization of 12 historical houses located in the historical context of Ardakan was studied, four of which are elaborated in terms of architectural and decorative features and construction methods. These buildings have features that are the same in most samples and are unique to the architecture of the era, while there is considerable variation, spatially and physically, from one house to another.
Thus, this paper is supposed to classify the historical houses of Ardakan based on the typological method.
### Muzzaffarids and Architectural Legacy in Yazd
The **[PERSON]** (Al-e Mozaffar) was a Sunni family that came to power in central Iran in the fourteenth century and the family of governors of Yazd under the [PERSON] (1256-1335/1353), who expanded their domain after the collapse of Il-Khanid power and established the [PERSON] dynasty in Yazd, Kerman, Fars, and Eraq-e Ajam. The [PERSON], enduring until its destruction by [PERSON] ([PERSON]) in 795/1393, originated as an Arab family that settled in Khorasan. They stayed in Khorasan until the Mongol invasion of that province, when they fled to Yazd. Serving under the [PERSON], they gained prominence when [PERSON] was made the governor of Maybod ([PERSON], 2014; New World Encyclopedia contributors, 2009). [PERSON] says that the Muzzaffarids "are remembered as cultural patrons" ([PERSON], 2007). The Muzzaffarid era in Yazd has been one of the most important eras in the history of the region, as it was the first time that a dynasty ruled the southern and central parts of Iran for more than half a century ([PERSON], 1993). In this era, many artists and scientists settled in Yazd to avoid the Mongol invasion and to pursue their academic work. Partial security and peace in Yazd and the attention of the Il-Khanids to the Muzzaffarids led to high scientific and artistic interaction between these two governments. Moreover, the gathering of scientists and artists in Yazd, as well as the exchange of architectural techniques and decorations, led to the development of architecture in this era. The special features of the architecture and decoration of buildings in the Muzzaffarid era led to the creation of a local school of thought, or style, of architecture, called the Muzzaffarid school of thought or Yazdi style ([PERSON], p.101). The distinctive feature of the Muzzaffarid style was the use of "large transverse arches" supporting "barrel vaults", such as those added to the mosque at Yazd ([PERSON], 1996).
The Muzzaffarids made a sizeable number of personal and charity buildings, especially in Yazd, Meybod, and Ardakan, some of which still exist. Although the Muzaffarid rulers did not earn the type of fame that makes their names universally known, the dynasty did give its name to a culture and architecture.
## 2 Materials and Methods
### Geographical and Historical area of study
The geographical area of this study is Ardakan County, the second major city of Yazd Province, located on the north side of the province in the middle of the central desert of Iran (Figure 1). The proximity of Ardakan to the central desert of Iran gives desert weather a strong effect on this region; winters are cold with low precipitation and summers are hot and dry. The average annual precipitation is 62.9 mm and the average temperature is 20.2 degrees. Lack of water is one of the most serious limitations in the city.
In the Muzaffarid era, Ardakan was one of the villages of Meybod city ([PERSON], 1966, p. 160), and the 14th century was one of the most decisive periods for Ardakan due to the rule of the Muzaffarids in Meybod, during which construction boomed. Generally, being located in the center of Iran and far from the borders, partial and consecutive security throughout the history of the region, conservative and peaceful rulers, dry and unfavorable weather, and protection from natural disasters such as flood and earthquake have saved the region from destructive events and allowed Ardakan County to preserve some rare samples of the architecture of this era ([PERSON], [PERSON], 2013, p. 105). In Figure 2, the historical urban fabric of Ardakan and the locations of the four houses discussed in the study are shown.
### Methodology
By definition, typification is the action of typifying, i.e., dividing/distinguishing into types. The concept of type refers to the set of properties common to some individuals or objects, recognizing structural similarities between architectural objects ([PERSON] et al., 2013). According to [PERSON] (1999), a type is the organic ensemble of the common characteristics of buildings in a defined culture over a specific period. The methodology used involved quantitative and qualitative analysis of the building typology of Muzaffarid houses in Ardakan. This study aims to understand the location and position of spaces and architectural elements, especially the unique spaces of the Muzaffarid houses, including the Iwan, Soffeh, and garden. To do so, using a descriptive-analytical research method, studying spatial organization diagrams, and drawing on a literature review and field studies, the typology of the 12 houses is established. Four of these houses were thoroughly analyzed according to fundamental spaces, materials, construction techniques, and decorations. The various spatial characteristics are also clarified by the use of graph representation, dimensionless plans, and axial diagrams.
## 3 Results and Discussion
### Mozaffarid Houses in Ardakan
Among the parts of the Muzaffarid cultural heritage that are of high importance for their architectural features are the Muzaffarid houses identified in some historical neighborhoods of Ardakan city. The houses of this era, as the oldest remaining houses in Iran, reveal the construction pattern formed in the 14th century, which continued until the [PERSON] dynasty. The general pattern of the studied houses is repeated despite some differences in the location of spaces. All of these houses are built around a small central courtyard. There is an Iwan on either the southern or the northern side of the central courtyard, and on the opposite side there is a Soffeh. On the east and west sides, there are two small Soffehs and two doorways. One of the doorways connects the entrance corridor to the courtyard and the other is the doorway of a room. In the back part of the main Iwan, there is a Tanabi room or a garden, and on each of its eastern and western sides there are rooms on two stories. Since visual protection is critical for privacy, attention is also given to the patterns of entry and access to and from the central courtyard. In Table 1, the data regarding all identified Muzaffarid houses in Ardakan are displayed. This table contains information about all samples of the investigation. In the diagrams, the main parts, such as semi-open spaces (Soffeh and Iwan), open spaces (the central courtyard and garden), and closed spaces such as service spaces, living spaces, and adjunct spaces, are shown using different colors. The vertical axis represents the northeast-southwest direction.
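The two-type, four-subtype scheme described above (main type by the presence of a garden behind the Iwan, subtype by the Iwan's position on the courtyard) can be written as a simple decision rule. This is a sketch in which the record fields, label names, and example house are our assumptions, not data from Table 1:

```python
from dataclasses import dataclass

@dataclass
class House:
    name: str
    iwan_side: str    # "north" or "south" side of the central courtyard
    has_garden: bool  # garden (vs. Tanabi room) behind the main Iwan

def classify(house):
    """Assign one of four hypothetical subtype labels: main type by
    the garden behind the Iwan, subtype by the Iwan's position."""
    main_type = "1" if house.has_garden else "2"
    subtype = "a" if house.iwan_side == "south" else "b"
    return main_type + subtype

# Hypothetical example, not an actual house from the study:
print(classify(House("Example house", iwan_side="south", has_garden=True)))  # 1a
```

Encoding the typology this way makes the classification of all 12 surveyed houses reproducible from a simple table of attributes.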
### Architectural features (Spatial and Functional Organization)
In this section, the characteristics of the site and the main spaces, as well as seven functional features of the plan, are analyzed and evaluated, as shown in Tables 2 and 3. The most important spaces of the investigated houses include the entrance corridor and Pishgah (entrance hall), the main Iwan, the Soffeh (in front of the Iwan), the space behind the Iwan (Tanabi or Soffeh), the courtyard and garden, and the western and eastern rooms. These houses usually lack a basement.
Figure 1: Location of Yazd Province in Iran; Location of Ardakan city in Yazd Province.
Figure 2: Location of historical urban fabric in Ardakan city; Distribution of studied Muzaffarid houses in historical urban fabric of Ardakan.
**- The Characteristics of the Site:** In all investigated houses, except for the Asari house, the land is rectangular and lies on the north-south axis with a 10 to 12 degree deviation to the east. This orientation is in accordance with Room Raste1, such that the longer side of the land lies on the north-south axis and the shorter on the east-west axis. This orientation is predominant in the historical urban fabric of Ardakan. Besides various climatic factors, the orientation has been affected by the rules of the farm and garden irrigation network. As the lands in Ardakan slope from south to north, Qanats2 flow from south to north and the farmlands follow the same direction ([PERSON], [PERSON], 2007, p. 165).
Footnote 1: In Iranian architecture, Room refers to the direction of the building. Room Raste stands in the northeast-southwest direction.
Footnote 2: A gently sloping underground channel to transport water from an aquifer or water well to surface for irrigation and drinking.
- **Entrance Corridor and Pishgah:** All these houses have a Pishgah. In some cases, however, the Pishgah has been destroyed or has a small area. The Pishgah is often simple, without much decoration. After the Pishgah, there is a corridor providing access to service areas, stables, and the staircase leading to the roof. With one or more 90-degree turns, the corridor connects the Pishgah to the central courtyard. The common point about all these houses is that one can enter the courtyard only from its eastern or western side.
- **Main Iwan:** The Muzaffarid Iwan is the oldest Iwan in Iranian traditional houses to remain firm and stable until now ([PERSON], [PERSON], 2013, p. 203). This Iwan, which is known as the most important and outstanding architectural element of Muzaffarid houses, is taller than the Iwans of later eras and forms a long vertical rectangle. The height of the Iwan is between 7 and 9 meters in the investigated houses, which is 2.2 to 2.8 times larger than its span.
[Tables 2 and 3: analysis of the four case studies (Amin House, Shorkaai House, Pourrahimi House, and Aboutable House).]
The width of the Iwan is the same as that of the central courtyard, and its depth is almost the same as the length of the latter. The Iwan occupies almost the same area as the courtyard. The construction of the Badgir (wind catcher) was not common in the Muzaffarid era and these houses do not benefit from a Badgir. The long narrow Iwan above the small courtyard acts as natural ventilation and transfers the wind into the courtyard. However, in some houses there are Badgirs built in later eras. The Iwan also acts as a distribution space giving access to some of the areas.
- **Soffeh in front of Iwan and the Room behind it:** On the opposite side of the main Iwan, there is a small Soffeh, which is accessed through the door located on the Espar8 of the Soffeh. Behind this Soffeh, there is a long room perpendicular to the courtyard. The height of this side of the house is at the same level as the western and eastern rooms.
- **The space behind the main Iwan:** In the investigated houses, the space behind the main Iwan is often a Tanabi or a Soffeh overlooking a garden on the north or south of the land. In some cases, there is no space behind the main Iwan. The entrance doorway to this area is located on the Espar of the Iwan.
- **The rooms adjacent to the main Iwan:** On the east and west sides of the main Iwan, there are two rooms, accessed through doorways located symmetrically on the two side walls of the Iwan. These rooms are connected to the courtyard through the Iwan, and their lighting is provided by the doorway openings. In houses where behind the main Iwan there is a Soffeh overlooking the garden, the lighting of the rooms adjacent to the Iwan is provided by the rooms adjacent to this Soffeh. On top of these rooms, there are two more rooms on the first floor, which are at the same height as the Iwan. These rooms have a structural role and act as flying buttresses. They are mainly used as food or goods depots and can only be accessed through the stairs in the entrance corridor. The western and eastern spaces of the Iwan, which are symmetrical, are the only two-story part of the house.
- **Yard:** All of the studied houses are arranged around a central private courtyard where family activities occurred. Within the whole area of the house, only a small share, almost 3 to 6 percent, is allocated to the central courtyard ([PERSON], 2007, p. 170). The central courtyard acts as the heart of the traditional dwelling and connects all spaces, whether closed, open, or semi-open, to each other. None of the houses has a pond or flowerbed in the central courtyard. All spaces of the building are accessed by a 20-centimeter step above the level of the central courtyard. The main concept behind designing the central courtyard house was to generate an inward-looking plan with plain external walls, designed to discourage strangers from looking inside the house as well as to protect the house from the harsh climate of the region ([PERSON] et al., 2006; [PERSON], 2006).
In some houses of this era, in addition to the central courtyard, there is a garden behind the main Iwan that plays an important role in ventilating the house. In all these houses, the garden has palm trees with non-original ponds. This yard is located along the Iwan, the courtyard, and the Soffeh, and has emphasized the north-south axis of houses in Ardakan.
### Construction Method (Material and technique)
This section presents a review of the construction systems and materials (Table 4). The materials used in Ardakan's Muzaffarid houses are totally in harmony with the hot and arid climate. All windows and doors have been built of wood. The main building material of all houses is adobe, and they are constructed with load-bearing walls. In addition to their load-bearing role, the thick walls act as a thermal mass, absorbing solar energy during the day and releasing it during the night to balance the temperature. In the architecture of Muzaffarid houses in Ardakan, vaulted structural systems have been common. Arched ceilings in a variety of shapes (vaults, arches, Tavizeh9) have often been used, and were rarely decorated with patterns. Karbandi10 was also used to transfer the area of a dome in one house (Aboutable House). The flat roof has sometimes been used in some parts, such as in a barn (the room adjacent to the Iwan on the first floor, [PERSON] and [PERSON] house).
Footnote 9: The bearing strips of the arched ceilings that transfer the compressive loads to the side walls.
Footnote 10: Or ribbed vault, consisting of arches arranged by geometric rules that intersect under the original cover.
### Decorative Features
In the Muzaffarid era, house building flourished among the well-known citizens, whose houses were highly decorated. Muzaffarid houses in Yazd have been decorated in different ways; however, mud decorations have been of particular importance in these houses. Some examples are mud wall sculptures, mud muqarnas12, and shamseh13. Mud decorations have special delicacy and elegance, so they are used only in rare cases. The abundance of these decorations has not been the same across Yazd
Table 4: Construction Method of 4 case studies of Muzaffarid Houses. Reference: Author.
province, and no evidence of mud decorations has been found in the Muzaffarid houses of Ardakan County. Different methods have been used to decorate Muzaffarid houses in Ardakan, and they can be observed in most of the recognized houses (Table 5). One group of these methods is simple and basic, including decorative strip frames under the arch's springing line, the use of Kalli14 arches, and decorative Taghaman15 for vaults. The simplicity of implementation, possible with low cost and accessible tools, can be mentioned as the reason for the prevalence of these methods. The other group is not as common as the first, although it is seen in some cases. Among the investigated cases, gypsum decorations (such as lattice windows and decorative frames), wooden lattice windows, and karbandi can be placed in this category.
Footnote 14: A low-height Iranian arch, a combination of Muzchdar and Tiezhdar arch.
Footnote 15: False arch, having the appearance of an arch though not of arch construction.
### Building type classification
According to the analysis of the Muzaffarid houses in Ardakan, it can be seen that the architecture and organization of spaces in these houses follow a general pattern. All of these houses consist of spaces including a Pishgah and entrance corridor, a main Iwan, a Soffeh in front of the Iwan, rooms on the eastern and western sides, and a central courtyard. Historical architectural evidence also shows that the triple combination of the main Iwan, the central courtyard, and the Soffeh in front of the Iwan along the north-south axis is repeated in all Muzaffarid houses of Ardakan. The other spaces of the house have been formed around them. The common architectural pattern of these houses is introverted, owing to the central courtyard. Some houses also have a garden at the back of the main Iwan; thus, they have a semi-introverted, semi-extroverted pattern. Other general characteristics of these houses include the elongation of the building in the Room Raste direction, a building mass in the four geographical directions, the two-story part on the west and east sides of the Iwan, a courtyard level close to the public passage level, multiple 90-degree turns in the entrance corridor for security and privacy, a rectangular courtyard and rooms, the use of the three types of open, semi-open, and closed spaces, the installation of windows and openings facing the central courtyard, the absence of ponds and plants in the courtyard, and the use of Tiezhdar16 and, in some cases, Mazehdar17 arches. In addition, the use of local materials and the conformity of architecture and structure with the climate are also significant in the houses of this period.
Footnote 16: An Iranian Pattern, Special technique in the material arrangement.
As the typology defines the most fundamental differences, the spatial types of the studied houses can be distinguished based on the location of the Iwan on the northern or southern side of the central courtyard and the presence or absence of a garden behind the Iwan. Therefore, Muzaffarid houses in Ardakan can be classified into two main types, each of which can be divided into two subcategories according to the location of the Iwan (Table 6).
**The first type** has both a central courtyard and a garden, as well as a Soffeh behind the Iwan which faces the garden. This type benefits from a Soffeh with natural ventilation due to the extensive vegetation in the garden, and from rooms in the western and eastern parts of the Soffeh overlooking the garden with natural lighting, and thus better spatial quality in the spaces adjacent to the garden. This type involves two subcategories: in the first subcategory, the Iwan is in the south of the central courtyard, the garden in the south of the land, and the Soffeh, which faces the garden, is in the south of the Iwan and the north of the garden. In the second subcategory, the Iwan is located in the north of the central courtyard, the garden is in the north of the land, and the Soffeh is in the north of the Iwan and the south of the garden. In the **second type**, there is no garden, and a Tanabi room or an ordinary room is often located behind the Iwan; however, sometimes there is no room at all. This type has a very compact plan with minimal natural lighting and ventilation through the small central courtyard.
Table 6 summarizes the research findings. The first and second rows indicate the position of the main Iwan and the garden as fundamental differences. The other rows are spaces present in all samples, but due to different positioning, they have caused changes in the spatial organization of the plans.
## 4 Conclusion
In this study, it is shown that the Muzaffarid houses in Ardakan have distinct and identifiable patterns that distinguish them from those of other historical eras. The main Iwan is the fundamental space of these houses and is prominent in all patterns. According to the location of the Iwan and the garden, Muzaffarid houses in Ardakan are classified into two types. Due to the existence of the garden, the first type has a larger area and a better spatial quality, as well as more decorations and sometimes more varied construction techniques. These houses probably belonged to well-off families with better financial status and higher socio-economic backgrounds in the Muzaffarid era. The second type has less area and less spatial
Table 5: Decorative Features of 4 case studies of Muzaffarid Houses. Reference: Author.
quality and diversity than the first type. It consists of fewer open and semi-open spaces, so it uses less natural ventilation and sunlight. More limited decorations and construction techniques are also observed. The analysis of the Muzaffarid houses within the city of Ardakan conveys that not only climatic factors but also cultural and social values have defined the housing typology and the spatial organization of the studied houses. Thus, the housing evolution represents a collective development reflecting both cultural needs and the various environmental constraints. These traditional houses represent a spontaneous model that draws on a humble experience of local skills and on the limitations of the available local construction materials. Nevertheless, they are widely acknowledged as a distinctive example of a housing development that perfectly confronts the harsh desert climate and responds adequately to the basic needs of its users.
## References
* [PERSON] (1938) [PERSON], 1938. Tarik-e _Yazd yd Jaakada-ye Yazdlan_, Yazd, Iran.
* [PERSON] et al. (2006) [PERSON], [PERSON], [PERSON], 2006. _Courtyard housing: past, present and future_. Taylor & Francis Group, New York.
* [PERSON] et al. (1968) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], 1968. _The history of Iran_. Cambridge, UK: Cambridge University Press. ISBN 9789004127562.
* [PERSON] (1966) [PERSON], 1966. The New History of Yazd. By the efforts of [PERSON]. Tehran: Iran Culture.
* [PERSON] (2007) [PERSON], 2007. _Power, politics and religion in Timurid Iran_. Cambridge, UK: Cambridge University Press. ISBN 9780521865470.
* [PERSON] (1993) [PERSON], 1993. _Yazd from the rise to the fall of the Muzaffarids_. Master thesis, History field, Faculty of Literature and Humanities, University of Tehran.
* New World Encyclopedia contributors, 2009. "Muzaffarids," New World Encyclopedia. www.newworldencyclopedia.org/index.php?title=Muzaffarids&oldid=915345 (accessed January 24, 2020).
* [PERSON] (1996) [PERSON], 1996. _Dictionary of Islamic architecture_. London, UK: Routledge. ISBN 9780415060844.
* [PERSON] (1999) [PERSON], 1999. _Historical Processes of the Building Landscape, Architectural Knowledge and Cultural Diversity_, ed. [PERSON], Comportments, Lausanne, Switzerland; 39-50.
* [PERSON] (2006) [PERSON], 2006. A typological perspective: the impact of cultural paradigmatic shifts on the evolution of courtyard houses in Cairo. _METUM J Fac Archit_, 23(1):41-58.
* [PERSON] et al. (2013) [PERSON], [PERSON], [PERSON], [PERSON], 2013. Building typologies identification to support risk mitigation at the urban scale Case study of the old city centre of Seixal, Portugal. _Journal of Cultural Heritage_, 14: 449-463.
* [PERSON] (1976) [PERSON] (1976). Housing by people. [PERSON], London.
* [PERSON] (1955) [PERSON], 1955. _The Architecture of Islamic Iran: The Il Khanid Period_. Princeton.
* [PERSON] (2014) [PERSON], "MOZAFFARIDS", Encyclopaedia Iranica, online edition, 2014, available at http://www.iranicaonline.org/articles/mozaffarids (accessed on January 24, 2020).
* [PERSON] et al. (2007) [PERSON], [PERSON], 2007. Muzaffarid houses of Meybod. From the book "A city there is in Meybod". By [PERSON]. Tehran: Cultural Heritage, Handicrafts and Tourism Organization of Iran, Meybod Cultural Heritage Research Institute.
* [PERSON] et al. (2013) [PERSON], [PERSON], 2013. Investigating the evolution of the Iwan in traditional houses of the Yazd-Ardakan plain from the Muzaffarid to the Qajar era. _Soffeh Journal_, No. 62.
Table 6: Classification of Muzaffarid houses in Ardakan by the location of the main Iwan (south or north of the central courtyard), the location of the garden yard, and the location of the entrance area. Reference: Author.
TYPOLOGY OF HISTORICAL HOUSES IN MUZAFFARID ERA: CASE STUDY OF ARDAKAN CITY, YAZD, IRAN. M. Dormohamadi. ISPRS, https://doi.org/10.5194/isprs-archives-xliv-m-1-2020-945-2020, 2020. CC-BY License.
# Exposing and Providing Access to Indian Bioresource Information Network (IBIN) Species Occurrence Dataset as Web Service using OGC WPS Standard
[PERSON], [PERSON], [PERSON]
1 Indian Institute of Remote Sensing, ISRO, Dehradun, India - (kapil, sameer)@iirs.gov.in
2 Birla Institute of Technology and Science, Pilani, India - [EMAIL_ADDRESS]
###### Abstract
Species occurrence data are collected by many researchers worldwide as records of species present at a specific time at some defined place, as part of biological field investigations, serving as primary or secondary datasets. These datasets reside in separate silos across numerous distributed systems with different formats, limiting their usage to full potential. The IBIN portal provides a single window for accessing myriad spatial/non-spatial data on bioresources of the country. To promote reuse of the occurrence dataset among organizations in an interoperable format, including support for integration across various platforms and programming languages, it has been exposed as a web service using the OGC Web Processing Service (WPS) standard. WPS provides a standardized interface for performing online geo-processing by exposing spatial processes, algorithms, and calculations, thereby enabling machine-to-machine communication and wider usage in various scenarios (e.g. service chaining). The open source ZOO-Project is used for developing the 'Species Search' WPS service. The WPS takes as input either the species name, a bounding box, or a shapefile defining the area of interest, and returns a queryable OGC-compliant Web Map Service (WMS) as output, with species occurrences represented in grid (5 km x 5 km) format, each grid possessing attributes like species name, family, state, medicinal detail, etc. The WPS process can be invoked asynchronously, enabling proper feedback regarding the status of the submitted job. A JavaScript based web client for consuming this service has also been developed, along with a custom QGIS plugin allowing potential users to access it in GIS software for wider reusability.
The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Volume XLII-5, 2018. ISPRS TC V Mid-term Symposium 'Geospatial Technology - Pixel to People', 20-23 November 2018, Dehradun, India
## 1 Introduction
Species occurrence data has been collected for a long time only as physical specimens and stored in museums as natural history. Such data is collected by many researchers as it finds many applications in various fields like biogeographical studies, conservation planning, bioprospecting ([PERSON], 2005), species distribution prediction ([PERSON] et al., 2006), estimating magnitudes of animal movements ([PERSON] et al., 2018) etc. In recent times, however, museums and other agencies have spent considerable amounts to support the digitization of such data into online species occurrence databases ([PERSON] et al., 2017).
These databases are managed by different bodies, meaning that they reside in various distributed networks, and each such database has a different format for the storage and retrieval of data. Further, the data collected are usually documented and organised in an extremely inconsistent and fragmented manner ([PERSON] et al., 2013). This creates a problem, as separate procedures are required to gather the same data from different databases, thereby limiting the use of datasets from multiple databases to their full potential. The Indian Bioresource Information Network (IBIN) serves as a portal which networks the otherwise independent databases into a unified delivery system (http://ibin.gov.in/index.php/?option=com.jbin&task=about). The IBIN portal provides a single window for accessing myriad spatial/non-spatial data on bioresources of the country. This setup makes the data available to a range of end users at a single end-point, ensuring that it is always delivered in a consistent format that is simple to consume.
To promote the reuse of the IBIN species occurrence dataset among organizations in an interoperable format, including support for integration across various platforms and programming languages, it has been exposed as a web service using the OGC Web Processing Service (WPS) standard. The OGC WPS provides a standardized interface for performing simple or complex geoprocessing operations and computations online via a web service on a remote host ([PERSON], 2015; [PERSON], 2007). As a result, reusability of the data in an interoperable manner is achieved, which is also platform-independent and can be consumed from multiple programming languages. This also provides the power to chain simple processes to allow for the execution of various complex processes in a variety of different contexts.
The 'Species Occurrence Search' WPS service takes as input either the species name, a bounding box, or a shapefile defining the area of interest, and returns a queryable OGC-compliant Web Map Service (WMS) as output, with species occurrences represented in grid (5 km x 5 km) format, each grid possessing attributes like species name, family, state, medicinal detail, etc. In the following sections, the reader will find the overall setup of this WPS architecture, its important features, and the design and implementation of the 'Species Occurrence Search' WPS, of the JavaScript based web client, and of the QGIS plugin which consume this WPS and use the WMS output to display results.
## 2 Web Processing Service
A Web Processing Service (WPS) is a standardized interface defined by the Open Geospatial Consortium (OGC). It is a web service which makes it possible to execute computing processes and retrieve metadata which describe their purpose and functionality. The capabilities of a WPS can be retrieved using a GetCapabilities request, details of a specific service can be obtained using a DescribeProcess request while the processes can be executed using an Execute request ([PERSON], 2015; [PERSON], 2007). Since the release of version 2.0.0, job control and monitoring operations like GetStatus, GetResult and Dismiss have also been added which are particularly useful during an asynchronous execution.
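As a rough illustration, the three request types above can be issued as simple key-value-pair (KVP) URLs. The endpoint and the process identifier below are placeholders for illustration, not the actual IBIN service:

```python
# Sketch of the three core WPS 1.0.0 KVP requests. The endpoint and
# the "SpeciesSearch" identifier are hypothetical placeholders.
from urllib.parse import urlencode

WPS_ENDPOINT = "http://example.org/cgi-bin/zoo_loader.cgi"  # assumed endpoint

def wps_url(request, **extra):
    """Build a WPS key-value-pair request URL."""
    params = {"service": "WPS", "version": "1.0.0", "request": request}
    params.update(extra)
    return WPS_ENDPOINT + "?" + urlencode(params)

# List all processes offered by the server
get_capabilities = wps_url("GetCapabilities")

# Describe the inputs and outputs of one process
describe = wps_url("DescribeProcess", identifier="SpeciesSearch")

# Run the process, passing inputs as KVP DataInputs
execute = wps_url(
    "Execute",
    identifier="SpeciesSearch",
    DataInputs="Service_Name=species_name;Input_Data=Santalum album",
)
```

In practice a client library such as OWSLib wraps these requests, but the URL structure stays the same.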
### Overall Architecture
The WPS standard forms the heart of this project. The open source ZOO-project was used to develop the 'Species Occurrence Search' WPS service.
Figure 1 shows the overall architecture of the Species Search WPS. It allows searching for species occurrences in three ways: by species name, by bounding box, or by a shapefile denoting the area of interest. The WPS service is not just one service; this is actually a simplified representation of three services, one of which is chained with the WPS based on the inputs. The WPS service processes the inputs, using which it queries the IBIN database. Once it receives the response from the database, it converts the received response into a format that is accepted by MapServer (https://mapserver.org/index.html).
The ZOO-Project makes it possible to write WPS processes in languages like Python, PHP, Java, C# and JavaScript. Here the Species Search WPS service is written in Python using various Python geospatial libraries like GDAL, OWSLib etc. The ZOO-Project also provides the capability to integrate MapServer support (http://www.zoo-project.org/). This happens in such a way that once a WPS process using MapServer support terminates, its outputs are passed to MapServer. Once MapServer returns the WMS output, that output is received by the Species Search WPS service, which makes some necessary changes to the WMS so as to enable the handling of GetFeatureInfo requests. This enables the user to get the features associated with each grid (species search result) of the output fetched from the IBIN species occurrence database. Finally, the WMS output is returned to the client, which can then use it to visualize the result.
### Important Features
#### 2.2.1 Interoperability
A WPS allows processes and code to be delivered to organizations irrespective of the underlying platform ([PERSON] et al., 2011). This ensures that the functionality can be used by organizations in a platform-independent manner, while also allowing the managing body to make necessary changes and updates to the code without breaking the functionality for any of the organizations.
#### 2.2.2 Reusability
Services exposed as a WPS can be reused by organizations in multiple applications ([PERSON] et al., 2011). This means that the same functionality can be incorporated into multiple applications without having to explicitly design that functionality for each application separately, simply by importing the WPS into the application.
#### 2.2.3 Service Chaining
This is a workflow of services where for each pair of services, the second service can occur only after the first one is terminated ([PERSON] et al., 2009). This allows the creation of repeatable workflows and chunking of complex tasks into simpler blocks, each handled by a different service. Existing geospatial services like WMS or another WPS can also be incorporated into such a service chain ([PERSON] et al., 2009).
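Such a chain can be sketched as a loop that starts each process only after the previous one terminates and feeds its output reference onward. The `client` object and its `execute`/`wait`/`result` methods below are illustrative placeholders (e.g. something built on OWSLib), not part of the paper's implementation:

```python
# Sketch of sequential service chaining: for each pair of services, the
# second one is started only after the first terminates, and the output
# reference of each process becomes the input of the next. The client
# object and its methods are hypothetical placeholders.
def run_chain(client, process_ids, initial_input):
    data = initial_input
    for identifier in process_ids:
        job = client.execute(identifier, data)  # start this process
        client.wait(job)                        # block until it terminates
        data = client.result(job)               # feed its output onward
    return data
```

A WMS layer URL produced by one step can equally serve as `data` for the next, which is how existing geospatial services slot into the chain.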
#### 2.2.4 Asynchronous execution
Processing of geospatial data often takes a long time. This can exceed the maximum connection timeout of Hyper Text Transfer Protocol (HTTP) servers, which WPS relies on ([PERSON], 2008). It is therefore desirable to have an asynchronous execution, as it decouples the request from the response and consequently avoids wasting and draining client resources while the processing goes on at the server end ([PERSON] and [PERSON], 2015).
Figure 1: The overall architecture of the Species Search WPS
Figure 2: Asynchronous execution sequence diagram of the WPS process.

Figure 2 shows an asynchronous execution sequence as applicable to the Species Search WPS. The client makes an Execute request, passing the required input as either the name of a species, a bounding box, or a shapefile defining the area of interest, and is notified of a JobID for the process which has thus been initiated. The client then repeatedly pings the server with GetStatus requests passing the JobID, and is notified of the status of execution as well as the percentage of completion while the process is running. Finally, once the client is notified that the execution is completed, it retrieves the results by making a GetResult request passing the JobID, for which the WPS server returns the WMS output containing the results showing the locations of species occurrence with attributes as per the input data provided.
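The client-side polling loop in this sequence might look as follows. The transport functions are injected, so nothing here depends on a real endpoint; the status strings are generic job states, not necessarily the exact tokens the server emits:

```python
import time

# Sketch of the asynchronous client loop: after an Execute request has
# returned a JobID, poll GetStatus until the job succeeds, then fetch
# the WMS output with GetResult. get_status and get_result stand in for
# real HTTP calls to the WPS server.
def poll_until_done(get_status, get_result, job_id,
                    interval=2.0, timeout=300.0):
    """Poll GetStatus for job_id; return the GetResult output on success."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status, percent = get_status(job_id)
        if status == "Succeeded":
            return get_result(job_id)
        if status == "Failed":
            raise RuntimeError(f"WPS job {job_id} failed")
        time.sleep(interval)  # avoid hammering the server between polls
    raise TimeoutError(f"WPS job {job_id} did not finish in time")
```

The interval and timeout values are arbitrary; a production client would also surface the reported completion percentage to the user.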
### Choice of WPS Framework
The ZOO-Project was selected as the framework for the Species Search WPS. A major driving factor was the support for multiple languages in ZOO, which is not provided by other frameworks like PyWPS (http://pywps.org/), which supports Python, 52°North (http://52north.org/communities/geoprocessing/wps/), which supports only Java, or GeoServer WPS (http://docs.geoserver.org/stable/en/user/services/wps/index.html), which again only supports Java. This gives flexibility to the maintaining organization to develop and publish other services in different languages as preferred by the developers. The performance of the ZOO-Project is acceptable considering the tested response times, failure rates, and throughput with concurrent requests. Further, between PyWPS and ZOO, the frameworks which support Python, the performance of ZOO is reported to be better on all three metrics, and it is known to have a better support community [15].
## 3 Design and Implementation
### Adding Capabilities for Handling Different Inputs
The service has been designed to take either the name of the species, a bounding box or a shapefile describing the area of interest as an input. This means that the WPS should be able to handle all these three types of inputs and process them accordingly.
To provide this functionality, the service is made to accept two inputs instead. The first input parameter, called Service_Name, asks the user for the choice of service to be executed. This defines the type of input that the user will be providing to the service. The second parameter, called Input_Data, is the parameter which accepts the name of the species or the coordinates of a bounding box or the URL of a shapefile as an input. Figure 3 shows the complete structure of WPS chaining occurring in the Species Search WPS. The Species Occurrence Search WPS validates the inputs, if the inputs are invalid, an error message is returned, otherwise one of the services - Search by species name, search by bounding box or search by shapefile, is chained with the existing WPS by passing to that service the required inputs from Input_Data. Here 'Search by species name', 'Search by Bounding Box' and 'Search by Shapefile' are the three services that form WPS as described in the overall architecture.
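The two-parameter dispatch described above can be sketched as a small validation and routing step. The handler names are illustrative, not the real ZOO-Project service identifiers:

```python
# Sketch of the Service_Name / Input_Data dispatch: Service_Name picks
# one of the three chained searches, Input_Data carries the species
# name, bounding box coordinates, or shapefile URL. Handler names are
# hypothetical stand-ins for the actual chained services.
HANDLERS = {
    "species_name": lambda data: f"search_by_name({data!r})",
    "bounding_box": lambda data: f"search_by_bbox({data!r})",
    "shapefile":    lambda data: f"search_by_shapefile({data!r})",
}

def dispatch(service_name, input_data):
    """Validate Service_Name and route Input_Data to the chosen search."""
    handler = HANDLERS.get(service_name)
    if handler is None:
        raise ValueError(f"Unknown Service_Name: {service_name}")
    if not input_data:
        raise ValueError("Input_Data must not be empty")
    return handler(input_data)
```

Rejecting invalid inputs before chaining mirrors the error-message path shown in Figure 3.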
### Querying the IBIN Database according to the Input
This part of the design uses selective chaining of processes. Based on the type of input passed, a separate process is executed which processes the input as required, makes the appropriate request to the IBIN database, and receives the response from the database. This response contains the locations of all occurrences of a species if the input was a species name, or all species found within the bounding box or area of interest, together with their locations. Further, available information about each species, such as family and medicinal value, is also part of the response. This response is in a raw format which must be processed so that it can be returned to the client in a useful format.
### Generating WMS Output
For a client, it would be more useful if the data were returned in a format that represents all the data graphically, instead of raw data requiring further processing by the client to extract useful details. This is where generating a WMS output comes in. A WMS output returns to the client a raster layer which can be overlaid onto a map. The Species Search WPS returns the locations of species as 5 km × 5 km grids. The data associated with each grid can be accessed by passing the coordinates of some point in the grid as parameters in a GetFeatureInfo request to the WMS server. This removes all spatial processing load from the client, except displaying the WMS layer, and provides the data in a graphical format.
To generate the WMS output, ZOO-Project provides support for integrating MapServer with the WPS. This integration makes it possible to pass data to MapServer for the generation of a WMS output. This output is not directly capable of handling GetFeatureInfo requests; to add this capability, the WPS was configured to make the necessary changes before publishing the output to the client. This ensures that the client can always fetch all the associated data at any point by making the corresponding GetFeatureInfo request to the WMS server, which the client can identify using the URL of the WMS output it received.
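The GetFeatureInfo mechanism described above can be illustrated with a short sketch that assembles such a request against the published WMS output. The base URL, layer name and SRS below are hypothetical; the parameter names follow the WMS 1.1.1 specification:

```python
from urllib.parse import urlencode

def get_feature_info_url(wms_base, layer, bbox, width, height, x, y):
    """Build a WMS 1.1.1 GetFeatureInfo URL for a clicked point.

    The base URL, layer name and SRS are illustrative; parameter names
    follow the WMS 1.1.1 specification.
    """
    params = {
        "SERVICE": "WMS",
        "VERSION": "1.1.1",
        "REQUEST": "GetFeatureInfo",
        "LAYERS": layer,
        "QUERY_LAYERS": layer,
        "SRS": "EPSG:4326",
        "BBOX": ",".join(str(v) for v in bbox),  # minx,miny,maxx,maxy
        "WIDTH": width,
        "HEIGHT": height,
        "X": x,  # pixel column of the clicked point in the rendered map
        "Y": y,  # pixel row of the clicked point in the rendered map
        "INFO_FORMAT": "text/html",
    }
    return wms_base + "?" + urlencode(params)
```

A client would send this URL with any HTTP library and render the returned document, which is exactly what the web client does on a map click.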
### Developing Clients for Consuming the WPS
While the WPS provides the server-side functionalities which could be incorporated into multiple applications to suit the needs of organizations, it was imperative that some clients that can consume the WPS were developed. This was necessary so that end users who wish to gather the results could use the available clients, instead of manually making requests to the WPS and handling the WMS outputs. Consequently, a web client and a custom QGIS plugin were developed.
Figure 3: Species Occurrence Search WPS Service
#### 3.4.1 Web Client
ZOO-Project provides boilerplate JavaScript code which is capable of handling WPS operations, both synchronous and asynchronous. The web client is built using this boilerplate code as a base (figure 4). It has been developed to make all requests asynchronously, and the user is notified of the progress via a progress bar which is updated with the response of each GetStatus request that is made. Leaflet (https://leafletjs.com/), an open-source JavaScript library for interactive maps, has been used to render maps and the WMS output (showing the locations of species occurrence in grid format). Further, the client makes a GetFeatureInfo request whenever the user clicks on the WMS output layer. The results (showing the attributes of the species found) are then shown as a popup, as shown in figure 5.
#### 3.4.2 Custom QGIS Plugin
A plugin for QGIS 2.18 was also developed, since software like QGIS is commonly used for interpreting spatial data (figure 6).
The plugin uses OWSLib (https://github.com/geopython/OWSLib) at its core, to make it capable of handling asynchronous execution. Once a request is made, the user is notified of a running process by a progress bar displayed as a message. Upon successful completion of the WPS process, the WMS layer denoting the output is added to the workspace as a layer, whose name the user is prompted to supply before the layer is added. The details of species associated with each grid can be seen by using the 'Identify Features' tool (figure 7).
## 4 Conclusion
The WPS for species occurrence search provides a way to access occurrence data of all the species of the country through one unified place. This provides data in a consistent manner to all users, thus eliminating issues of differing output formats from different databases. Further, the data is supplied in a reusable and interoperable way. This ensures that the service can cater to the needs of the maximum number of users by overcoming any restrictions that may be imposed by platform, thereby extensively supporting further studies relating to species occurrence.
## References
* [PERSON] et al. (2017) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], 2017. Use of Online Species Occurrence Databases in Published Research since 2010, in: _Proceedings of TDWG_. p. e20518. https://doi.org/10.3897/tdwgproceedings.1.20518
* [PERSON] (2015) [PERSON], 2015. OGC WPS 2.0.2 Interface Standard: Corrigendum 2. _Open Geospatial Consortium_. http://www.opengeospatial.org/
* [PERSON] (2008) [PERSON], 2008. OGC Web Processing Service and Its Usage. _GIS Ostrava 2008_ 27, 1-12. https://doi.org/10.1007/springerreference.62558
* [PERSON] (2005) [PERSON], 2005. Uses of primary species-occurrence data, version 1.0. _Report for the Global Biodiversity Information Facility_.
* [PERSON] et al. (2013) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], 2013. eHabitat, a multi-purpose Web Processing Service for ecological modeling. _Environmental Modelling and Software_. https://doi.org/10.1016/j.envsoft.2012.11.005
* [PERSON] et al. (2006) [PERSON], [PERSON], [PERSON], et al., 2006. Novel methods improve prediction of species' distributions from occurrence data. _Ecography_. https://doi.org/10.1111/j.2006.0906-7590.04596.x
Figure 4: WPS Web Client - Species Occurrence Search by Bounding Box
Figure 5: WPS Web Client - Output of Species Occurrence Search using user-defined Bounding Box
Figure 6: QGIS Plugin for IBIN Species Occurrence WPS Service
Figure 7: Executing IBIN Species Search WPS Service using Custom QGIS Plugin
* [PERSON] et al. (2009) [PERSON], [PERSON], [PERSON], [PERSON], 2009. Geospatial Services Chaining with Web Processing Service, in: _International Symposium on Intelligent Information Systems and Applications (IISA'09)_.
* [PERSON] and [PERSON] (2015) [PERSON], [PERSON], [PERSON], 2015. Evaluation of Web Processing Service Frameworks. _OSGeo J._ 14, 29-42.
* [PERSON] (2007) [PERSON], 2007. OpenGIS® Web Processing Service. _Open Geospatial Consortium_.
* [PERSON] et al. (2018) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], 2018. Species occurrence data reflect the magnitude of animal movements better than the proximity of animal space use. _Ecosphere_. https://doi.org/10.1002/ecs2.2112
* [PERSON] et al. (2011) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], 2011. Data processing using Web Processing Service orchestration within a Spatial Data Infrastructure, in: _Proceedings of the 34th International Symposium on Remote Sensing of Environment_.
* [PERSON] and [PERSON] (2015) [PERSON], [PERSON], 2015. Asynchronous Geospatial Processing: An Event-Driven Push-Based Architecture for the OGC Web Processing Service. _Transactions in GIS_. https://doi.org/10.1111/tgis.12104
EXPOSING AND PROVIDING ACCESS TO INDIAN BIORESOURCE INFORMATION NETWORK (IBIN) SPECIES OCCURRENCE DATASET AS WEB SERVICE USING OGC WPS STANDARD
K. Oberai, M. Jasoria, S. Saran
ISPRS Archives, 2018. https://doi.org/10.5194/isprs-archives-xlii-5-781-2018. CC-BY License.
Object-Based and Supervised Detection of Potholes and Cracks from the Pavement Images Acquired by UAV
[PERSON]
1 Institute of Remote Sensing and Geographic Information System, Peking University, 5 Summer Palace Road, Beijing 100871, China - [EMAIL_ADDRESS]
[PERSON]
1 School of Computer Science, Shihezi University, Shihezi, Xinjiang 832002, China - [EMAIL_ADDRESS]
[PERSON]
1 Institute of Remote Sensing and Geographic Information System, Peking University, 5 Summer Palace Road, Beijing 100871, China - [EMAIL_ADDRESS]
[PERSON]
1 Institute of Remote Sensing and Geographic Information System, Peking University, 5 Summer Palace Road, Beijing 100871, China - [EMAIL_ADDRESS]
utilizes radar pulses to image the subsurface profile to detect subsurface objects, changes in material properties, voids and cracks, which is very convenient and accurate([PERSON] & [PERSON], 2007).
However, previous studies had some limitations. For instance, most studies focused on only one kind of distress, such as cracks or potholes, whereas more than one type of damage can exist on the pavement at the same time. A mobile vehicle integrated with a PMS also poses a potential risk to traffic safety and is unable to cover the full pavement of different lanes simultaneously. Given these problems, pavement images acquired by an Unmanned Aerial Vehicle (UAV) were used in this study, and four supervised learning algorithms, K-Nearest Neighbour (KNN), Support Vector Machine (SVM), Artificial Neural Network (ANN), and Random Forest (RF), were evaluated in terms of their performance on the detection of potholes and cracks from the UAV images of road pavement.
## 2 Data and Methods
### Image Acquisition and Segmentation
The asphalt pavement located in a rural area of Shihezi City, Xinjiang, China was selected for the study. According to the field investigation, the majority of the pavement was in poor condition with a variety of severe pavement distresses, such as potholes and cracks. A multispectral camera, the Micro-Miniature Multiple Camera Array System (MCA) designed by Tetracam Inc., USA, was mounted on a fixed-wing UAV to capture the pavement images. Theoretically, the MCA provides six bands spanning from blue to near infrared, i.e. Blue, Green, Red and three near-infrared channels ([PERSON] & [PERSON], 2012). However, the images captured by the three infrared channels do not have sufficient exposure, which results in a lower contrast between the non-distressed and distressed pavement. Therefore, only the images in the RGB channels were chosen in this study. The UAV flew along the road at 30 meters above ground level, in which case one pixel corresponded to about a 13.54 × 13.54 mm area on the pavement. In total, 126 pavement images were acquired with 70% overlap between two sequential images. However, there is no white traffic line in these pavement images, although it is one of the common objects on a road surface. In order to increase the generalization of this study, a sample UAV pavement image provided by the Airsight Company (https://demo.airsight.de/uav/index_en.html) was used to extract white traffic lines. This pavement image also has three RGB channels and was captured by a digital camera with a higher resolution (1 pixel = 5 mm).
Given the high resolution of the pavement images, the Multiresolution Segmentation (MS) algorithm integrated in eCognition Developer 9.0 was used to extract the objects of potholes and cracks from the pavement images. MS identifies single image objects of one pixel in size and merges them with their neighbours based on relative homogeneity criteria. This homogeneity criterion is a combination of spectral and shape criteria, which is calculated through a comprehensive scale parameter. Higher values of the scale parameter result in larger image objects, smaller values in smaller ones ([PERSON] et al., 2003). However, it is difficult to choose one appropriate scale parameter to extract intact potholes and cracks simultaneously. The contrast, a texture feature calculated from the Gray-Level Co-occurrence Matrix (GLCM) ([PERSON] et al., 2008), was selected to measure the variations within the distressed and non-distressed areas in this study. The formula for the contrast feature is:
\[Contrast=\sum_{i=0}^{N-1}\sum_{j=0}^{N-1}P_{i,j}(i-j)^{2} \tag{1}\]
Where, \(i\), \(j\) are the row and column number of GLCM respectively. \(P(i,j)\) is the value in the cell \(i,j\). \(N\) is the number of rows or columns. In order to obtain an intact pothole object, one merge action was conducted based on the contrast values of objects over the initial segmentation resulting from the lower scale parameter. Namely, all image objects, the contrast values of which exceed the given threshold, will be merged into one image object.
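Equation (1) can be illustrated with a small NumPy sketch that accumulates a co-occurrence matrix for a single pixel offset and computes the contrast. This is a simplified, unoptimized stand-in for what GLCM implementations such as eCognition's compute internally:

```python
import numpy as np

def glcm_contrast(img, levels, offset=(0, 1)):
    """GLCM contrast per Eq. (1) for one pixel offset (row, col)."""
    dr, dc = offset
    rows, cols = img.shape
    glcm = np.zeros((levels, levels), dtype=float)
    # Count co-occurrences of grey levels at the given offset.
    for r in range(rows):
        for c in range(cols):
            r2, c2 = r + dr, c + dc
            if 0 <= r2 < rows and 0 <= c2 < cols:
                glcm[img[r, c], img[r2, c2]] += 1
    glcm /= glcm.sum()  # normalise counts to probabilities P(i, j)
    i, j = np.indices(glcm.shape)
    return float((glcm * (i - j) ** 2).sum())
```

High contrast indicates strong local grey-level variation, which is why thresholding it can decide whether neighbouring segments belong to the same pothole.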
### Dataset Preparation and Feature Selection
Sufficient sample data are necessary for training and validating machine-learning algorithms ([PERSON] et al., 2016). Three classes were defined in this study, i.e. pothole, crack and non-distressed pavement, the latter including damage-free pavement as well as white and yellow traffic lines. However, there are limited numbers of potholes and cracks on the pavement we studied. Comparing two sequential images, it can be observed that the pixel values at the same location have a bias because of illumination differences caused by the different solar incident angles. Consequently, this leads to some degree of difference between the segmentation results of the same target derived from different images. Hence, dataset preparation was implemented based on three rules: (a) the 126 pavement images are segmented individually following the procedure described in section 2.1; (b) the same target in two sequential images is treated as two different objects; (c) white traffic line samples were collected from the image provided by the Airsight Company. Finally, 1430 samples were collected, containing 221 potholes, 678 cracks and 531 non-distressed pavement objects (299 damage-free pavements, 122 yellow and 110 white traffic lines).
Feature selection has a great influence on the performance of learning algorithms. Reasonable numbers and types of features are able to increase the accuracy of an algorithm while decreasing the computation time ([PERSON] & [PERSON], 2009). Generally, three types of image features can be extracted from digital images, i.e. spectral, geometry and texture features. In this study, based on prior knowledge of the feature value distribution of every kind of image object, 18 features containing 6 spectral, 6 geometry and 6 GLCM texture features were introduced to train and validate the learning algorithms (Table 1). Furthermore, considering the different value distribution of each feature, feature normalization was implemented based on equation (2) below.
\[X_{Norm}=\frac{X-X_{\min}}{X_{\max}-X_{\min}} \tag{2}\]
Where \(X_{Norm}\) is the normalized feature vector, and \(X_{\max}\), \(X_{\min}\) are the maximum and minimum values of feature \(X\) respectively. Consequently, the values of all features are in the same range from 0 to 1, which should speed up the convergence of the learning algorithms. In order to verify the capabilities of each type of feature for the detection of potholes and cracks, six combinations of the three types of features were introduced to each classification algorithm, i.e. spectral (C1); geometry (C2); texture (C3); spectral and geometry (C4); geometry and texture (C5); spectral, geometry and texture (C6).
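Equation (2) is a column-wise min-max scaling, which can be sketched in a few lines of NumPy (note that a constant-valued feature column would make the denominator zero and would need a guard in practice):

```python
import numpy as np

def min_max_normalise(X):
    """Column-wise min-max scaling to [0, 1], per Eq. (2).

    X is an (n_samples, n_features) array; each feature (column) is
    scaled independently. Constant columns would divide by zero.
    """
    x_min = X.min(axis=0)
    x_max = X.max(axis=0)
    return (X - x_min) / (x_max - x_min)
```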
### Detectors of Potholes and Cracks
Four supervised classifiers, K-Nearest Neighbours (KNN), Support Vector Machine (SVM), Artificial Neural Network (ANN) and Random Forest (RF), were selected to detect the potholes and cracks in this study. In order to examine the predictive accuracy of the learning algorithms, and to protect against overfitting, the 1430 samples were randomly divided into 5 folds. For each fold, a model is trained using the out-of-fold observations, and the classification accuracy of the model is calculated using the in-fold data. Finally, the average classification accuracy over all folds is an indicator of model performance. Exceptionally, the performance of Random Forest was validated using the Out-of-Bag (OOB) error ([PERSON], 2001) instead of the above n-fold validation procedure. All the algorithms were run on one PC configured with a Core i7-6700HQ CPU @ 2.6 GHz, an Nvidia Quadro M1000M GPU and 16 GB RAM. The running time of the different models was also recorded as an important indicator of algorithm performance.
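The n-fold validation procedure described above can be sketched generically as follows. The `train_fn`/`predict_fn` callables are illustrative placeholders, not the authors' code:

```python
import numpy as np

def k_fold_accuracy(X, y, train_fn, predict_fn, k=5, seed=0):
    """Average accuracy over k folds: train on out-of-fold data,
    score on in-fold data, as described in the text."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    folds = np.array_split(idx, k)
    accs = []
    for fold in folds:
        mask = np.ones(len(X), dtype=bool)
        mask[fold] = False                      # out-of-fold -> training set
        model = train_fn(X[mask], y[mask])
        preds = predict_fn(model, X[fold])      # in-fold -> validation set
        accs.append(np.mean(preds == y[fold]))
    return float(np.mean(accs))
```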
#### 2.3.1 K-Nearest Neighbours
K-Nearest Neighbours (KNN) is an instance-based, lazy learning algorithm, which assigns an observation the class represented by the majority of its neighbours ([PERSON] & [PERSON], 2007). The parameter K determines the number of neighbours considered. The distance between the observation and the samples can be measured by, for example, the Euclidean or Minkowski distance. Generally, the class of an observation is assigned directly based on the class of the majority of its neighbours. However, KNN might bias the outcome when the number of nearest neighbours in one class is less than that of other, relatively distant neighbours belonging to another class. Therefore, distance weighting is often introduced to refine the classification result of KNN: the nearer neighbours contribute more to the outcome than the more distant ones. A common weighting scheme gives each of the K neighbours a weight of 1/d\({}^{2}\), where d is the distance of the observation from that neighbour. Among these parameters, K has a great impact on the accuracy of KNN. In this study, we evaluated a series of K values to determine which performs best for this application. The Minkowski distance and the squared-inverse-distance weighting scheme were selected for the experiment.
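The 1/d² distance-weighted voting can be sketched as a minimal NumPy implementation (using Euclidean distance for brevity; the study uses the Minkowski distance):

```python
import numpy as np
from collections import defaultdict

def knn_predict(X_train, y_train, x, k=4):
    """Distance-weighted KNN: each of the k nearest neighbours votes
    with weight 1/d^2, so nearer neighbours contribute more."""
    d = np.linalg.norm(X_train - x, axis=1)
    nearest = np.argsort(d)[:k]
    votes = defaultdict(float)
    for i in nearest:
        votes[y_train[i]] += 1.0 / (d[i] ** 2 + 1e-12)  # guard d == 0
    return max(votes, key=votes.get)
```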
#### 2.3.2 Support Vector Machine
Support Vector Machine (SVM) is a classification system derived from statistical learning theory. It separates the classes with a decision surface that maximizes the margin between the classes. The surface is often called the optimal hyperplane, and the data points closest to the hyperplane are called support vectors. The support vectors are the critical elements of the training set. SVM is a non-probabilistic binary classifier that assigns new examples to one of two categories, meaning a single SVM can only solve two-class problems. SVM can also handle multiclass problems by combining several binary SVM classifiers based on a one-vs-one or one-vs-all classification procedure. One special feature of SVM is the kernel function, which is introduced to deal with non-linear classification problems. The kernel function maps the original examples into a high-dimensional feature space, in which the non-linear classification problem becomes linear. There are several types of kernel with different performance for different applications, such as the linear, polynomial and Gaussian kernels. In this study, the performance of four types of kernel on the detection of potholes and cracks was evaluated, i.e. linear, quadratic, cubic and Gaussian.
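The one-vs-one combination logic can be illustrated independently of any particular SVM implementation; the binary classifier callables below are placeholders for trained pairwise SVMs:

```python
from itertools import combinations

def one_vs_one_predict(binary_clfs, classes, x):
    """Combine pairwise binary classifiers into one multiclass decision.

    binary_clfs maps each (class_a, class_b) pair to a callable that
    returns whichever of the two classes it predicts for sample x;
    the class collecting the most pairwise votes wins.
    """
    votes = {c: 0 for c in classes}
    for pair in combinations(classes, 2):
        votes[binary_clfs[pair](x)] += 1
    return max(votes, key=votes.get)
```

For the three classes in this study (crack, pothole, non-distressed), the one-vs-one scheme trains three binary SVMs, one per class pair.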
Table 1: Selected Feature Set. The 18 features comprise 6 spectral features (the mean and standard deviation (STD) of the Red, Green and Blue bands), 6 geometry features and 6 GLCM texture features.
#### 2.3.3 Artificial Neural Network
Artificial Neural Networks mimic the way the human brain solves problems with a large number of neurons ([PERSON] & [PERSON], 2010). An ANN is typically composed of three kinds of layers: the input layer, the hidden layer and the output layer. Every layer comprises a certain number of nodes similar to the neurons in the brain. The number of nodes in the input layer is determined by the number of features in the example data, while the number of output classes decides the number of nodes in the output layer. The number of hidden layers and associated nodes can vary for different applications. Moreover, every node corresponds to an activation function which defines the output of that node given a set of inputs; Sigmoid, Softmax and the Rectified Linear Unit (ReLU) are commonly used in ANNs, and which should be used depends on the objective of the application. Back propagation is a widely used training procedure for ANNs to adjust the weights and biases between the nodes. In this study, a three-layer feed-forward network with one input layer, one Sigmoid hidden layer and one Softmax output layer was constructed to classify the potholes and cracks. The network was trained with the conjugate gradient method to minimize the difference between the output node activation and the target output. In order to find the appropriate number of nodes in the hidden layer for pavement distress detection, a series of numbers from 1 to 10 was evaluated based on the accuracy of the classification result.
#### 2.3.4 Random Forest
Random Forest (RF) is an ensemble learning algorithm, which combines a number of decision tree classifiers into a forest to predict the class of new examples ([PERSON], 2001). Every tree in the forest is trained with a subset of the training set, which is resampled from the original training dataset. The resampling is implemented with replacement and follows the bootstrap sampling procedure, i.e. the number of subset examples is the same as that of the original examples. In addition to the resampling of training examples for every tree, the features used to find the best split at each node of a tree are resampled from the original feature set as well. The class of a new example is predicted by every tree in the forest, and is assigned based on a majority vote. The number of trees has a significant effect on the computation time of RF. As a result, an evaluation of what size of forest performs best on pavement distress detection was conducted in this study.
## 3 Results and Discussion
Classification accuracy and computation time were selected as the two indicators of the performance of the four learning algorithms. Classification accuracy is defined as the ratio of the number of successfully classified samples to the total number of samples. Figure 1 illustrates the classification accuracy of KNN trained and validated using different settings of K and the six groups of features. The accuracy of all models first increases slightly and then decreases gradually as K increases. It can be observed that the model trained with the combination (C6) of spectral, geometric and textural features always performed best with the highest accuracy, while the individual sets of spectral or geometric features always showed similar performance with lower accuracy. Moreover, the figure shows that among the three types of features, the individual textural feature set contributes most to the accuracy of KNN (Figure 1(a)). Figure 1(b) shows the running time variation of the different KNN models and indicates that the running time has no significant fluctuation over increasing K for any feature combination. In general, the more features were used, the more time KNN took. The model with combination C6 cost the most time while achieving the highest accuracy. Figure 1(c) shows the relationship between running time and classification accuracy of the best performance of each of the six feature combinations. As a compromise between time and accuracy, K equal to 4 and feature combination C5, containing the geometric and textural features, were the best choice, which resulted in an overall accuracy of 98.81% and a 0.65 s running time (Table 2).
Figure 2 indicates the performance of SVM configured with different types of kernel functions and the six feature combinations. Figure 2(a) shows that the SVM with linear kernel presented a lower classification accuracy when it was trained and validated using either spectral or geometric features individually. Once texture features or more types of features were introduced, the four kinds of SVM models (linear, quadratic, cubic, Gaussian) performed almost identically on feature combinations C3, C4, C5 and C6, and the highest accuracy was acquired by using the three types of features together. Figure 2(b) indicates the running time of the different SVM models. It can be seen that the SVMs with polynomial kernels (quadratic and cubic) cost the most time on the feature sets C1, C2 and C3. For C4, C5 and C6, all types of SVM models performed similarly on the running time.
Figure 3 presents the variation of classification accuracy and running time of ANN with respect to different numbers of neurons in the hidden layer. Specifically, when the number of hidden neurons was set to one, only one abstract feature in the hidden layer was used to classify the objects, which was not sufficient to distinguish between the pavement and the distresses (cracks and potholes). Moreover, it took the most time to train and validate the ANN in this case. With an increasing number of hidden neurons, the classification accuracy could benefit a lot from the more abstract features learned by the ANN, and the running time decreased generally (Figure 3(b)). Figure 3(a) shows that the ANN models with more than one type of features (C4, C5, and C6) and two or more hidden neurons could always result in a higher accuracy. It can also be observed that when the number of hidden neurons was set over two, the classification accuracy did not change much. Taking account of the running time as illustrated by Figure 3(c), the ANN with 12 hidden neurons and feature combination C4 was the best model to classify the pavement and distresses, with an overall accuracy of 98.81% and a corresponding running time of 0.35 s (Table 4).
Figure 1: (a) The classification accuracy and (b) running time of KNN with respect to different K, and (c) the relationship between running time and accuracy of the best performance of each of six feature combinations
\begin{table}
\begin{tabular}{l l c c c c} \hline
 & Class & \multicolumn{3}{c}{Predicted Class} & Accuracy \\
 & & Crack & Pothole & Non-distressed & \\ \hline
True Class & Crack & 670 & 1 & 7 & 98.82\% \\
 & Pothole & 0 & 221 & 0 & 100\% \\
 & Non-distressed & 5 & 2 & 524 & 98.68\% \\
Reliability & & 99.26\% & 98.66\% & 98.68\% & 98.95\% (OA) \\ \hline
\multicolumn{6}{l}{OA: Overall Accuracy} \\
\end{tabular}
\end{table}
Table 3: Confusion Matrix of SVML-C6
Figure 4 shows the performance of RF with different numbers of trees in the forest. The accuracy of RF kept increasing with the number of trees until levelling off. The feature combinations with more than one type of features (C4, C5, C6) performed best, and similarly to each other, once the number of trees in the forest exceeded about eight. Figure 4(b) shows the running time of RF and demonstrates that the RF with feature combination C1 always cost the most time compared with the other feature combinations. Moreover, there is a positive correlation between the number of trees and the running time. As Figure 4(c) shows, the RF with 18 trees in the forest was the best model to detect the pavement and distresses when using feature combination C4 (Table 4). The calculation time was only 0.09 s.
\begin{table}
\begin{tabular}{l l c c c c} \hline
 & Class & \multicolumn{3}{c}{Predicted Class} & Accuracy \\
 & & Crack & Pothole & Non-distressed & \\ \hline
True Class & Crack & 671 & 1 & 6 & 98.96\% \\
 & Pothole & 0 & 220 & 1 & 99.54\% \\ \hline
\end{tabular}
\end{table}
Table 4: Confusion Matrix of ANN12-C4
This contribution has been peer-reviewed. https://doi.org/10.5194/isprs-archives-XLII-4-W4-209-2017 © Authors 2017. CC BY 4.0 License.
Figure 2: (a) The classification accuracy and (b) running time of SVM over six feature combinations and four types of kernel function, i.e. linear, quadratic, cubic and Gaussian; (c) the relationship between running time and classification accuracy of the best performance of six feature combinations
## 4 Conclusion
Remote sensing, as a non-destructive method for road surface inspection, is widely used by road departments nowadays. The UAV is a flexible platform that can be configured with different kinds of remote sensing sensors to monitor pavement condition. Compared with the conventional vehicle-based PMS, a UAV remote sensing system can acquire full pavement images of different lanes simultaneously and does not have a significant impact on normal traffic. Moreover, benefiting from the full coverage of the pavement, different kinds of pavement distresses can be extracted from UAV images at the same time. In this study, a set of digital pavement images acquired by UAV and four popular learning algorithms (KNN, SVM, ANN, RF) were used to identify road surface damage. It can be concluded that each kind of learning algorithm, given a specific set of parameters and features, can achieve a high classification accuracy (over 98%) with little computation time. Finally, taking the classification accuracy and running time together, a best model was recommended for each learning algorithm: the KNN with K = 4 and the combination of geometric and textural features; the SVM with linear kernel and the combination of spectral, geometric and textural features; the ANN with 12 nodes in the hidden layer and the combination of spectral and geometric features; and the RF with 18 trees and the combination of spectral and geometric features. Among these four best models, the RF achieved the best performance, with a higher classification accuracy and the minimum running time. In the future, more pavement images acquired by UAV should be used to further evaluate the performance of these best models on the detection of potholes and cracks.
Other kinds of remote sensing data, including LiDAR and radar from UAVs, also hold great potential for pavement condition monitoring. Additionally, other advanced learning algorithms, such as convolutional neural networks, could be introduced into pavement distress detection.
## Acknowledgements
This study was financially supported by two grants from the National Natural Science Foundation of China (No. 41571331) and from Xinjiang Production and Construction Corps (No. 2016 AB001).
Figure 4: (a) the classification accuracy and (b) running time of Random Forest over a series of numbers of trees; (c) the relationship between running time and classification accuracy of the best performance of six feature combinations
OBJECT-BASED AND SUPERVISED DETECTION OF POTHOLES AND CRACKS FROM THE PAVEMENT IMAGES ACQUIRED BY UAV. Y. Pan, X. Zhang, M. Sun, Q. Zhao, 2017. https://doi.org/10.5194/isprs-archives-xlii-4-w4-209-2017 (CC-BY)
# Automatic Generation of High-Resolution Thermal 3D Building Envelope Models exploiting UAV Imagery
[PERSON]
[PERSON]
[PERSON]
Stephan Nebiker
Institute of Geomatics, FHNW University of Applied Sciences and Arts Northwestern Switzerland, 4132 Muttenz, Switzerland - (elia.ferrari, jonas.meyer, stephan.nebiker)@fhnw.ch, [EMAIL_ADDRESS]
###### Abstract
Buildings are major contributors to global energy consumption, with the thermal performance of their envelopes playing a crucial role. Detecting thermal bridges, which compromise insulation, is essential for energy efficiency. To efficiently detect thermal bridges, thermal infrared (TIR) imagery is widely used through visual inspections, more recently by exploiting sensors mounted on unmanned aerial vehicles (UAVs). While RGB images have been extensively used in Structure-from-Motion and Multi-View Stereo processes, applying these techniques to TIR images presents challenges due to lower resolution and inconsistent colour spaces. To overcome the challenges posed by TIR imagery, approaches from different fields investigated the integration of TIR images with other data to support the alignment. Our approach improves upon these methods by using a DJI Mavic 3 Enterprise Thermal UAV to collect RGB and TIR datasets simultaneously. Our guided image alignment and camera rig estimation approach accounts for unknown camera calibration, misalignment, and lever arm parameters, ensuring robust alignment of TIR images with a total error of 5 pixels. With this approach, the geometric accuracy of the resulting point cloud reached an RMSE of 0.13 m. Finally, thermal calibration values collected on site were applied to correct the thermal images, improving temperature value accuracy for 3D model texturing with a temperature deviation of 2.8 \({}^{\circ}\)C. The developed method requires no prior camera calibration, TIR image pre-processing, or ground control points, permitting a complete automation of the process.
Keywords: TIR, Thermal inspection, 3D model, 3D reconstruction, Camera alignment, UAV.
## 1 Introduction
### Motivation
Buildings make significant contributions to global energy consumption and greenhouse gas emissions, and the thermal performance of the building envelope has a major impact on overall energy consumption. To efficiently detect thermal bridges (weak points in the thermal insulation), thermal infrared (TIR) imagery is widely used through visual inspections ([PERSON] et al., 2020). However, true-to-scale data enriched with thermal information, such as point clouds or 3D models of building envelopes, are often needed for locating heat leaks and for planning refurbishments of buildings. Compared to RGB images, TIR images have lower geometric and radiometric resolution and a lower dynamic range ([PERSON] et al., 2013), and suffer from unsharp definitions of discontinuities and small details, e.g. blurred edges ([PERSON] et al., 2018). Hence, structure-from-motion (SfM) and multi-view stereo (MVS) based processing alone is often insufficient in terms of spatial resolution and accuracy ([PERSON] and [PERSON], 2019). Various strategies focusing on the fusion of TIR images with RGB images ([PERSON] et al., 2022), point clouds ([PERSON] et al., 2023; [PERSON] and [PERSON], 2018; [PERSON] et al., 2020; [PERSON] et al., 2019) or 3D models ([PERSON] et al., 2011; [PERSON] and [PERSON], 2017) from other sources address these geometric issues. However, the applicability of such approaches is impaired by the complexity of tasks such as TIR camera calibration, operation of high-end data acquisition systems, TIR image pre-processing, and image alignment, as well as by the dependence on available and current data when relying on external sources such as 3D city models.
Early research with experimental multi-head sensor systems for unmanned aerial vehicle (UAV) systems dates back more than 15 years ([PERSON] et al., 2008). With the advent of commercial off-the-shelf UAV systems with multi-sensor heads, RGB and TIR images can easily be acquired simultaneously ([PERSON], 2024). If the UAV additionally includes an RTK-GNSS module, accurate pose priors can be determined. Such UAV systems allow buildings to be captured efficiently in just a few minutes. SfM-based 3D model generation, especially the alignment of TIR images, however, still poses major challenges and requires elaborate workflows ([PERSON] et al., 2022).
In this paper we propose a fully automated process, based on the SfM software Agisoft Metashape (Agisoft LLC, 2023) to create thermal 3D building envelope models. We only use simultaneously captured RGB and TIR images from a UAV and accurate pose priors, without the need for GCPs, TIR image pre-processing or known camera calibration. Our main contributions are:
* A guided image alignment and rig estimation process
* A fully automated process from raw RGB and TIR to a 3D thermal building envelope model
* A qualitative and quantitative analysis of first results
### Related Work
The number of applications and studies using UAVs has increased rapidly in recent years for several reasons: UAVs have become more affordable and dependable, and they can be equipped with different sensors and used for multiple applications. In particular, the use of TIR sensors has been researched in a number of works, ranging from agriculture and forestry ([PERSON] et al., 2017) and heritage asset documentation ([PERSON] et al., 2022) to thermal analysis of buildings and infrastructure ([PERSON] et al., 2020; [PERSON] et al., 2022).
Standard procedures for RGB imagery processing in mapping and 3D modelling now employ modern SfM-MVS workflows ([PERSON] and [PERSON], 2021). In recent years, these approaches have been applied to thermal infrared images to obtain products such as point clouds or 3D models of buildings with thermal information ([PERSON] et al., 2022).
In the approach of [PERSON] et al. (2020), they collected imagery with a UAV equipped with a professional TIR camera, focusing on the 3D reconstruction directly from TIR images. They subsequently applied a thermal correction to the imagery using the temperature deviation of aluminium foil employed for the targets as reference. The analysis with the collected reference measurements showed an absolute temperature accuracy of 5\({}^{\circ}\)C. Large scale approaches cannot rely on single targets of aluminium foil. Therefore, [PERSON] et al. (2024) applied the Thermal Urban Road Normalization algorithm developed by [PERSON] et al. (2014) to interpolate temperature deviation within a scene and normalize TIR imagery, based on the assumption of roads as pseudo invariant objects. Conversely, [PERSON] et al. (2018) exploited TIR imagery collected with a plane to generate a large scale thermal orthomosaic of a city. In their approach they added a pre-processing step, evaluating different radiometric enhancement methods. This improved the effectiveness of the process with SfM algorithms, generating more tie points in the sparse cloud and a slightly higher density of the dense cloud.
[PERSON] et al. (2022) highlighted the challenges of 3D reconstructions directly from TIR imagery, which have a limited field of view (FoV) as well as low spatial resolution and therefore generate incomplete point clouds. In addition, the image alignment is made more arduous by the lack of distinctive features.
To face the previously mentioned challenges associated with TIR images, different approaches have been developed. [PERSON] and [PERSON] (2017) exploited an existing 3D building model and its uncertainties for a co-registration of TIR imagery, refining the exterior orientation parameters of the camera. Other studies instead exploited data collected by newly developed low-cost multi-sensor UAVs, capable of simultaneously capturing visible and thermal infrared images. This offers significant advantages for a combined RGB and TIR image alignment with an improved geometric quality of the 3D reconstruction ([PERSON] et al., 2022; [PERSON] et al., 2023). In these studies, dual-sensor datasets were used to investigate the advantages of an integrated reconstruction approach, evaluating them on single buildings or facades. [PERSON] et al. (2023) implemented a rectification method for TIR images to allow the fusion with RGB images and validated it on a building facade with a fusion error of 5.7 pixels between RGB and TIR images. In contrast, the approach of [PERSON] et al. (2022) implemented a three-step workflow to align the images, using the estimated poses of RGB images to improve the TIR image alignment. However, this approach assumes that no misalignment or lever arm between the two cameras persists. The resulting average checkpoint RMSE showed values about 40% higher than the Ground Sampling Distance (GSD) and in one case inferior values than the 3D reconstruction with pure TIR images. Similarly, [PERSON] et al. (2017) adopted the estimated poses of RGB images to enhance the TIR image alignment. Additionally, they also adopted the previously estimated camera parameters from a camera calibration. In contrast, [PERSON] et al. (2020) showed that thermal 3D models benefit from the combination of RGB and TIR images. They performed image alignment (RGB only) and projected thermal information to 3D points by known pre-calibrated lever arm and misalignment values. 
Despite the added value of a pre-calibration of the thermal camera, low-cost dual-sensors systems are usually unstable, and the internal camera parameters can therefore vary from flight to flight. Thus, for UAV-based data collection a self-calibration is preferable ([PERSON] et al., 2023).
[PERSON] et al. (2021) presented an approach to determine absolute temperature values for a point cloud generated from RGB images. In their investigations, the data collected with a dual-sensor system were combined by removing the image distortions and projecting the TIR images onto the RGB images via transformation matrices; in this way, temperature values can be assigned to the corresponding points of the point cloud. Accordingly, [PERSON] et al. (2022) exploited dual-sensor data to create a point cloud augmented with thermal information. They adopted a fixed camera system to reproject TIR images onto RGB images, exploiting the known relative rotation and translation between the two cameras. The method is computationally expensive and requires an additional visibility test for a correct interpretation. As an alternative, they generated four-channel images by adding the TIR information to the RGB images and recomputing the image alignment with the new images. This resulted in faster image processing, but in lower checkpoint accuracies compared to the first method.
A combined evaluation of RGB and TIR images should enable a more robust and accurate generation of thermal 3D building models. Such geometrically precise 3D models would allow a much better assessment of the entire building envelope and planning of renovation measures.
## 2 Materials and Methods
### Instruments
In this paper, a DJI Mavic 3 Enterprise Thermal (M3T) is used for data acquisition. The UAV is equipped with a multi-sensor camera head, which can capture RGB and TIR images simultaneously. The thermal data is recorded with an accuracy of 2 \({}^{\circ}\)C and an image resolution of 0.3 MP, while the wide-angle camera can capture 48 MP RGB images. However, the UAV system limits the wide-angle camera resolution to 12 MP if RGB and TIR data are collected simultaneously. As [PERSON] and [PERSON] (2023) showed, another limitation is caused by the electronic shutter of the UAV, which directly influences the flight plan and speed due to the rolling shutter effect. However, in comparison to the UAV used by [PERSON] and [PERSON] (2023), a DJI Phantom 4 Pro, the electronic shutter of the M3T performs a full sensor readout two and a half times faster and does not need post-processing compensation in low-speed flight mode, as demonstrated by [PERSON] (2024). In addition, the UAV was equipped with a real-time kinematic (RTK) module, which determines image positions with an accuracy of up to 1 cm + 1 ppm horizontally and 1.5 cm + 1 ppm vertically (DJI, 2023).
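The stated RTK accuracy grows with the baseline to the reference station; a small helper makes the "1 cm + 1 ppm" specification explicit (the function name and the 5 km example baseline are illustrative, not from the paper):

```python
def rtk_accuracy_m(baseline_m: float) -> tuple[float, float]:
    """Horizontal and vertical 1-sigma position accuracy in metres,
    following the spec '1 cm + 1 ppm' / '1.5 cm + 1 ppm'."""
    ppm = baseline_m * 1e-6  # 1 ppm of the baseline length, in metres
    return 0.01 + ppm, 0.015 + ppm

# At a 5 km baseline the 1 ppm term adds 5 mm to each component
h, v = rtk_accuracy_m(5000.0)
```

For the short baselines typical of a building survey, the ppm term is negligible and the fixed 1 cm / 1.5 cm terms dominate.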
To validate absolute temperature values, reference measurements were carried out with a FLIR E40 thermal camera, which enabled independent thermal measurements with an accuracy of 2 \({}^{\circ}\)C (FLIR, 2011).
Along with reference temperature measurements, checkpoints were distributed to enable an accuracy analysis of the results. The targets were measured using a combination of RTK-GNSS rover and total station, allowing the data to be collected with a standard deviation of ±5 cm.
### Study Area
The study area is a detached building in a rural environment. The building is inhabited and heated. It is equipped with a photovoltaic installation on the roof and, according to construction documentation, is poorly insulated. The surrounding terrain has a gentle uniform slope in one direction and just a few elements that obstruct the view, such as trees. As illustrated in Figure 1, seven checkpoints, five on the ground and two on the facade, have been materialised, to provide information about the accuracy of the generated products during analysis.
No GCPs were placed to maintain fully automated data processing without operator intervention. All checkpoints have been realised using special targets made of aluminium foil, which has a low thermal emissivity. As shown in Figure 2, the targets are clearly visible in the RGB and TIR images. The coordinates of all targets on the ground have been determined by multiple RTK-GNSS measurements, while additional checkpoints on the facades have been measured with a total station. All checkpoints are expected to have a standard deviation of less than 5 cm.
### Data Acquisition
The study area was captured with the UAV DJI M3T described in section 2.1. To completely capture the building and its facades, which are partly obscured by eaves and balconies, 362 RGB and TIR images were captured in three configurations: nadir, oblique and close-range (Figure 3). Nadir and oblique images were captured from 30 metres above ground, with an angle of 45\({}^{\circ}\) chosen for the oblique configuration. The close-range images were recorded with an average object distance of 5 metres. Table 1 shows the number of images per configuration and the average GSD for RGB and TIR in each configuration.
The flight mission was conducted in mid-December 2023 in the early morning to ensure minimum solar irradiation on the one hand, and good quality RGB images on the other. At the start of the mission, a thermal sensor calibration was performed to establish the optimal temperature range and emissivity of the measured object. In addition, environmental values, such as distance from the object, humidity, emissivity and reflected temperature, have been registered to automatically adjust the thermal measurements in post processing. The maximum speed of the UAV was set to 3 m/s resulting in a data acquisition time of around 16 minutes. To enable georeferencing, the RTK module of the UAV was used to record precise image poses and to facilitate the image alignment process. At the same time, a FLIR E40 thermal imaging camera was used to collect reference data before and after each flight to analyse the absolute temperature values in order to ensure an average value valid for the entire flight. The measurements were taken at a distance of around 2 meters from the building facade, on window frames or thermal bridges of the construction, which were also visible on UAV images. To this end, three points of interest on the north, west and east facade have been identified and measured.
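The GSD values in Table 1 follow from object distance, pixel pitch and focal length via GSD = d · p / f. The sketch below uses assumed sensor constants (a 12 µm pixel pitch and 9.1 mm focal length) that roughly reproduce the nadir TIR GSD; the actual M3T sensor specification may differ:

```python
def gsd_mm(distance_m: float, pixel_pitch_um: float, focal_mm: float) -> float:
    """Ground sampling distance in mm for a given object distance:
    GSD = distance * pixel_pitch / focal_length."""
    return distance_m * 1000.0 * (pixel_pitch_um / 1000.0) / focal_mm

# Illustrative (assumed) TIR-like sensor at the 30 m nadir flight height
g = gsd_mm(30.0, 12.0, 9.1)   # close to the 39.6 mm nadir TIR GSD in Table 1
```

The same relation explains why the 5 m close-range flights reach millimetre-level GSDs.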
### Guided Image Alignment and Camera Rig Estimation
To create a combined 3D model with thermal information, both RGB and TIR images need to be aligned precisely. Standard image alignment procedures usually consider each image separately. While the alignment of RGB imagery with sufficient overlap has become a standard procedure, several works show that the alignment of TIR images still poses challenges caused by low geometric and radiometric resolution as well as small fields of view ([PERSON], 2021; [PERSON] et al., 2022; [PERSON] et al., 2020). Furthermore, the establishment of matches between RGB and TIR images is likely to fail due to radiometric differences and the lack of distinctive features in the TIR images ([PERSON] et al., 2022). To increase the stability of the image alignment process we define a multi-sensor rig in which the RGB camera is the primary sensor and the TIR camera the secondary sensor, defined by its relative orientation (lever-arm \(T_{TIR}^{RGB}\) and misalignment \(R_{TIR}^{RGB}\)) with respect to the primary sensor. Since the calibration parameters of both the RGB and the TIR camera are unknown, a self-calibration of both cameras is performed. However, the translational part of the relative orientation correlates strongly with the TIR camera's focal length and principal point, meaning that these camera parameters can also be described by shifting the relative orientation accordingly ([PERSON], 2010).
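Under one common convention, the rig parameters fix the secondary camera's world pose given the primary's: the misalignment rotates between the two camera frames, and the lever-arm offsets the projection centre, expressed in the primary (RGB) camera frame. A minimal sketch (the pose values are illustrative, not from the paper):

```python
import numpy as np

def secondary_pose(R_rgb, C_rgb, R_rel, T_rel):
    """World pose of the secondary (TIR) camera from the primary (RGB)
    camera's rotation R_rgb and projection centre C_rgb, given the rig's
    misalignment R_rel and lever-arm T_rel (in the RGB camera frame)."""
    R_tir = R_rgb @ R_rel                 # chain the rotations
    C_tir = C_rgb + R_rgb @ T_rel         # offset the projection centre
    return R_tir, C_tir

# Identity misalignment and the ~2 cm lever-arm used in step (b) below
R_rgb = np.eye(3)
C_rgb = np.array([100.0, 50.0, 30.0])
R_tir, C_tir = secondary_pose(R_rgb, C_rgb,
                              np.eye(3), np.array([-0.02, 0.0, 0.0]))
```

Because every TIR pose is derived from an RGB pose this way, the bundle adjustment only has to estimate one set of rig parameters instead of a free pose per TIR image.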
Initial tests showed that the alignment of the RGB images was successful, but the alignment of the TIR images failed when estimating all unknown parameters during bundle adjustment. According to [PERSON] and [PERSON] (2006) we assume that the
| Configuration | GSD RGB [mm] | GSD TIR [mm] | Total images (RGB / TIR) | Forward / side overlap |
|---|---|---|---|---|
| nadir | 10.8 | 39.6 | 60 | 80% / 75% |
| oblique (centre pixel) | 15.3 | 56.0 | 172 | 70% / 75% |
| close range | 1.8 | 6.6 | 130 | – / 75% |

Table 1: Ground sampling distances and number of images per image type and configuration.
Figure 1: Study object and checkpoint distribution.
Figure 3: Configuration of different flight missions (nadir, oblique and close range) to completely capture the building of interest.
Figure 2: Example of the point materialization. Classical target enhanced with aluminium foil (1) and a cross made of aluminium foil (2) in an RGB (left) and TIR image (right).
bundle adjustment process fails because of the strong correlation of the lever-arm and the intrinsic camera parameters in combination with a weak network geometry due to few key point correspondences in the TIR images.
Under the assumption of accurate image pose priors (0.05 m and 10\({}^{\circ}\) standard deviation for position and rotation components respectively) we developed a guided image alignment and camera rig estimation process consisting of three main steps:
1. **Initial image alignment** Projection centres of the RGB and TIR sensors are assumed identical (lever-arm \(T_{TIR}^{RGB}=(0,0,0)^{T}\)). Estimation of the misalignment and all intrinsic camera parameters of the RGB and TIR sensors during the bundle adjustment, except for the focal length of the TIR sensor, which is fixed to the initial value obtained from the metadata.
2. **Camera optimization and lever-arm estimation** Introduction of the lever-arm \(T_{TIR}^{RGB}=(-0.02,0,0)^{T}\) (manually measured values in metres) with a standard deviation of 0.001 m. Misalignment and intrinsic camera parameters are treated as in the previous step.
3. **Camera optimization and TIR camera calibration** Estimation of all values of the TIR camera.
For steps b) and c) the estimated values of the previous steps are used as approximate values for the current optimization step. The introduced standard deviations for the camera poses and the lever-arm are left unchanged. Additionally, rolling shutter compensation is disabled due to the low-speed flight mode used and the previously described shutter speed of the DJI M3T (section 2.1).
### Temperature Values Correction and Conversion
The TIR images collected with the M3T are saved as radiometric JPEGs, a binary format used only to display the coloured TIR images. To enable further processing, the temperature information encoded in the radiometric JPEG files needs to be converted, as it cannot be read directly. With the help of the DJI Thermal SDK, the data were corrected using the recorded object distance, humidity, emissivity and reflected temperature, and then saved as standard raw files with a single channel containing the corrected absolute temperature value for each pixel (DJI, 2022).
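Such single-channel raw files are headerless, so the image dimensions and data type must be known to read them back. A minimal round-trip sketch, where the 640×512 resolution and float32 Celsius layout are assumptions for illustration, not the SDK's documented format:

```python
import os
import tempfile
import numpy as np

# Synthetic single-channel temperature grid, one value per pixel
h, w = 512, 640
temps = np.full((h, w), 7.5, dtype=np.float32)   # degrees Celsius
temps[100:120, 200:260] = 14.0                   # a warm "thermal bridge"

path = os.path.join(tempfile.gettempdir(), "frame_0001.raw")
temps.tofile(path)                               # headerless raw dump

# Reading requires the known width/height, since raw files carry no header
restored = np.fromfile(path, dtype=np.float32).reshape(h, w)
```

Keeping absolute temperatures in a one-channel array avoids the lossy colour-mapping of display JPEGs when texturing the 3D model.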
### Process Automation
In the last part of our approach, we focused on the automation of the whole workflow depicted in Figure 4, integrating all the processing steps within a script implemented in Python. To this end, we exploited the Application Programming Interface (API) of the Agisoft Metashape software (Agisoft LLC, 2023) to automate the image alignment and camera rig estimation, as well as the dense point cloud and 3D model generation. The integration of the DJI Thermal SDK in the script allowed the automation of the thermal image conversion and correction. Finally, the original TIR images were replaced by the converted ones and automatically processed for the texture generation, again with the Agisoft Metashape API. This enabled the fully automated generation of a realistic, high-resolution thermal 3D building envelope model with absolute temperature values.
### Point Cloud and Texture Evaluation
After defining the camera rig and image alignment strategy, both the geometric accuracy and the absolute temperature accuracy of the results were evaluated. For the geometric analysis, the dense point cloud resulting from the aligned RGB images was used as reference for a comparison with both the point cloud from the aligned TIR images and the point cloud resulting from the combined alignment of TIR and RGB images. To this end, the same point cloud section was extracted from all three point clouds and the deviation from the reference was computed.
Due to the lower geometric resolution, the image alignment of TIR images is significantly less accurate than the alignment of RGB images ([PERSON] et al., 2018). In order to investigate the impact on the final product, a 3D mesh was generated using in turn the combined point cloud, RGB and TIR images, and the point cloud resulting from RGB images only. The resulting meshes were textured with TIR images aligned with the combined approach and with RGB images from standard alignment, respectively. The measured checkpoints attached to the facades were used to calculate the deviation of the TIR texture from the RGB texture (reference).
Finally, the evaluation of the absolute temperature values aims to compare the thermal texture generated from the corrected TIR images with the reference values of the FLIR E40 thermal camera. For this analysis, the measurements of the three reference points on the building's facades were compared with the measurements in the UAV's TIR imagery. In this context, each thermal value was calculated as the average over six UAV images, whereby for each image the mean of all temperature values within a radius of six pixels was used.
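The averaging scheme just described — the mean of all values within a six-pixel radius, averaged over six images — can be sketched as follows (the image size, reference-point location and noise level are synthetic):

```python
import numpy as np

def mean_in_radius(img, row, col, radius):
    """Mean of all pixel values within `radius` pixels of (row, col)."""
    rr, cc = np.ogrid[:img.shape[0], :img.shape[1]]
    mask = (rr - row) ** 2 + (cc - col) ** 2 <= radius ** 2
    return float(img[mask].mean())

# Average one reference point over six images, as in the analysis
rng = np.random.default_rng(0)
images = [np.full((512, 640), 8.0) + rng.normal(0, 0.1, (512, 640))
          for _ in range(6)]
estimate = np.mean([mean_in_radius(im, 250, 320, 6) for im in images])
```

Averaging over a small neighbourhood and several images suppresses per-pixel sensor noise before comparing against the hand-held reference measurements.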
Figure 4: Workflow of our automated photogrammetric process.
## 3 Experiments and Results
### Guided Image Alignment and Rig Estimation
Our proposed image alignment and camera rig estimation process is evaluated by measuring all seven checkpoints (Figure 1) within the aligned images. This is performed for both TIR and RGB images independently. The residuals in checkpoint coordinates are calculated after each of the three steps of the proposed approach: a) initial camera alignment (with identical projection centres for TIR and RGB images, and fixed focal length of the TIR camera); b) camera optimization and lever-arm estimation; c) camera optimization and TIR camera calibration. Table 2 shows the average residuals over all seven checkpoints for TIR and RGB images respectively. The residuals are provided as 2D and 3D coordinate differences in object space and as pixel differences in image space.
Table 2 shows that the residuals of the checkpoints measured in TIR images are lower after our proposed guided image alignment and camera-rig estimation process. However, estimating the lever-arm without optimizing the focal length of the TIR camera results in slightly higher residuals than assuming that both projection centres are identical. In contrast, the residuals of checkpoints measured in RGB images do not change after the initial alignment process, meaning that the estimation of the lever-arm, misalignment and intrinsic camera parameters has no influence on the RGB image poses.
### Point Cloud and Texture Evaluation
#### 3.2.1 Point Cloud Evaluation
As introduced in section 2.7, the three resulting point clouds, from RGB images only, from TIR images only and from the combination of RGB and TIR images, were compared. In this comparison the first one served as reference to examine the deviations of the latter two.
The geometric accuracy analysis of the combined point cloud showed that around 40% of the points show less than 2 cm deviation from the reference. 90% of the points show differences of up to 5 cm from the reference, resulting in a total RMSE of 0.13 m. Similarly, the comparison of the point cloud generated with aligned TIR images only with the reference point cloud resulted in a deviation of 2 cm for 45% of the points. However, only 77% of the points showed a deviation from the reference point cloud of less than 5 cm, resulting in a higher total RMSE of 0.19 m.
The point cloud resulting from the combined method showed a remarkably high point density of 2102 pts/m\({}^{2}\), comparable to the point cloud resulting from pure RGB image alignment with 2154 pts/m\({}^{2}\). In contrast, the point cloud resulting from TIR images only has significantly higher noise and about one third of the density of the reference point cloud (642 pts/m\({}^{2}\)).
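The statistics reported above — the share of points within 2 cm and 5 cm of the reference, and the total RMSE — correspond to nearest-neighbour cloud-to-cloud distances. A brute-force sketch on synthetic clouds (real point clouds would need a k-d tree for speed):

```python
import numpy as np

def cloud_deviation(cloud, reference):
    """Per-point nearest-neighbour distance to the reference cloud, plus
    the share of points within tolerance bands and the total RMSE."""
    # Brute-force NN search; fine for small clouds only
    d = np.sqrt(((cloud[:, None, :] - reference[None, :, :]) ** 2)
                .sum(-1)).min(axis=1)
    stats = {
        "within_2cm": float((d < 0.02).mean()),
        "within_5cm": float((d < 0.05).mean()),
        "rmse": float(np.sqrt((d ** 2).mean())),
    }
    return d, stats

rng = np.random.default_rng(1)
ref = rng.uniform(0, 1, (500, 3))                 # reference cloud [m]
noisy = ref + rng.normal(0, 0.01, ref.shape)      # ~1 cm per-axis noise
_, stats = cloud_deviation(noisy, ref)
```

The same three numbers computed on the real clouds yield the 40% / 90% / 0.13 m figures quoted for the combined point cloud.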
#### 3.2.2 Texture Evaluation
The investigation showed deviations of the TIR texture from the RGB texture averaging 5.5 cm in position (2D) and 3 cm in height. The worst value was encountered at checkpoint CIKP5, placed on the west facade, with a difference from the reference (RGB texture) of 10 cm in position. As shown in section 3.1, the accuracy of the TIR image alignment, with residuals of 4.9 pixels, is significantly lower than that of the RGB images with 0.95 pixels (Table 2). Considering the GSD of the close-range images, 6.6 mm for TIR and 1.8 mm for RGB images (Table 1), the obtained differences can be explained by the uncertainty of the image alignment and the resulting deviations in object space.
In addition, as shown in Figure 5, a visual comparison of the two textures was carried out. It showed that the structure of the photovoltaic system and the different insulation layers between the ground floor and the first floor are easily recognisable and can be correctly located in the 3D model. Despite a maximum deviation of 10 cm, the overall models showed satisfactory results for the application of a thermal 3D model.
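The error-propagation argument above (image residual times GSD gives the expected object-space error) can be checked numerically. The residuals (4.9 px, 0.95 px) and GSDs (6.6 mm, 1.8 mm) are the paper's values; the helper name is ours:

```python
def object_space_uncertainty(residual_px, gsd_m):
    """First-order estimate: image-space residual [pixels] scaled by the
    ground sampling distance [m/pixel] gives the planimetric error [m]."""
    return residual_px * gsd_m

tir_err = object_space_uncertainty(4.9, 0.0066)   # TIR: roughly 0.032 m
rgb_err = object_space_uncertainty(0.95, 0.0018)  # RGB: roughly 0.002 m
```

The ~3 cm expected from TIR alignment alone is on the order of the observed 5.5 cm average texture offset, with the remainder attributable to stitching and orientation errors.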
### Absolute Temperature Values
In the analysis of the accuracy of absolute temperature values, the reference temperature values of the FLIR E40 were compared with those calculated from the M3T imagery, as described in section 2.5. The differences, summarized in Table 3, show no systematic deviation and lie within the instrument accuracy of 2.8 °C (one standard deviation).
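The Table 3 comparison can be sketched as a simple tolerance check. The tolerance of 2.8 °C is the instrument accuracy quoted above; the facade averages are taken from Table 3, with the east-facade UAV value assumed to be 0.8 °C, consistent with the reported ΔT of 1.3 °C (the printed "0 °C" appears garbled):

```python
def within_instrument_tolerance(t_ref, t_actual, tol=2.8):
    """Difference between reference and UAV-derived temperature [deg C],
    and whether it stays inside the instrument accuracy `tol`."""
    delta = round(t_ref - t_actual, 1)
    return delta, abs(delta) <= tol

# Facade averages (reference FLIR E40, actual UAV M3T):
facades = {"east": (2.1, 0.8), "west": (-3.4, -0.6), "north": (-0.5, -3.2)}
report = {name: within_instrument_tolerance(r, a)
          for name, (r, a) in facades.items()}
```

All three facades fall within the tolerance, matching the paper's conclusion of no systematic deviation.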
### Discussion
The combined processing of RGB and TIR images is beneficial for image alignment as it can address the challenges posed by TIR imagery. It avoids radiometric enhancement methods to improve TIR image alignment and prevents applying two transformations to obtain the original temperature values ([PERSON] et al., 2018). An additional challenge arises from the differing ideal conditions required for capturing RGB and TIR data simultaneously. RGB image quality depends on adequate natural light, requiring daylight conditions to capture high-contrast, sharp visuals. In contrast, TIR imaging benefits from minimal solar radiation, as thermal readings become more reliable and less influenced by external heat sources such as direct sunlight. This discrepancy between real and ideal conditions presents a trade-off when collecting integrated RGB-TIR data, as optimisation for one sensor may compromise the other. It restricts data collection to around twilight periods, when ambient light is low enough to avoid substantial heat from solar radiation, yet sufficient for capturing usable RGB images.

| Imagery | TIR object space 2D / 3D [m] | TIR image space [pix] | RGB object space 2D / 3D [m] | RGB image space [pix] |
|---------|------------------------------|-----------------------|------------------------------|-----------------------|
| (a) | 0.101 / 0.245 | 5.29 | 0.030 / 0.055 | 0.95 |
| (b) | 0.101 / 0.251 | 5.34 | 0.030 / 0.055 | 0.95 |
| (c) | 0.095 / 0.232 | 4.92 | 0.030 / 0.055 | 0.95 |

Table 2: Residuals of checkpoint observations in TIR and RGB images after each step of the proposed image alignment process.

Figure 5: Sections of RGB texture (in the background) and thermal texture (circular, in the centre): roof view (left), east facade (right).

| Reference point | FLIR E40 (reference), avg ± std. dev. | UAV M3T (actual), avg ± std. dev. | ΔT (reference - actual) |
|-----------------|---------------------------------------|-----------------------------------|-------------------------|
| East facade | 2.1 °C ± 0.8 | 0.8 °C ± 0.1 | 1.3 °C |
| West facade | -3.4 °C ± 0.7 | -0.6 °C ± 0.3 | -2.8 °C |
| North facade | -0.5 °C ± 0.3 | -3.2 °C ± 0.2 | 2.7 °C |

Table 3: Absolute temperature differences between average reference value (FLIR E40 thermal camera) and average value of UAV (M3T) TIR imagery.
A further consideration is the choice of sensor used for flight planning, which directly affects either the TIR or the RGB imagery. In this study, the RGB sensor was selected for mission planning. While this ensured adequate overlap and coverage in the RGB images, it led to lower overlap in the TIR images due to their narrower FoV, consequently impacting the image alignment accuracy and point cloud density of the TIR data. Using the TIR sensor for mission planning, on the other hand, results in capturing more RGB images, increasing data volume and processing demands.
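The overlap penalty of the narrower TIR FoV can be illustrated with simple nadir footprint geometry. The altitude, trigger spacing and FoV values below (84° wide RGB, 61° TIR, close to nominal M3T figures) are illustrative assumptions, not flight parameters from the paper:

```python
import math

def forward_overlap(fov_deg, altitude_m, trigger_spacing_m):
    """Forward overlap of a nadir camera: the share of the along-track
    footprint that two consecutive images have in common."""
    footprint = 2 * altitude_m * math.tan(math.radians(fov_deg) / 2)
    return max(0.0, 1.0 - trigger_spacing_m / footprint)

# Same flight plan (30 m altitude, one image every 8 m) for both sensors:
rgb = forward_overlap(84, 30, 8)  # wider FoV -> larger footprint -> higher overlap
tir = forward_overlap(61, 30, 8)  # narrower FoV -> lower overlap
```

With identical trigger spacing, the narrower sensor always ends up with less overlap, which is why planning on the RGB camera degrades the TIR block geometry.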
Integrating all processing steps within a single script automates the entire workflow, including guided image alignment and camera rig estimation, absolute temperature value correction and conversion, and 3D modelling and texturing, thus eliminating time-consuming manual interactions. The automated pipeline leverages RTK-GNSS positioning instead of GCPs, simplifying and speeding up data acquisition and processing. However, the reliance on RTK-GNSS technology introduces a dependency in the automation process, which could limit the application to data from UAVs equipped with RTK-GNSS.
The proposed guided image alignment and camera rig estimation steps provided an alignment accuracy of approximately 5 pixels, slightly higher than the 3-4 pixels accuracy obtained in GCP-supported workflows ([PERSON] et al., 2018). While adequate for generating visually accurate thermal 3D models, this variance suggests that the use of GCPs could further benefit the alignment accuracy. However, special coded targets should be employed, with the intention of guaranteeing the complete automation of the process.
The data fusion allows generating a thermal point cloud with the higher accuracy and density of an RGB point cloud. The inclusion of both RGB and TIR imagery enhances the quality of the 3D point cloud, not only matching the density of the standard RGB point cloud but also keeping the deviation from the reference point cloud below 5 cm for 90% of the points. The total RMSE of 0.13 m compares well with results obtained with other methods, which yielded an RMSE between 0.2 and 0.22 m ([PERSON] et al., 2020). However, using the RGB point cloud as reference introduces a dependency between the TIR and RGB data, since both were collected simultaneously with the same platform. A feasible alternative for the geometric analysis is an independent measurement system, such as a terrestrial laser scanner, which can provide additional, unrelated information about the geometric accuracy of the approach.
The deviations between RGB and TIR textures presented in section 3.3 most likely result from a combination of inaccurate exterior orientation parameters of the TIR sensor and stitching errors due to the low geometric resolution and dynamic range of TIR imagery. The accuracy of the thermal textures could be further increased by a more precise approach for estimating the relative orientation (misalignment and lever arm) and by a compromise in flight planning that ensures greater overlap of the TIR images.
The evaluation of the absolute temperature values in section 3.4 supports the method's applicability for thermal assessment, as it allows for effective representation of temperature variation across building surfaces with an accuracy higher than other approaches ([PERSON] et al., 2022; [PERSON] et al., 2020). It also demonstrates that the temperature correction with calibration values is a crucial component for obtaining accurate thermal values.
## 5 Conclusions and Outlook
In this study, we presented an automated workflow for creating 3D thermal models using data from a dual-camera UAV system equipped with an RTK-GNSS module. By leveraging simultaneously captured RGB and TIR images from a multi-camera head and eliminating the need for GCPs, we achieved a fully automated process. The implemented three-step method for guided image alignment and camera rig estimation enabled a combined processing of TIR and RGB images, resulting in alignment residuals of approximately 5 pixels at measured checkpoints. Furthermore, from these aligned images, a TIR point cloud with enhanced density was generated, with a deviation from the reference under 5 cm for 90% of the points and a total RMSE of 0.13 m. Finally, the temperature corrections applied to the TIR images produced thermal textures with absolute temperature values within the instrument accuracy of 2.8 °C. While differences in optimal conditions for capturing RGB and TIR imagery pose limitations for simultaneous data collection, the integration of TIR and RGB datasets enhances the visualization and analysis of building thermal performance.
Future research could focus on refining alignment accuracy by developing automated GCPs measurement methods with coded targets and improving the estimation of camera calibration, lever-arm and misalignment of multi-sensor heads. Additionally, validating this approach with an independent system, such as terrestrial laser scanning, could provide further insights into its geometric accuracy. Incorporating facade-mounted reference sensors during data acquisition could also enhance the reliability of absolute temperature values. These advancements would further improve the method's accuracy.
## References
* Agisoft LLC (2023) Agisoft LLC, 2023. Agisoft Metashape Professional, Version 2.1.2.
* [PERSON] (2021) [PERSON], 2021. Photogrammetric analysis of multispectral and thermal close-range images. Mersin Photogrammetry Journal, 3, 29-36. doi:10.53093/mephoj.919916.
* [PERSON] et al. (2011) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], 2011. Mapping Infrared Data on Terrestrial Laser Scanning 3D Models of Buildings. Remote Sensing, 3(9), 1847-1870. doi:10.3390/rs3091847.
* [PERSON] and [PERSON] (2023) [PERSON], [PERSON], [PERSON], 2023. Experimental Tests and Simulations on Correction Models for the Rolling Shutter Effect in UAV Photogrammetry. Remote Sensing, 15(9), 2391. doi:10.3390/rs15092391.
* [PERSON] et al. (2018) [PERSON], [PERSON], [PERSON], 2018. Structure from Motion for aerial thermal imagery at city scale: Pre-processing, camera calibration, accuracy assessment. ISPRS Journal of Photogrammetry and Remote Sensing, 146, 320-333. doi:10.1016/j.isprsjprs.2018.10.002.
* [PERSON] (2021) [PERSON], [PERSON], 2021. Accuracy of Unmanned Aerial Systems Photogrammetry and Structure from Motion in Surveying and Mapping: A Review. J Indian Soc Remote Sens, 49(8), 1997-2017. doi:10.1007/s12524-021-01366-x.
* DJI Mavic 3 Enterprise - DJI Enterprise. https://enterprise.dji.com/mavic-3-enterprise/photo (10 May 2024).
* DJI (2022) DJI, 2022. DJI Thermal SDK, Version 1.4.
* Dlesk and Vach (2019) [PERSON], [PERSON], [PERSON], K., 2019. Point Cloud Generation of a Building from Close Range Thermal Images. Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci., XLII-5/W2, 29-33. doi:10.5194/isprs-archives-XLII-5-W2-29-2019.
* Dlesk and Vach (2022) [PERSON], [PERSON], [PERSON], K., [PERSON], K., 2022. Photogrammetric Co-Processing of Thermal Infrared Images and RGB Images. Sensors, 22(4), 1655. doi:10.3390/s22041655.
* [PERSON] et al. (2023) [PERSON], [PERSON], [PERSON], 2023. Multi-modal image matching to colorize a SLAM based point cloud with arbitrary data from a thermal camera. ISPRS Open Journal of Photogrammetry and Remote Sensing, 9, 100041. doi:10.1016/j.ophoto.2023.100041.
* FLIR (2011) FLIR, 2011. FLIR E-Serie. https://www.flir-infrademaras.de/WebRoot/Store12/Shops/61587589/4DFB9F3E/2D90/6279C31D/CO829BA/0647/FLIR_series.pdf (16 October 2024).
* [PERSON] and [PERSON] (2018) [PERSON], [PERSON] [PERSON], 2018. Mobile thermal mapping for matching of infrared images with 3D building models and 3D point clouds. Quantitative InfraRed Thermography Journal, 1-19. doi:10.1080/17686733.2018.1455129.
* [PERSON] and [PERSON] (2017) [PERSON], [PERSON], [PERSON], 2017. Camera pose refinement by matching uncertain 3D building models with thermal infrared image sequences for high quality texture extraction. ISPRS Journal of Photogrammetry and Remote Sensing, 132, 33-47. doi:10.1016/j.isprsjprs.2017.08.006.
* [PERSON] et al. (2020) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], R.K., 2020. A photogrammetric approach to fusing natural colour and thermal infrared UAS imagery in 3D point cloud generation. International Journal of Remote Sensing, 41(1), 211-237. doi:10.1080/01431161.2019.1641241.
* [PERSON] et al. (2019) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], 2019. Fusion of thermal imagery with point clouds for building facade thermal attribute mapping. ISPRS Journal of Photogrammetry and Remote Sensing, 151, 162-175. doi:10.1016/j.isprsjprs.2019.03.010.
* [PERSON] et al. (2021) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], 2021. An optimized approach for generating dense thermal point clouds from UAV-imagery. ISPRS Journal of Photogrammetry and Remote Sensing, 182, 78-95. doi:10.1016/j.isprsjprs.2021.09.022.
* [PERSON] (2010) [PERSON], 2010. Erweiterte Verfahren zur geometrischen Kamerakalibrierung in der Nahbereichsphotogrammetrie (Habilitation Thesis). Deutsche Geodätische Kommission, Reihe C, Nr. 645.
* [PERSON] et al. (2013) [PERSON], [PERSON] [PERSON], [PERSON] [PERSON], 2013. Geometric Calibration of Thermographic Cameras, in: [PERSON], [PERSON] (Eds.), Thermal Infrared Remote Sensing, Remote Sensing and Digital Image Processing. Springer Netherlands, Dordrecht, pp. 27-42. doi:10.1007/978-94-007-6639-6_2.
* [PERSON] et al. (2017) [PERSON], [PERSON], [PERSON] [PERSON], 2017. Optimizing the Processing of UAV-Based Thermal Imagery. Remote Sensing, 9(5), 476. doi:10.3390/rs9050476.
* Opportunities for Very High Resolution Airborne Remote Sensing. Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci., 37.
* [PERSON] et al. (2022) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], 2022. SfM-Based 3D Reconstruction of Heritage Assets Using UAV Thermal Images. The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, XLIII-B1-2022, 399-406. doi:10.5194/isprs-archives-XLIII-B1-2022-399-2022.
* [PERSON] et al. (2014) [PERSON], [PERSON], [PERSON] [PERSON], [PERSON] [PERSON], 2014. Transforming Image-Objects into Multiscale Fields: A GEOBIA Approach to Mitigate Urban Microclimatic Variability within H-Res Thermal Infrared Airborne Flight-Lines. Remote Sensing, 6(10), 9435-9457. doi:10.3390/rs6109435.
* [PERSON] et al. (2022) [PERSON], [PERSON], [PERSON], 2022. Thermal point clouds of buildings: A review. Energy and Buildings, 274, 112425. doi:10.1016/j.enbuild.2022.112425.
* [PERSON] and [PERSON] (2006) [PERSON], [PERSON] [PERSON], 2006. Digital camera calibration methods: Considerations and comparisons. International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, 36(5), 266-272. doi:10.3929/ETHZ-B-000158067.
* [PERSON] et al. (2024) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], 2024. Application, Adaption and Validation of the Thermal Urban Road Normalization Algorithm in a European City. Workshop on Visualisation in Environmental Sciences (EnviVis). doi:10.2312/ENVIRVIS.20241135.
* [PERSON] (2024) [PERSON], 2024. Multi-sensor data fusion for autonomous flight of unmanned aerial vehicles in complex flight environments. Drone Syst. Appl., 12, 1-12. doi:10.1139/dsa-2024-0005.
* [PERSON] et al. (2023) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], 2023. Thermal-textured BIM generation for building energy audit with UAV image fusion and histogram-based enhancement. Energy and Buildings, 301, 113710. doi:10.1016/j.enbuild.2023.113710.
* [PERSON] et al. (2020) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], 2020. A Thermal Performance Detection Method for Building Envelope Based on 3D Model Generated by UAV Thermal Imagery. Energies 13, 6677. doi:10.3390/en13246677.
* [PERSON] (2024) [PERSON], 2024. DJI Mavic 3 has no mechanical shutter. Sensor readout speed explained. https://www.pix-pro.com/blog/dji-mavic-3-rolling-shutter (11 May 2024).
isprs | Automatic Generation of High-Resolution Thermal 3D Building Envelope Models exploiting UAV Imagery | Elia Ferrari, Jonas Meyer, Andreas Koch, Stephan Nebiker | https://doi.org/10.5194/isprs-archives-xlviii-2-w8-2024-155-2024 | 2024 | CC-BY
Observation of Submarine Volcanic Activities at the Kaitoku Seamount and the Hukutoku-Oka-no-Ba Seamount by the Airborne Thermal Infrared Radiometer

[PERSON] and [PERSON]

Hydrographic Department, Maritime Safety Agency, Japan

Commission VII
1. Introduction

The Izu-Ogasawara Ridge is an island arc along the Izu-Ogasawara Trench, where the Pacific Plate subducts beneath the Philippine Sea Plate. Many active volcanic islands and submarine volcanoes lie in a line on the Izu-Ogasawara Ridge and form a typical volcanic front. New volcanoes have been discovered on the front at points where no volcanic activity had been reported since the dawn of history.

Recently, a new volcanic eruption, the first there since the dawn of history, was recognized at the sea bottom 400 meters east of the south point of Nisi-no-Sima (27°15'N, 140°53'E), westward of the Ogasawara Islands, in May, 1973. This eruption formed a new island called "Nisinosima-Sin To" in September, 1973.
Moreover, a new submarine volcanic activity broke out at the Minami-Hiyosi Seamount (23°35.3'N, 141°54.3'E), situated between Minami-Io Sima and Uracas Island, in August, 1975. This activity discharged discolored water until March, 1978. In addition, discolored water was reported at the Nikko Seamount (23°04.5'N, 142°18.5'E) in July, 1979, and an outflow of pumice caused by submarine eruption was found at the Hukuzin Seamount (21°56.0'N, 143°28.0'E) in August, 1951. Eruptions of the Hukuzin Seamount have often occurred since then.

Figure 1 shows the distribution of active volcanoes in the Nanpo Syoto.
2. Kaitoku Seamount
(1) The progress of the eruption

Submarine eruption occurred at 26°07.3'N, 141°06.1'E, 75 km north of Kita-Io Sima. As this area is a good fishing ground, many fishing boats of our country work at this ground. A fishing boat named Kaitoku-Maru (85 tons) discovered shoals at 26°08.8'N, 141°06.6'E in April, 1927 and at 26°03.1'N, 140°56.0'E in June, 1927. The two shoals have been called the Higasi-Kaitoku-Ba and the Nisi-Kaitoku-Ba, respectively. This area has no reliable report of eruption, except one report at 26°00'N, 140°46'E in 1543; consequently, the Nisi-Kaitoku-Ba is supposed to be a submarine volcano. Accordingly, the eruption of this time at the Higasi-Kaitoku-Ba is the first confirmed submarine volcanic activity at this area since the dawn of history. Submarine topographic data in this area are very few; however, the feature is supposed to be one seamount having two summits, east and west. Therefore the Maritime Safety Agency named it the Kaitoku Seamount. Figure 2 shows the topography of the Kaitoku Seamount and the location of the spouting point. The contour interval is 500 meters, and the cross section indicates the A-A' line in the topography (vertical enlargement 5:1). The progress of the eruption at the Kaitoku Seamount is as follows:
| Date | Observer | State of Activities |
|------|----------|---------------------|
| March 7 | Maritime Self Defence Force YS-11 | Discovered a discolored water. |
| March 8 | Maritime Safety Agency YS-11 | Recognized the discolored water. |
| March 8 | Maritime Self Defence Force P3C | About twenty volcanic blocks flew out to the sea surface; steam rose into the air from the blocks. |
| March 9 | Maritime Safety Agency YS-11 | The temperature of the spouting point was 1.5 °C higher than the surrounding sea water, measured by the thermal infrared radiometer. (Photo 1) |
| March 12 | Maritime Self Defence Force P2J | About ten volcanic blocks on the sea surface and steam from the blocks. |
| March 13 | Maritime Safety Agency YS-11 | The temperature of the spouting point was more than 2 °C higher than the surrounding sea water, measured by the thermal infrared radiometer. (Photo 2) |
| March 14 | Patrol Vessel "Uraga" and her helicopter, Maritime Safety Agency | Measured the position of the spouting point by [PERSON]: 26°07.3'N, 141°06.1'E, and took a sample of discolored water. |
| March 15 | Patrol Vessel "Uraga", Maritime Safety Agency | Recognized the drifting of volcanic blocks and rising steam from the blocks. (Photo 3) |
| March 17 | Patrol Vessel "Uraga", Maritime Safety Agency | (Photo 4 and Photo 5) |
| March 29 | Maritime Safety Agency YS-11 | The temperature of discolored water was 0.4 °C higher than the surrounding sea water, measured by the thermal infrared radiometer. (Photo 6) |
(2) Observation

The temperature measurements of the spouting point and discolored water were made on March 9, March 13 and March 29, 1984 by the airborne thermal infrared radiometer AGA 780 installed on an aircraft YS-11 belonging to the Maritime Safety Agency. Table 1 shows the specifications of the thermal infrared radiometer AGA 780.
In temperature measurements using a thermal infrared sensor, corrections are needed for the emissivity of the objects, for absorption by water vapour in the air, and so on. We made no corrections, however, because it was very difficult to obtain sea truth and to correct the observed data for the discolored water caused by the submarine volcanic activities.
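For reference, the emissivity correction that the authors deliberately skipped looks, in its textbook single-band form based on the Stefan-Boltzmann relation, like the sketch below. This is a simplification of real camera radiometry (which is band-limited), and the parameter names are ours:

```python
def emissivity_corrected_temp(t_sensor_c, emissivity, t_reflected_c=20.0):
    """Remove the reflected ambient component from an apparent temperature
    and rescale by the emissivity. All temperatures in deg C; assumes
    radiance proportional to T^4 (Stefan-Boltzmann simplification)."""
    k = 273.15
    t_s, t_r = t_sensor_c + k, t_reflected_c + k
    t_obj4 = (t_s ** 4 - (1 - emissivity) * t_r ** 4) / emissivity
    return t_obj4 ** 0.25 - k
```

With emissivity 1 the correction is the identity; for sea water (emissivity near 0.98) and an object warmer than its surroundings, the corrected temperature is slightly higher than the apparent one, so skipping the correction biases warm anomalies low.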
A. March 9, 1984

Figure 3 shows a digital analyzed thermal map at 1030 on March 9, 1984 (altitude 300 meters). Discolored water was spouting from under the sea surface at the measurement instant. The higher-temperature part was recognized as a circle (diameter about 100 meters), more than 1.5 °C higher than the surrounding sea water. Within the circle, the temperature changed from that of the sea water toward the centre, and the thermal distribution was very complex.

Figure 4 shows a digital analyzed thermal map at 1035 on March 9, 1984 (altitude 300 meters). The circular higher-temperature part was changing to an egg shape, and the temperature difference between the circle and the sea water became smaller because of diffusion of sea water.
B. March 13, 1984
Figure 5 shows a digital analyzed thermal map at 620 on March 13, 1984 (altitude 1000 meters); the measurement was made just after the eruption occurred under the spouting point. The spouting point was more than 2 °C higher than the surrounding sea water (because a 2 °C measurement range was used, the data were saturated).
Figure 6 shows a digital analyzed thermal map at 625 on March 13, 1984 (altitude 1000 meters). The thermal distribution was much different from that of figure 5, though only 5 minutes had elapsed. The higher-temperature part was small and the temperature differences were only about 1 °C. The thermal distribution was monotonous in comparison with that of figure 5.
C. March 29, 1984
Figure 7 and figure 8 show digital analyzed thermal maps at 1256 and 1303 on March 29, 1984 (altitude 700 meters each). Volcanic activities of the Kaitoku Seamount had become low and the spouting point was not clear (refer to Photo 6), so there was no circle of high temperature such as had been detected on March 9 and 13. The temperature of discolored water was only 0.4 °C higher than the surrounding sea water, and the thermal distribution of the discolored water was very monotonous.

Figure 3. Digital Analyzed Thermal Map at 1030 on Mar. 9, 1984

Figure 5. Digital Analyzed Thermal Map at 620 on Mar. 13, 1984

Figure 6. Digital Analyzed Thermal Map at 625 on Mar. 13, 1984

Figure 7. Digital Analyzed Thermal Map at 1256 on Mar. 29, 1984
(3) Results

Measurements of the thermal distribution of discolored water caused by the volcanic activities of the Kaitoku Seamount were carried out on March 9, March 13 and March 29, 1984.
Results are as follows:
- The temperature of discolored water was about 0.5 °C higher than the surrounding sea water.
- The temperature of the spouting point was more than 2 °C higher than the surrounding sea water.
- The thermal distribution of discolored water was very complex just after the spouting, then became monotonous as time elapsed.
- The thermal distribution of discolored water changed fast, especially just after the spouting; the pattern 5 minutes after the spouting was entirely different. The analyzed thermal distribution may therefore vary with the measurement time.
- There was a low-temperature part adjacent to the spouting point, which meant that sea water from below was pulled up by the spouting discolored water.
- High-temperature water from the spouting diffused like a concentric circle.
3. Hukutoku-Oka-no-Ba Seamount
(1) The progress of the eruption
The birth of a new volcanic island was recognized on 20 January, 1986 at the summit of a seamount called the Hukutoku-Oka-no-Ba Seamount, a submarine volcano on the southern part of the Sitito-Io Sima Ridge. The Hukutoku-Oka-no-Ba is located at 24°17.0'N, 141°29.1'E, NNW 2.6 nautical miles off Minami-Io Sima, Ogasawara Islands. This submarine volcano has formed a volcanic island twice, in November, 1904 and in January, 1914; in both cases the new island vanished within one year. The Hukutoku-Oka-no-Ba Seamount is one of the most active submarine volcanoes in Japan, and discolored water has been found incessantly for more than ten years. Figure 9 shows the topography around the Hukutoku-Oka-no-Ba Seamount surveyed by the survey vessel "Takuyo" in 1986. The contour interval is 100 meters, and the two cross sections indicate the B-B' and C-C' lines in the topography (vertical enlargement 5:1). The progress of the new-born volcanic island at the Hukutoku-Oka-no-Ba Seamount is as follows:

Figure 8. Digital Analyzed Thermal Map at 1303 on Mar. 29, 1984

Figure 9. Topography of the Io Sima to Minami-Io Sima and Location of Spouting Point
| Date | Observer | State of Activities |
|------|----------|---------------------|
| Jan. 19 | Maritime Self Defence Force | Discovered smoke above the Hukutoku-Oka-no-Ba from Io Sima. |
| Jan. 20 | Survey Vessel "Takuyo", Maritime Safety Agency | Recognized a new volcanic island in eruption. (Photo 7) |
| Jan. 21-22 | Patrol Vessel "Uraga", Maritime Safety Agency | Recognized that the eruption of the new island had come to termination. |
| Mar. 26 | Maritime Safety Agency YS-11 | Recognized that the new island had vanished. (Photo 12) |

After Jan. 22, the new island became smaller owing to wave erosion.
(2) Observation

The temperature measurements of the new-born volcanic island were made on January 21, January 23, January 29 and February 14, 1986 by the airborne thermal infrared radiometer AGA 780 on the YS-11. We made no data corrections, for the same reason as at the Kaitoku Seamount.

A. January 21, 1986

Figure 10 shows a digital analyzed thermal map at 1331 on January 21, 1986 (altitude 2300 meters). A small eruption had occurred just before the measurement, so the circular pattern in the centre of the figure indicates volcanic smokes, and the surface temperature of the new-born island, shown below and under the smokes in the figure, was lower than that of the smokes. The temperature of the smokes was 25.7 °C-M7.8 °C, the surface of the new island was 28.2 °C-31.8 °C and the sea water was 23.9 °C.
Figure 10. Digital Analyzed Thermal Map at 1331 on Jan. 21, 1986

B. January 23, 1986

Figure 11 shows a digital analyzed thermal map at 1334 on January 23, 1986 (altitude 1000 meters). The eruption had terminated and the vent was covered with sea water, but the temperature of the sea water just above the vent was 0.5 °C higher than the surrounding sea water, so volcanic activity still continued in the sea. The temperature of the new-born island was 24.2 °C-26.3 °C, discolored water was 23.0 °C-23.6 °C and sea water was 22.7 °C.
Figure 11. Digital Analyzed Thermal Map at 1334 on Jan. 23, 1986
C. January 29, 1986

Figure 12 shows a digital analyzed thermal map at 1304 on January 29, 1986 (altitude 2400 meters). The thermal range was set to 5 °C, so the map was saturated over the new-born island, whose surface temperature might still have been more than 20 °C.

Figure 13 shows a digital analyzed thermal map at 1307 on January 29, 1986 with a thermal range of 10 °C (altitude 2400 meters). The surface temperature of the new-born island was recognized in detail, but the sea water temperature became monotonous and the discolored water could not be discriminated from the surrounding sea water.

The temperature of the new-born island was 23.8 °C-28.7 °C, discolored water was 23.2 °C-23.5 °C and sea water was 23.0 °C.
Figure 12. Digital Analyzed Thermal Map at 1304 on Jan. 29, 1986
Figure 13. Digital Analyzed Thermal Map at 1307 on Jan. 29, 1986
D. February 14, 1986

Figure 14 shows a digital analyzed thermal map at 1357 on February 14, 1986 (altitude 2200 meters). The thermal distribution became monotonous. The temperature of the new-born island was 25.4 °C-28.3 °C, discolored water was 23.2 °C and sea water was 23.0 °C.
Figure 14. Digital Analyzed Thermal Map at 1357 on Feb. 14, 1986
(3) Results
Observations of the volcanic activity of the new-born island were carried out using the airborne thermal infrared radiometer and a vertical multi-band camera installed on an aircraft YS-11 belonging to the Maritime Safety Agency. The changes of the new island's topography and surface temperature are shown in figure 15.
Results are as follows:
- The period of volcanic activity was very short (only three days), and no lava erupting above the sea surface was observed.
- The area of the new island decreased continuously after the eruption terminated (figure 15).
- The surface temperature of the new-born volcanic island, measured by the airborne thermal infrared radiometer, was 30 °C at the time of eruption; after the eruption the surface temperature decreased to about 25 °C.
- The temperature of discolored water was 0.2 °C-1.0 °C higher than the surrounding sea water.
Figure 15. Changing of Topography Area and Surface Temperature
- The new-born island vanished because no lava flowed out above the sea surface. It is assumed that lava extended to just below the sea surface, because a very low area, always washed over by waves, existed for more than two weeks before the extinction of the new-born island.
4. Summary
Thermal distribution of submarine volcanic activities at the Kaitoku Seamount and the Hukutoku-Oka-no-Ba Seamount was analyzed.
Volcanic activities at the Kaitoku Seamount continued till June, 1984. Since June 9, 1984, they have not been confirmed for more than three years.

Discolored water from the Hukutoku-Oka-no-Ba Seamount has been continuously observed, and sometimes the summit under the sea water could be discerned through the discolored water.

Fortunately, we obtained samples of floating pumice and discolored water from both seamounts in the active stage. Chemical analyses were made, and some important information on submarine volcanic eruptions became clear.

We detected that the temperature of the spouting point just after the eruption was more than 2 °C higher than the surrounding sea water, but a few minutes later its temperature decreased to about 1 °C. Accordingly, observations must be made in a timely manner and be continued over a long time.

However, many active submarine volcanoes around Japan are located far from any observation base, so it is very difficult to make continuous or timely observations. In addition, submarine volcanoes rarely erupt, and the obtained data are precious.

We will continue our investigations of submarine volcanic activities to obtain such precious data.
Photos 1-6. Discolored water from the Kaitoku Seamount (March 9-29, 1984), around the spouting point and the spreading area.

Photos 7-12. New-born island and discolored water at the Hukutoku-Oka-no-Ba (January-March, 1986).
Estimating Industrial Structure Changes in China using DMSP - OLS Night-Time Light Data During 1999-2012
[PERSON]
[PERSON]
[PERSON]
Corresponding author
[PERSON]
1 School of Environment Science and Spatial Informatics, China University of Mining and Technology, Xuzhou 221116, China - [EMAIL_ADDRESS], [EMAIL_ADDRESS]
2 State Key Laboratory of Earth Surface Processes and Resource Ecology, and College of Global Change and Earth System Science, Beijing Normal University, Beijing 1008575, China - [EMAIL_ADDRESS]
3 Institute of Remote Sensing and Digital Earth, Chinese Academy of Sciences, Beijing 100864, China - [EMAIL_ADDRESS]
###### Abstract
The Defense Meteorological Satellite Program's Operational Linescan System (DMSP-OLS) night-time light imagery has proven to be a powerful tool for monitoring economic development, thanks to its relatively high spatial resolution at large scales. Night-time lights caused by human activities, derived from DMSP-OLS satellite imagery, are widely used in socioeconomic parameter estimation and urbanization monitoring. In this paper, DMSP-OLS night-time stable light data from 1999 to 2012 are utilized to analyze inter-annual variation in GDP per unit of light intensity (\(R_{GDP}\)) in China. Furthermore, \(R_{GDP}\) was compared with statistical data on the tertiary industry structure for 28 provincial regions. The results show that provincial \(R_{GDP}\) decreased abruptly in 2001-2002, 2008-2009 and 2011-2012, which is consistent with the proportional growth of the tertiary industry in GDP. These results indicate that changes in \(R_{GDP}\) can reflect tertiary industry structural changes in China's province-level regions.
**Keywords:** Night-time light, GDP, Tertiary industry structure, DMSP-OLS, China, Time series
## 1 Introduction
In recent decades, rapid economic development, growth of energy consumption and accelerated urbanization have been occurring in China. Gross domestic product (GDP) is the basic indicator for measuring the economic development of countries and regions. The continuous development of remote sensing technology, especially the night-time light imagery derived by satellite provides an ideal data source for research on urbanization, economic activity, and population([PERSON] and [PERSON], 2017; [PERSON] et al., 2018). Some researchers have found a strong correlation between night-time light and socio-economic parameters such as GDP, population, electric power consumption([PERSON] et al., 2017; [PERSON], 2000; [PERSON] et al., 1997; [PERSON] et al., 2014; [PERSON] et al., 2010; [PERSON] et al., 2013a; [PERSON] et al., 2007; [PERSON] et al., 2017; [PERSON] et al., 2011; [PERSON] et al., 2017).
The night-time light imageries derived by the Operational Linescan System (OLS) sensors carried by the Defense Meteorological Satellite Program (DMSP) provide a powerful tool to monitor economic activities at a large scale with a relatively high spatial resolution (1 km × 1 km)([PERSON] et al., 2014). DMSP-OLS was designed for meteorological monitoring and detecting clouds under moonlight; however, the photomultiplier tube (PMT) carried on the sensor is able to detect weak near-infrared radiation from the surface([PERSON] et al., 1997). Night-time light data are therefore used as an effective proxy of socioeconomic activity and are an ideal data source for investigating economic activity and energy consumption. [PERSON] et al.([PERSON] et al., 1997) conducted a regression analysis of GDP, population, electric power consumption and night-time light for 21 countries based on DMSP-OLS night-time light data, showing a strong correlation between night-time light and human activities. The correlation was subsequently confirmed for other national areas([PERSON], 2000; [PERSON] et al., 2001; [PERSON] et al., 2016). Further studies show that night-time light data can be used to estimate socio-economic parameters such as GDP, population and electric power consumption at the sub-national level([PERSON] et al., 2017; [PERSON] et al., 2005; [PERSON] et al., 2010; [PERSON] et al., 2007; [PERSON] et al., 2014). These studies were carried out on single-year night-time light data at national and sub-national scales. [PERSON] et al.([PERSON] and [PERSON], 2010) monitored the variation trend of electric power consumption in Australia using time-series night-time light data, but neglected to correct the incompatibility and discontinuity of the time series.
[PERSON] et al.([PERSON] et al., 2012) combined population data to estimate provincial-level electric power consumption from 1995 to 2009 based on corrected night-time light data. However, these studies did not distinguish the socio-economic parameters of different industries. [PERSON] et al.([PERSON] et al., 2013) discussed numerous factors affecting the relationship between night-time light and GDP at global scales from 1995 to 2009 and concluded that agriculture accounted for approximately 25.4% of total light consumption. [PERSON] et al.([PERSON] et al., 2012) calculated the GDP generated by the tertiary industry utilizing night-time light, and considered night-time light to be representative of human economic activity, especially in the service industry. These studies mainly established correlations between night-time light and socioeconomic parameters for estimation, spatialization and dynamic analysis, but did not combine time-series night-time light with socioeconomic parameters to analyze industrial structure.
The intensity of night-time light generally refers to the intensity of regional light, and previous studies have shown that the total digital number (DN) values of night-time light imageries in a region have a strong positive correlation with GDP([PERSON] et al., 2017; [PERSON] et al., 2006; [PERSON] et al., 1997; [PERSON] et al., 2009; [PERSON] et al., 2015; [PERSON] et al., 2012; [PERSON] et al., 2017; [PERSON] et al., 2014). However, few studies focus on analyzing changes in industrial structure by combining time-series night-time light with socio-economic parameters. This paper provides a novel approach to investigating industrial structure by combining time-series night-time light data with GDP. Night-time light, a proxy of human activities([PERSON] et al., 2017), is related to many factors such as the economy, urbanization and energy consumption, and the industrial structure, as a component of the economy, is in turn related to night-time light. In China, the main sources of night-time light are urban streets, business districts and residential areas, whose development is closely related to a city's tertiary industry (mainly service industries and tourism). We assume that the tertiary industry in a region has developed to produce more intense or wider light when the intensity of night-time light increases rapidly relative to GDP in a given year. Therefore, we considered that the proportion of the tertiary industry increases when the province-level \(R_{GDP}\) decreases abruptly during that year. Importantly, this assumption is unidirectional: growth in the proportion of the tertiary industry does not always produce a rapid increase in light intensity relative to GDP, so \(R_{GDP}\) does not always decrease when the proportion of the provincial tertiary industry increases.
We first obtained the spatiotemporal distribution of night-time light intensity. We then analyzed the variation trend of industrial structure at the province level across China based on long time-series DMSP-OLS night-time light data, and compared the proportion of the provincial tertiary industry with the variation trend of \(R_{GDP}\).
## 2 Materials and Methods
### Data sources
The version 4 DMSP-OLS night-time stable light products from 1999 to 2012 were obtained from the National Oceanic and Atmospheric Administration's National Geophysical Data Center (NGDC). The DMSP satellite orbits the earth every 101 minutes at an altitude of 833 km and passes the same location twice a day. The night-time stable light imageries are referenced to WGS-84 and cover the range from −180° to 180° longitude and −65° to 75° latitude([PERSON] et al., 2010). The study area contains the provincial administrative regions of China. It is difficult to obtain statistical data for Hong Kong, Macao and Taiwan, and the statistical data for Tibet, Qinghai and Ningxia are inaccurate or lacking owing to their underdeveloped economies. Therefore, we chose 28 provinces in China, excluding Hong Kong, Macao, Taiwan, Tibet, Qinghai and Ningxia.
The administrative boundaries were provided by the Database of Global Administrative Areas (GADM). GDP and industrial proportions at the province level come from the China Statistical Yearbook (2000-2013) released by the National Bureau of Statistics of the People's Republic of China. Detailed data sources are shown in Table 1.
### Inter-calibration
The night-time stable light data used in this paper are global imagery composites, so we first extracted the imagery covering China. The original data are in the WGS-84 coordinate system, in which the footprint of the 30 arc-second grids decreases as latitude increases. All data were projected into the Albers Equal Area projection to avoid the impact of area distortion and resampled to a spatial resolution of 1 km × 1 km (cell size).
The imageries adopted in this article are from 1999 to 2012 and contain a total of 23 night-time light imageries taken from five different sensors (F12, F14, F15, F16 and F18). The DN values of the imageries collected vary from one satellite to another, as well as from one year to another for the same satellite due to changes in ground conditions or gain values of the sensors([PERSON], 2008).
The invariant-region method was adopted to correct the night-time stable light imageries, with the imagery collected by the F12 sensor in 1999 selected as the reference. The method uses Sicily, Italy, as the invariant region. A quadratic regression equation was established to adjust the DN values of pixels in the original imagery to match the reference imagery, and the resulting inter-calibration equation was used to correct all the imageries. The inter-calibration equation is shown in Equation 1:
\[DN_{\text{cal}}=a+b\times DN+c\times DN^{2} \tag{1}\]
where \(DN\) indicates the DNs in the original imagery; \(DN_{\text{cal}}\) indicates the DN values after inter-calibration; and \(a\), \(b\) and \(c\) indicate parameters in the quadratic regression equation.
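As an illustration, the quadratic inter-calibration of Equation 1 can be sketched in Python with NumPy, fitting \(a\), \(b\) and \(c\) by least squares over invariant-region samples. The DN values below are invented for illustration, not taken from the paper:

```python
import numpy as np

def fit_intercalibration(dn_target, dn_reference):
    """Fit the quadratic model DN_cal = a + b*DN + c*DN^2 (Equation 1)
    between a sensor's DN values and the reference sensor's DN values
    over the invariant region."""
    # np.polyfit returns coefficients highest power first: [c, b, a]
    c, b, a = np.polyfit(dn_target, dn_reference, deg=2)
    return a, b, c

def apply_intercalibration(dn, a, b, c):
    """Apply Equation 1 and clip to the valid DMSP-OLS DN range (0-63)."""
    dn_cal = a + b * dn + c * dn ** 2
    return np.clip(dn_cal, 0, 63)

# Hypothetical invariant-region samples: the target sensor reads dimmer
# than the F12 1999 reference.
dn_target = np.array([5, 10, 20, 30, 40, 50, 60], dtype=float)
dn_reference = np.array([7, 13, 24, 35, 45, 54, 62], dtype=float)

a, b, c = fit_intercalibration(dn_target, dn_reference)
calibrated = apply_intercalibration(dn_target, a, b, c)
```

The fitted coefficients would then be applied to every pixel of the corresponding sensor-year composite.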
### Incompatibility and discontinuity correction
In some years, such as 2008 (F16), 2009 (F16) and 2012 (F18), only one composite is available, so each pixel has a single DN value. In other years, such as 1994 and 1997-2007, two sensors produced composites, and a large number of pixels have inconsistent bright values between the imageries of the different sensors, although pixel values from the same year should be consistent in number and location. The night-time light imageries used in this paper are annual average products, so when a pixel value is 0 we can consider that no light is generated in that region. Therefore, when compositing night-time light imageries acquired by different sensors in the same year, the two DN values are averaged if both are nonzero; if either DN value is 0, the final value is 0. Pixels from two satellites in the same year were corrected according to the following equation:
\[DN_{i,n}=\begin{cases}\left(DN_{i,n}^{a}+DN_{i,n}^{b}\right)/2&\left(DN_{i,n}^{a}\neq 0\text{ and }DN_{i,n}^{b}\neq 0\right)\\ 0&\left(DN_{i,n}^{a}=0\right)\text{ or }\left(DN_{i,n}^{b}=0\right)\end{cases} \tag{2}\]
where \(DN_{i,n}\) is the DN value of the \(i\)th lit pixel of the intra-annual composite in the \(n\)th year; and \(DN_{i,n}^{a}\) and \(DN_{i,n}^{b}\) are the DN values of the \(i\)th lit pixel from the two NSL composites in the \(n\)th year, respectively. China, one of the fastest-growing developing countries in the world, has enjoyed a period of steady development over the past two decades. Based on this premise, we further assumed that lit pixels detected in earlier night-time light imageries should remain lit in later imageries, and that the DN value of a lit pixel detected in an earlier image should not be greater than its DN value in a later image. [PERSON] et al.([PERSON] et al., 2012) extracted urban built-up areas based on the
\begin{table}
\begin{tabular}{l l} \hline
**Data** & **Source** \\ \hline GDP, industrial proportion & National Bureau of Statistics of the People's Republic of China (http://www.stats.gov.cn/) \\ Administrative boundary & GADM (http://www.gadm.org/country) \\ DMSP-OLS stable light data & NGDC (https://www.ngdc.noaa.gov/eog/dmsp/downloadV4composites.html) \\ \hline \end{tabular}
\end{table}
Table 1: Data source list

above assumptions. DN values of the multi-year imageries were corrected according to the following equation.
\[DN_{i,n}=\begin{cases}DN_{i,n-1}&\left(DN_{i,n-1}>DN_{i,n}\right)\\ DN_{i,n}&\text{otherwise}\end{cases} \tag{3}\]
where \(DN_{i,n}\) and \(DN_{i,n-1}\) are the DN values of the \(i\)th lit pixel of the night-time light data in the \(n\)th and \((n-1)\)th years, respectively.
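Equations 2 and 3 together amount to an averaging rule within a year and a running maximum across years. A minimal NumPy sketch, using invented DN arrays:

```python
import numpy as np

def intra_annual_composite(dn_a, dn_b):
    """Equation 2: average the two satellites' DN values where both
    detect light; set the pixel to 0 if either satellite reads 0."""
    both_lit = (dn_a != 0) & (dn_b != 0)
    return np.where(both_lit, (dn_a + dn_b) / 2.0, 0.0)

def inter_annual_correction(series):
    """Equation 3: enforce that a lit pixel's DN never decreases over
    time (running maximum along the year axis), reflecting the
    assumption of steady growth during the study period."""
    return np.maximum.accumulate(np.asarray(series, dtype=float), axis=0)

# Hypothetical 2x2 scenes from two sensors in the same year.
dn_sensor_a = np.array([[10.0, 0.0], [30.0, 50.0]])
dn_sensor_b = np.array([[20.0, 5.0], [0.0, 40.0]])
composite = intra_annual_composite(dn_sensor_a, dn_sensor_b)
# -> [[15, 0], [0, 45]]

# Hypothetical 3-year stack for one pixel: the dip in year 2 is lifted
# to the previous year's value.
stack = np.array([[[12.0]], [[9.0]], [[20.0]]])
corrected = inter_annual_correction(stack)
# -> [[[12]], [[12]], [[20]]]
```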
### Analysis of time series night-time light and statistical data
Unlit regions outside cities and water areas are affected by urban light, so DN values larger than 0 occur for unlit areas in the time-series DMSP-OLS imageries. [PERSON] et al.([PERSON] et al., 2007) selected a DN threshold of 30 for night-time light imageries to delimit urban extents. [PERSON] et al.([PERSON] et al., 2012) divided China into eight economic regions and determined optimal thresholds of 27-62 for each economic region. We believe these thresholds are too high for China, a developing country with a low level of urbanization: GDP comes not only from cities but also from the countryside, and large thresholds ignore light information from small cities and rural areas that also contribute to GDP. Conversely, when the selected threshold is too small, the blooming effects of the imageries are hard to minimize and some regions without light are also counted. [PERSON] et al.([PERSON] et al., 2012) found that 10 is the maximal DN value that can delimit all the county extents in Qinghai, the most economically underdeveloped province in China. Therefore, a DN threshold of 10 was selected to minimize the effects of blooming, according to Equation 4:
\[DN_{adj}=\begin{cases}0&DN<10\\ DN&DN\geq 10\end{cases} \tag{4}\]
where \(\mathit{DN}_{\mathit{adj}}\) indicates the DN values in night-time imageries after adjusting, and \(\mathit{DN}\) indicates the DN values in the original imageries.
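Equation 4 is a simple thresholding step; a one-function sketch in Python (the scene values below are hypothetical):

```python
import numpy as np

def suppress_blooming(dn, threshold=10):
    """Equation 4: zero out pixels below the DN threshold to limit
    blooming effects; keep all other DN values unchanged."""
    dn = np.asarray(dn, dtype=float)
    return np.where(dn < threshold, 0.0, dn)

# Hypothetical row of DN values from an annual composite.
scene = np.array([3, 9, 10, 25, 60], dtype=float)
adjusted = suppress_blooming(scene)
# -> [0, 0, 10, 25, 60]
```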
We combined GDP and night-time light data to obtain the GDP per unit of light intensity. We assumed that the tertiary industry in a region has developed to produce more intense or wider lights when the intensity of night-time light increases rapidly relative to GDP in a given year. We analyzed the time-series province-level GDP per unit of light intensity to investigate the variation trend of the tertiary industry structure in different provinces. \(R_{GDP}\) was obtained according to the following equation.
\[R_{GDP}=\frac{GDP_{i}}{DN_{i}} \tag{5}\]
where \(R_{GDP}\) is the GDP per unit of light intensity, \(GDP_{i}\) is the GDP of the \(i\)th province, and \(DN_{i}\) is the sum of DN values of the \(i\)th province. We investigated the correlation between variations in the proportion of the tertiary industry and \(R_{GDP}\) based on a comparative analysis of the time-series \(R_{GDP}\) and the annual proportion of the tertiary industry in each province.
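Equation 5 then reduces to a per-province division. A small sketch with invented GDP values and DN sums (the figures are placeholders, not the paper's data):

```python
def gdp_per_unit_light(gdp_by_province, dn_sum_by_province):
    """Equation 5: R_GDP for each province = GDP_i / DN_i,
    where DN_i is the province's summed DN after Equation 4."""
    return {p: gdp_by_province[p] / dn_sum_by_province[p]
            for p in gdp_by_province}

# Hypothetical values for two provinces.
gdp = {"Jiangsu": 8583.0, "Guizhou": 1030.0}
dn_sum = {"Jiangsu": 450_000.0, "Guizhou": 52_000.0}
r_gdp = gdp_per_unit_light(gdp, dn_sum)
```

Repeating this for every year yields the per-province \(R_{GDP}\) time series analyzed in Section 3.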
## 3 Results and Discussion
### The corrected night-time light products
There are two obvious flaws in the original night-time light data: DN-value discontinuity across different years, and incompatibility between two night-time light imageries obtained by different sensors in the same year. Figure 1 shows the change in the total DN value of night-time light taken by five different sensors during the period 1999 to 2012 in China. The night-time light imagery captured by F14 from 1999 to 2002 is dimmer than that of the other sensors. In addition, the total values of F15 in 2003, F16 in 2005 and 2008, and F18 in 2011 show a downward trend. The former may be caused by the different gain values of the sensors, and the latter by sensor degradation.
After the correction, as shown in Figure 2, the total DN values of night-time light imageries obtained by different sensors are close to each other in the same year, and the effect caused by sensor degradation has been reduced. This result is in line with the development of China, which was a rapidly developing country from 1999 to 2012. After the correction, the long time-series night-time light imagery data are comparable, and DN-value saturation in each period of imagery was weakened.
### Spatiotemporal distribution of DN
Night-time light, a proxy of human activity, indicates the degree of urbanization and economic development. Previous studies have shown that the sum of province-level DN values has a strong positive correlation with province-level GDP. We assumed that DN values in the night-time light products are also constantly increasing, which means that the DN value of each pixel in a given year should be greater than or equal to its value in the previous year, given the continuous development of China's economy. As shown in Figure 3, DN values in most provinces maintain relatively steady growth. In terms of space, the provinces with larger DN sums are concentrated in the eastern coastal provinces, followed by the central and
Figure 1: Sum of DNs in China during 1999-2012 from five different satellites
Figure 2: Inter-calibrated sum of DNs in China during 1999-2012 from five different satellites
northeastern regions, and the DN sums in the western region are generally small. In terms of time, the increment of DNs varied from province to province during 1999-2012.
### Inter-annual variation in \(R_{GDP}\) and industrial structure
In this paper, we analyzed the inter-annual variation in \(R_{GDP}\) in each province (Figure 4). The study period started in 1999, and we explored the abrupt points in the inter-annual variation of \(R_{GDP}\). However, we did not explore or explain the decrease in \(R_{GDP}\) from 1999 to 2000, because the trend of \(R_{GDP}\) before 1999 was not available.
Figure 4: Inter-annual variation in \(R_{GDP}\) for 28 provinces in China during 1999-2012
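The paper identifies abrupt decreases in \(R_{GDP}\) visually from Figure 4; programmatically, one could flag year-over-year relative drops above a chosen threshold. A sketch with a hypothetical 10% threshold and an invented province series:

```python
def abrupt_decreases(years, r_gdp, rel_drop=0.10):
    """Flag year-over-year drops in R_GDP larger than rel_drop
    (a hypothetical 10% threshold; the paper identifies abrupt
    points visually rather than with a fixed cutoff)."""
    flagged = []
    for prev_y, y, prev_r, r in zip(years, years[1:], r_gdp, r_gdp[1:]):
        if prev_r > 0 and (prev_r - r) / prev_r > rel_drop:
            flagged.append((prev_y, y))
    return flagged

# Hypothetical province series with an abrupt drop in 2001-2002.
years = [1999, 2000, 2001, 2002, 2003]
series = [21.0, 20.5, 20.8, 17.2, 17.0]
print(abrupt_decreases(years, series))  # [(2001, 2002)]
```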
As shown in Figure 4, the \(R_{GDP}\) of Jiangsu, Henan, Hubei, Hunan and Anhui provinces decreased abruptly from 2001 to 2002, indicating that DNs had a larger increase than GDP in 2002, while the proportion of the tertiary industry increased (Table 2). However, it is difficult to judge the variation in the tertiary industrial structure from GDP and DN alone in 2001-2002 (Figure 4). In 2001, the first year of the Tenth Five-Year Plan, the government made strategic adjustments to the economic structure; the main tasks were optimizing the industrial structure and comprehensively raising the level and efficiency of agriculture, industry and the service industries. Against this background, light intensity in the five provinces grew faster than GDP. Among them, the DN values of Jiangsu in 2001-2002; Sichuan, Hebei, Heilongjiang and Liaoning in 2008-2009; and Tianjin in 2009-2010 increased significantly. The reason for these increases is probably that the tertiary industries in these provinces developed and produced more intense or wider ranges of lights.
In addition, the proportion of the tertiary industry of Jilin did not increase (from 38% to 37.9%) when its DN values increased significantly in 2008-2009. The DN values of Henan also increased significantly from 2005 to 2006 while GDP maintained relatively steady growth, but the proportion of the tertiary industry did not increase from 2005 (30%) to 2006 (29.8%). Similarly, the DN values of Shanxi increased significantly in 2003-2004 with a steady increase in GDP for the corresponding year, yet the proportion of the tertiary industry decreased significantly from 2003 (34.7%) to 2004 (32.2%). The DN values of Anhui increased significantly from 2009 to 2010, but the proportion of the tertiary industry declined (from 36.4% to 33.9%). Therefore, a significant increase in provincial-level night-time light DN values does not necessarily represent an increase in the tertiary industry proportion, and it is difficult to obtain the variation trend of the tertiary industry structure from provincial-level DN values alone.
(The division of economic regions of China comes from the National Bureau of Statistics of the People's Republic of China.) The proportion of the tertiary industry increased when \(R_{GDP}\) decreased abruptly, whether in the eastern coastal provinces with higher levels of economic development or in the central and western provinces.
The \(R_{GDP}\) of Jiangxi decreased abruptly in 2001-2002 and kept decreasing the following year (Figure 4), but the proportion of the tertiary industry did not increase (Table 2). The \(R_{GDP}\) of Jiangxi varied significantly in 2001-2003 compared with other provinces, while its GDP maintained relatively steady growth in line with other provinces. The significant variation in Jiangxi's DN may therefore have been caused by errors in the night-time light data or by industrial policy.
The \(R_{GDP}\) of Guizhou and Guangxi decreased abruptly in 2002-2003 and then maintained steady growth, but the proportion of their tertiary industries did not increase (Table 2). Economic growth in both provinces was affected in 2002-2003: Guizhou's GDP growth rate during this period (14.71%) was close to that of the previous year (14.61%) and much lower than that of the following year (17.63%); Guangxi's GDP growth rate during this period was far lower than that of the previous year (13.11%) and the following year (21.71%). The \(R_{GDP}\) of Guizhou and Guangxi thus declined owing to the impact of GDP growth.
\begin{table}
\begin{tabular}{l c c c c} \hline \hline
**Province** & \multicolumn{2}{c}{**Proportion of tertiary industry (\%)**} & \multicolumn{2}{c}{\(R_{GDP}\) **(10\({}^{3}\))**} \\ \cline{2-5}
**(municipality)** & **2008** & **2009** & **2008** & **2009** \\ \hline Xinjiang & 33.9 & 37.1 & 7.85 & 7.27 \\ Hebei & 33.2 & 35.2 & 18.68 & 17.24 \\ Liaoning & 34.5 & 38.7 & 26.73 & 25.17 \\ Inner Mongolia & 33.3 & 38 & 20.68 & 19.83 \\ Heilongjiang & 34.4 & 39.3 & 16.97 & 13.30 \\ Gansu & 39.1 & 40.2 & 18.37 & 17.35 \\ Yunnan & 39.1 & 40.8 & 16.52 & 15.80 \\ Sichuan & 34.8 & 36.7 & 38.19 & 36.07 \\ Shaanxi & 32.9 & 38.5 & 17.28 & 17.09 \\ Shanxi & 34.2 & 39.2 & 10.81 & 10.07 \\ Jilin & 38 & 37.9 & 25.46 & 21.82 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Proportion of the tertiary industry and \(R_{GDP}\) in 2008 and 2009 for the provinces whose \(R_{GDP}\) decreased abruptly in 2008-2009
The \(R_{GDP}\) of ten provinces decreased abruptly from 2008 to 2009, and the proportion of their tertiary industries increased (Table 3). These ten provinces are distributed across various economic regions of China: Hebei is located in the eastern region, Shanxi in the central region, Liaoning and Heilongjiang in the northeast region, and the others in the western region. It is difficult to judge the variation in industrial structure from the GDP and DN of these provinces alone. China was affected by the global economic crisis caused by the subprime crisis of the United States in 2008, and GDP growth in most provinces slowed down. However, the economic crisis not only posed a challenge but also provided opportunities for industrial restructuring and upgrading; in responding to the crisis, China seized the opportunity to upgrade its industrial structure and actively develop its tertiary industry. In addition, the \(R_{GDP}\) of Jilin decreased abruptly in 2008-2009, but the proportion of its tertiary industry did not increase (Table 3). DNs in Jilin increased substantially from 2008 to 2009, while GDP grew more slowly. The substantial increase in Jilin's night-time light in 2008-2009 may have been caused by the development of infrastructure (streetlights, etc.) rather than of the tertiary industry.
## 4 Error and Limitations
We found that the process of inter-calibration may generate some errors. We used the invariant-region method to inter-calibrate the raw night-time light imageries because DMSP-OLS lacks on-board calibration. The method assumes that night-time lights in Sicily, Italy, did not experience any significant changes, which is unlikely to hold exactly. The sums of the lights from the two satellites were not coincident (Figure 2). Future work will consider improving the methods to reduce the impact of the lack of on-board calibration.
In addition, the single threshold of 10 for controlling blooming effects may introduce some errors. [PERSON] et al. found that a threshold of 10 was better than thresholds of 20 and 30 for delimiting urban extents([PERSON] et al., 2012). To reduce the impact of a single threshold, future work can determine the threshold more accurately for each year and each province utilizing multisource data.
Another kind of error may come from the discontinuity correction. Regions that depend on resources and heavy industry may experience a decrease in night-time light intensity. However, the night-time stable light data are annual average products with a spatial resolution of 1 km, so we believe that pixels with a genuine decrease in night-time light intensity account for only a small fraction.
## 5 Conclusions
The prosperity of the tertiary industry is an important feature of the modern economy, so it is important to understand variation trends in the tertiary industrial structure. This paper analyzed the spatiotemporal distribution of night-time light intensity and the variation trend of industrial structure at the province level across China based on long time-series DMSP-OLS night-time light data from 1999 to 2012.
In this paper, we assumed that the tertiary industry has developed to produce more intense or wider lights in a region when night-time light increases rapidly relative to GDP in one year. Therefore, we considered that the proportion of the tertiary industry increased when the province-level \(R_{GDP}\) decreased abruptly in a year. The results show that inter-annual variation in \(R_{GDP}\) can reflect the variation of the tertiary industry structure in provincial regions. The \(R_{GDP}\) of Jiangsu, Anhui, Hubei, Hunan and Henan in 2001-2002; Xinjiang, Hebei, Liaoning, Inner Mongolia, Heilongjiang, Gansu, Yunnan, Sichuan, Shaanxi and Shanxi in 2008-2009; and Inner Mongolia and Gansu in 2011-2012 decreased abruptly, and the proportion of the tertiary industry increased. In addition, the proportion of the tertiary industry in four provinces did not increase when their \(R_{GDP}\) decreased abruptly, and we analyzed the reasons above. These 4 provinces account for only a small part compared with the other 17 provinces; therefore, we believe the assumption above is valid.
This article explored the variation trends of the tertiary industry structure from a new perspective, via satellite-observed night-time light data, and provides a reference for the economic development and industrial structure of province-level regions of China. The Visible Infrared Imaging Radiometer Suite (VIIRS) sensor on the Suomi National Polar-orbiting Partnership (NPP) satellite, launched in October 2011, has become a new source of night-time light observations ([PERSON] et al., 2013b). Future studies can apply this method to NPP-VIIRS data.
## Acknowledgements
This study was supported by the Fundamental Research Funds for the Central Universities (2015 XXKS049).
## References
* [PERSON] et al. (2010) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], 2010. Development of a 2009 stable lights product using DMSP-OLS data. Proceedings of the Asia-Pacific Advanced Network 30.
* [PERSON] and [PERSON] (2017) [PERSON], [PERSON], [PERSON], [PERSON], 2017. Advances in using multitemporal night-time lights satellite imagery to detect, estimate, and monitor socioeconomic dynamics. _Remote Sensing of Environment_ 192, 176-197.
* [PERSON] et al. (2017) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], 2017. The Suitability of Different Nighttime Light Data for GDP Estimation at Different Spatial Scales and Regional Levels. _Sustainability_ 9, 305.
* [PERSON] (2008) [PERSON], C.N.H., 2008. CIESIN Thematic Guide to Night-time Light Remote Sensing and its Applications.
* [PERSON] et al. (2006) Doll, C.N.H., [PERSON], [PERSON], 2006. Mapping regional economic activity from night-time light satellite imagery. _Ecol Econ_ 57, 75-92.
* [PERSON] et al. (2005) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], 2005. From wealth to health: modelling the distribution of income per capita at the sub-national level using night-time light imagery. _International Journal of Health Geographics_ 4, 5.
* [PERSON] (2000) [PERSON], 2000. Night-Time Imagery as a Tool for Global Mapping of Socioeconomic Parameters and Greenhouse Gas Emissions. Ambio A Journal of the Human Environment 29, 157-162.
* [PERSON] et al. (1997) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], 1997. Relation between satellite observed visible-near infrared emissions, population, economic activity and electric power consumption. _International Journal of Remote Sensing_ 18, 1373-1379.
* [PERSON] et al. (2001) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], 2001. Night-time lights of the world. 1994-1995. _ISPRS Journal of Photogrammetry & Remote Sensing_ 56, 81-99.
* [PERSON] et al. (2009) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], 2009. A Fifteen Year Record of Global Natural Gas Flaring Derived from Satellite Data. _Energies_ 2, 595-622.
* [1] [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], 2015. Generating the Nighttime Light of the Human Settlements by Identifying Periodic Components from DMSP/OLS Satellite Imagery. _Environmental Science & Technology_ 49, 10503-10509.
* [2] [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], 2012. GDP Spatialization in China Based on Nighttime Imagery. _Journal of Geo-Information Science_ 14, 128-136.
* [3] [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], 2014. Modeling the spatiotemporal dynamics of electric power consumption in Mainland China using saturation-corrected DMSP/OLS nighttime stable light data. _International Journal of Digital Earth_ 7, 993-1014.
* [4] [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], 2010. Estimating energy consumption from nighttime DMSP/OLS imagery after correcting for saturation effects. _International Journal of Remote Sensing_ 31, 4443-4458.
* [5] [PERSON], X., [PERSON], L., [PERSON], X., 2013a. Detecting Zimbabwe's Decadal Economic Decline Using Nighttime Light Imagery. _Remote Sensing_ 5, 4551-4570.
* [6] [PERSON], X., [PERSON], [PERSON], [PERSON], X., [PERSON], [PERSON], 2013b. Potential of NPP-VIIRS Nighttime Light Imagery for Modeling the Regional Economy of China. _Remote Sensing_ 5, 3057-3081.
* [7] [PERSON], [PERSON], [PERSON], C., [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], 2012. Extracting the dynamics of urban expansion in China using DMSP-OLS nighttime light data from 1992 to 2008. _Landscape & Urban Planning_ 106, 62-72.
* [8] [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], 2018. Expansion Analysis of Yangtze River Delta Urban Agglomeration Using DMSP/OLS Nighttime Light Imagery for 1993 to 2012. _ISPRS International Journal of Geo-Information_ 7, 52.
* [9] [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], 2012. Quantitative estimation of urbanization dynamics using time series of DMSP/OLS nighttime light data: A comparative case study from China's cities. _Remote Sensing of Environment_ 124, 99-107.
* [10] [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], 2017. Transferability of Economy Estimation Based on DMSP/OLS Night-Time Light. _Remote Sensing_ 9, 786.
* [11] [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], C., [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], 2016. Detecting spatiotemporal dynamics of global electric power consumption using DMSP-OLS nighttime stable light data. _Applied Energy_ 184, 450-463.
* [12] [PERSON], [PERSON], [PERSON], 2007. Estimation of Gross Domestic Product at Sub-national Scales Using Nighttime Satellite Imagery, _International Journal of Ecological Economics and Statistics_, pp. 5-21.
* [13] [PERSON], [PERSON], 2010. The use of night-time lights satellite imagery as a measure of Australia's regional electricity consumption and population distribution. Taylor & Francis, Inc.
* [14] [PERSON], [PERSON], [PERSON], [PERSON], [PERSON] [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], 2017. Estimation of Urban Population Dynamics Using DMSP-OLS Night-Time Lights Time Series Sensors Data. _IEEE Sensors Journal_ 17, 1013-1020.
* [15] [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], 2013. Exploring factors affecting the relationship between light consumption and GDP based on DMSP/OLS nighttime satellite imagery. _Remote Sensing of Environment_ 134, 111-119.
* [16] [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], 2014. Estimation of Gross Domestic Product Using Multi-Sensor Remote Sensing Data: A Case Study in Zhejiang Province, East China. _Remote Sensing_ 6, 7260-7275.
* [17] [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], 2017. A Map Spectrum-Based Spatiotemporal Clustering Method for GDP Variation Pattern Analysis Using Nighttime Light Images of the Wuhan Urban Agglomeration. _International Journal of Geo-Information_ 6, 160.
* [18] [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], 2011. Net primary production and gross domestic product in China derived from satellite imagery. _Ecol Econ_ 70, 921-928.
* [19] [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], 2012. Mapping spatio-temporal changes of Chinese electric power consumption using night-time imagery. Taylor & Francis, Inc.
* [20] [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], 2014. Correcting Incompatible DN Values and Geometric Errors in Nighttime Lights Time-Series Images. _IEEE Transactions on Geoscience & Remote Sensing_ 53, 2039-2049.
* [21] [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], 2017. Modeling the Spatiotemporal Dynamics of Gross Domestic Product in China Using Extended Temporal Coverage Nighttime Light Data. _Remote Sensing_ 9, 626.
ISPRS Archives: Estimating Industrial Structure Changes in China Using DMSP-OLS Night-time Light Data during 1999-2012. X. Han, G. Tana, K. Qin, H. Letu. https://doi.org/10.5194/isprs-archives-xlii-3-w5-9-2018, 2018, CC-BY.
# Effect of the distances of public health facilities from the nearest major roads on skilled deliveries conducted in Kisumu County, Kenya
[PERSON]
1 Kisumu County Department of Health, [PERSON]S]
[PERSON]
1 Kisumu County Department of Health, [PERSON]S]
[PERSON]
1 Kisumu County Department of Health, [PERSON]S]
[PERSON]
2 College of Health Sciences Jomo Kenyatta University, Main Campus Juja, Kiambu, Kenya, [PERSON]SS]
[PERSON]
3 HIGDA, Morning Side Office Park, Nairobi, Kenya [PERSON]SS]
[PERSON]
1 Kisumu County Department of Health, [PERSON]S]
*[PERSON]
2 College of Health Sciences Jomo Kenyatta University, Main Campus Juja, Kiambu, Kenya, [PERSON]SS]
3 HIGDA, Morning Side Office Park, Nairobi, Kenya [PERSON]SS]
###### Abstract
The project sought to describe health facility accessibility and its effect on skilled delivery in Kisumu in line with the pillars of UHC. The proportion of skilled deliveries conducted in 2016 for 156 public health facilities was mined from DHIS2. The physical accessibility of healthcare services was represented using a 5 km radius fixed-distance buffer around health facilities and the health facilities' distances to the nearest major roads. Simple linear regression was then done between the distances of the health facilities to their nearest major roads and the skilled deliveries conducted in the health facilities. The mean skilled delivery was 42.5% (median = 45.8%, range 0 to 358%, IQR = 48.6%). There exist pockets of underserved areas in Nyando, Nyakach and Muhoroni sub counties measuring 21 km\({}^{2}\), 52 km\({}^{2}\), 60 km\({}^{2}\), 65 km\({}^{2}\) and 94 km\({}^{2}\). Regressing skilled deliveries conducted on distance from the nearest road gave an R\({}^{2}\) value of 0.02. The study found that underserved areas are located away from major roads. The mean skilled delivery was lower than the national target of 65%; however, some facilities exceeded 100%. This can be explained by in-referrals that cause such facilities to exceed their projected workloads. The distance of a health facility to the nearest major road is inversely proportional to the skilled deliveries conducted, meaning that the further a health facility is from a major road, the fewer the skilled deliveries conducted in that facility, and vice versa. However, this model is weak in establishing such an effect because of the low R\({}^{2}\) value. In conclusion, there are pockets of underserved areas in Kisumu, and the distance of health facilities from the nearest major road does not significantly affect the conduct of skilled deliveries in Kisumu County.
Footnote †: Corresponding Author
## 1 Introduction
One of the sustainable development goals' pillars is the attainment of universal health care (UHC) (WHO, 2017). In 2018, Kisumu County was selected by the Government of Kenya as one of the four counties where universal health care will be piloted. Kisumu County is one of the 47 counties in Kenya and comprises seven sub counties, namely Seme, Kisumu West, Kisumu East, Kisumu Central, Nyando, Muhoroni and Nyakach. Its population is projected to reach 1,163,260 in 2018, with males and females accounting for 49% and 51% respectively, and an annual growth rate of 2.8% (County, 2018).
Key UHC index tracer indicators include the proportion of skilled deliveries conducted and health facilities access(WHO, 2017).
As Kisumu County journeys towards piloting UHC, it is important to understand how the proportion of skilled deliveries conducted is affected by the location of public health facilities. One way in which access to health facilities can be measured is by their distances to the nearest roads. In determining physical access to healthcare services by using distance from the facility - either the straight-line distance, the distance patients have to travel or the distance travelled by patients in a given time - it is assumed that people will visit the closest facility, which implies that distance is the overriding factor influencing attendance ([PERSON], 2014).
## 2 Background
Skilled delivery is defined as the conduct of delivery in a health facility and/or by a trained health professional. Skilled delivery is known to improve both maternal and fetal outcomes. Approximately 1000 women die each day worldwide from pregnancy-related causes, 99% of them in developing countries and more than 50% in sub-Saharan Africa (WHO, 2008). In a study conducted in Burkina Faso, the distance to a health centre was a major determinant of institutional delivery: three quarters (76.7%) of births within 1 km of the health centre took place in a facility, compared with a mere fifth (18.5%) for births further than 10 km from the health centre ([PERSON], 2008). It is crucial that research and policy focus on health system determinants and in particular address geographic and quality barriers to obstetric care ([PERSON], 2011). This study focuses on using health facilities' distance from the nearest major road as a proxy for accessibility, and on how this accessibility affects the skilled deliveries conducted.
## 3 Statement of the Problem
According to the Kenya Health Sector Strategic and Investment Plan, households should be located within 5 kilometres of medical services (MOH, 2013). In Kisumu County, the actual extent of primary healthcare services' physical coverage, and whether or not underserved pockets exist, is not known. It is also not known how the location of health facilities, by way of their distance from the nearest major roads, affects skilled deliveries conducted in Kisumu County in the context of achieving UHC.
This study sought to describe the physical coverage of public healthcare services in Kisumu County and identify any underserved pockets; if they exist. It also sought to analyze the effect of the distance of public health facilities to their nearest major roads on skilled deliveries conducted in order to inform the piloting of UHC in Kisumu County.
If it is established that the distance of a health facility to the nearest road remarkably affects the proportion of skilled deliveries conducted in Kisumu county, it will strongly reinforce the need of policy makers to rapidly improve the transport network of health facilities to the major roads as a means of improving the UHC tracer indicator of proportion of skilled deliveries conducted.
## 4 Justification
There is a paucity of information on how health access, in the context of how far or near a health facility is to a major road, affects health indicators; more specifically, how it affects the conduct of skilled deliveries. Most studies conducted so far focus on the household's location relative to the nearest health facility, for example by evaluating how straight-line distances from household locations to health facilities affect health indicators.
In a study in Zambia, [PERSON] et al. used straight-line distances from the study clusters to the nearest health centre to examine how this affects skilled deliveries conducted. The assumption in the Zambia study, as in this study, is that people will seek treatment in health facilities that are located closest to them.
Health access improvement can include the construction of more major roads or improving the existing ones and the construction of more health facilities in underserved areas. This study will provide information on how accessibility, as measured by a facility's distance to the nearest major road affects the conduct of skilled deliveries. This information will be useful for policy makers seeking to prioritize interventions for increasing skilled deliveries and hospital access as key tracer indicators for UHC index.
## 5 Research Question
What is the effect of the distances of public health facilities from their nearest major roads on the skilled deliveries conducted in Kisumu County?
### Broad objective
To determine the effect of the distances of public health facilities from their nearest major roads on the skilled deliveries conducted in Kisumu County.
### Specific Objectives
The specific objectives of the study were:
* To describe public health facilities accessibility in Kisumu County
* To describe the proportion of skilled deliveries conducted in Kisumu County
* To analyze the effect of the distances of public health facilities from their nearest major roads on the skilled deliveries conducted in Kisumu County.
## 6 Methodology
### Study Area
The study included public and semi - public (i.e. faith based and non - government) health facilities in Kisumu County that conduct deliveries. Private facilities were excluded in this study.
### Study Design
Figure 1: Conceptual framework
The study design was a retrospective chart review. Secondary data (electronic data) was mined from the Kenya District Health Information System (MoH, 2018).
### Sample Size
A total of 156 public and semi - public (i.e. faith based and non - government) health facilities in Kisumu County that conduct deliveries were included in the study.
### Data Collection
Secondary data (electronic data) was mined from the Kenya District Health Information System database (https://hiskenya.org) (MoH, 2018). The indicator of interest was the proportion of skilled deliveries conducted from January to December 2016.
We then obtained a list of public healthcare facilities in Kisumu County from the Kenya Master Facilities List which categorizes health facilities in Kenya by ownership type and level of care.
The health facilities' geo codes were obtained from the Kisumu County health facilities list of geo codes.
### Data Management and Analysis
The data was then exported to MS Excel software (Windows 10 version) for processing and analysis. MS Excel was used to calculate the range, mean and standard deviation for the proportion of skilled deliveries.
In DHIS, the proportion of skilled deliveries per facility is calculated as follows:
\[\text{Proportion of skilled deliveries}=\frac{\text{Number of skilled deliveries conducted}}{\text{Estimated number of pregnant women in the catchment population}}\times 100\,\%\tag{1}\]
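As a minimal sketch, Equation (1) is a direct ratio; the function name and arguments below are illustrative, not from the DHIS implementation.

```python
def skilled_delivery_proportion(skilled_deliveries, estimated_pregnant_women):
    """Equation (1): skilled deliveries conducted as a percentage of the
    estimated pregnant women in the facility's catchment population."""
    return 100.0 * skilled_deliveries / estimated_pregnant_women
```

Note that the indicator can exceed 100% when in-referrals push a facility past its projected catchment workload, as observed for some facilities in the study.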
The health facilities geo codes were exported to QGIS software (Version 2.18.17). The locations of the health facilities were represented as points. The attribute of the proportion of skilled deliveries conducted was then assigned to each respective health facility.
We represented the physical coverage of healthcare services using a 5 km radius fixed-distance buffer with the health facilities' points as the hubs. We also generated Voronoi polygons to represent coverage.
We were then able to identify underserved areas, defined as geographical areas lying more than 5 kilometres from the nearest public healthcare facility in any direction.
The distances of the health facilities to the nearest roads were generated using the "Distance to nearest hub" processing tool in QGIS.
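On projected (metric) coordinates, the "Distance to nearest hub" step reduces to a nearest-neighbour search, and the 5 km buffer test to a distance threshold. A minimal pure-Python sketch under those assumptions (all names are illustrative, not the QGIS API):

```python
import math

def distance_to_nearest_hub(point, hubs):
    """Straight-line distance (m) from a facility `point` to the closest
    of `hubs` (e.g. vertices sampled along the major-road network).
    Assumes planar projected coordinates in metres."""
    x, y = point
    return min(math.hypot(x - hx, y - hy) for hx, hy in hubs)

def underserved(points, facilities, radius_m=5000.0):
    """Points farther than `radius_m` from every facility, i.e. outside
    all 5 km coverage buffers used in the study."""
    return [p for p in points
            if distance_to_nearest_hub(p, facilities) > radius_m]
```

In practice the QGIS tool performs this search against the road layer's hub points; the same logic applies whether the hubs are roads (for the regression predictor) or facilities (for the coverage buffers).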
We then conducted a simple linear regression of the distances to the nearest roads (predictor variable, \(x\)) and the proportion of skilled deliveries conducted (outcome variable, \(y\)) using STATA software (Version 15).
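The regression itself is ordinary least squares with a single predictor. The sketch below computes the quantities reported in the results (intercept, slope, R\({}^{2}\)); it is an illustrative stand-in for the STATA run, not the study's code.

```python
def simple_linear_regression(x, y):
    """Ordinary least squares fit y = intercept + slope * x.
    Returns (intercept, slope, r2)."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    intercept = my - slope * mx
    ss_tot = sum((yi - my) ** 2 for yi in y)
    ss_res = sum((yi - (intercept + slope * xi)) ** 2
                 for xi, yi in zip(x, y))
    r2 = 1.0 - ss_res / ss_tot
    return intercept, slope, r2
```

Applied to the 156 facilities' distances and skilled-delivery proportions, this procedure yields the fit reported later, y = 50.727 - 4.4089x with R\({}^{2}\) of about 0.02.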
### Quality Control
The data mined from DHIS was examined for completeness. A total of thirteen (13) facilities that had missing data were excluded from the analysis.
### Ethical Considerations
Ethical approval was not sought as this study did not involve human subjects.
### Study Limitations
This study was restricted to the effect of public and semi - public hospitals' distances to the nearest major roads on skilled deliveries conducted. The terrain was not factored in the analysis.
## 7 Results
There is a high concentration of primary healthcare facilities in Kisumu Central, Kisumu West and Seme sub counties. The two comprehensive referral hospitals are all situated in Kisumu Central Sub County.
There exist pockets of underserved areas in Nyando, Nyakach and Muhoroni sub counties measuring 21 km\({}^{2}\), 52 km\({}^{2}\), 60 km\({}^{2}\), 65 km\({}^{2}\) and 94 km\({}^{2}\).
Figure 2: Health facilities by level of care
Regarding the linear regression between the health facilities' distances from the nearest road (predictor variable, \(x\)) and the skilled deliveries conducted (outcome variable, \(y\)), the linear equation obtained was \(y=50.727-4.4089x\). The coefficient of determination (R\({}^{2}\) value) was 0.023 and the adjusted R-squared value was 0.018. The linear equation had a negative slope.
## 8 Discussion
The mean distance of the health facilities from the nearest major roads was 1.8 kilometres. Using the 5 km radius buffer around health facilities, we established that there are underserved areas in Kisumu County, mostly located away from major roads and towns. The well-served areas are mostly along major roads and towns. The largest of these pockets extends between Nyando and Muhoroni sub counties.
[PERSON] et al. established in Zambia that the further away a pregnant woman resides from a health facility, the lower the likelihood of her accessing skilled delivery services. The assumption in that study, as in ours, was that a pregnant woman would seek skilled delivery in the health facility located closest to her residence.
In our study, the coefficient of determination (R\({}^{2}\) value), using the distance of health facilities from the nearest roads as the predictor variable and the proportion of skilled deliveries as the outcome variable, was 2%. This is a low value. It means that whereas the distance of a health facility to the nearest major road is inversely proportional to the skilled deliveries conducted, as indicated by the negative slope of the linear equation (the further a facility is located from a major road, the fewer the skilled deliveries conducted in that facility, and vice versa), such a model is still weak in establishing such an effect.
However, our study focused on the use of major roads. It did not take into account the use of minor roads and the terrain around the location of the health facilities.
The mean skilled delivery in Kisumu County for the study period was 42.57%, which is lower than the national target of 65% skilled deliveries (MOH, 2013). Given that the skilled delivery level in Kisumu County is much lower than the national target, it is possible that there are other factors at play besides distance to health facilities. These could include
\begin{table}
\begin{tabular}{|l|c|c|c|c|c|} \hline
**Variable** & **Obs.** & **Mean** & **Std. Dev.** & **Min** & **Max** \\ \hline \% skilled deliveries & 156 & 42.57429 & 52.083 & 0 & 357.8 \\ \hline Distance from road (km) & 156 & 1.849027 & 1.827 & 1.371 & 9.495 \\ \hline \end{tabular}
\end{table}
Table 1: Summary of proportion of skilled deliveries and facilities' distances from the nearest roads.
Figure 4: Voronoi polygon on coverage
Figure 3: Underserved areas in Kisumu County
\begin{table}
\begin{tabular}{|l|r|r|r|} \hline
**Source** & **SS** & **df** & **MS** \\ \hline Model & 10062.17 & 1 & 10062.17 \\ \hline Residual & 410395.34 & 154 & 2664.90 \\ \hline Total & 420457.51 & 155 & 2712.63 \\ \hline \end{tabular}

Number of observations = 156; R-squared = 0.0239; Adj. R-squared = 0.0176; Root MSE = 51.623.
\end{table}
Table 2: STATA Output: Linear regression of distance and proportion of skilled deliveriesthe other UHC tracer such as health care worker density, health access by way of ability to pay for transport to the health facilities and the quality of service offered at the health facilities. Future studies ought to investigate these.
Indeed, [PERSON] et al established that the effect of distance from a health facility was not significant after controlling for other variables. Women most commonly cited distance and/or lack of transport as reasons for not delivering in a health facility but over 60% gave other reasons including 20.5% who considered health facility delivery unnecessary, 18% who cited abrupt delivery as the main reason and 11% who cited high cost ([PERSON], 2013).
However, some facilities conducted skilled deliveries exceeding the national target. This can be explained by in-referrals that cause such facilities to exceed their projected workloads.
## 9 Conclusion
There are pockets of underserved areas in Kisumu County regarding physical access to health care services. The mean skilled delivery in Kisumu County is lower than the national target. The distance of health facilities from the nearest major road does not significantly affect the conduct of skilled deliveries in Kisumu County.
## 10 Recommendations
There is a need to conduct a further study on how RMNCH disease indices, communicable disease indices, and service capacity and access indices affect the UHC index in Kisumu County.
## References
* [1][PERSON] 2018 (2018) A concept for universal health coverage in Kisumu. Department of Health.
* [2][PERSON], [PERSON] 2013 (2013) Factors influencing place of delivery for women in Kenya: an analysis of the Kenya demographic and health survey, 2008/2009. BMC Pregnancy and Childbirth.
* [3][PERSON] 2013 (2013) HEALTH SECTOR STRATEGIC AND INVESTMENT PLAN (KHSSP) JULY 2013-JUNE 2017. [PERSON]
* [4][PERSON] 2018 (2018) District Health Information System. https://hiskenya.org.
* [5][PERSON], [PERSON], [PERSON] 2011. The Influence of Distance and Level of Care on Delivery Place in Rural Zambia: A Study of Linked National Data in a Geographic Information System. PLOS Medicine.
* [6][PERSON]. [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON] AND [PERSON] 2008. Accessibility and [PERSON] of delivery care within a Skilled Care Initiative in rural Burkina Faso. Tropical Medicine and International Health13 (44-52).
* [7][PERSON] 2008 (2008) Trends in Maternal Mortality: 1990 to 2008. Geneva, The World Bank.
* [8][PERSON] 2017 (2017) Tracking Universal Health Coverage: 2017 Global Monitoring Report. World Health Organisation.
* [9][PERSON] 2014 (2019) Determining health-care facility catchment areas in Uganda using data on malaria-related visits. World Health Organisation Bulletin.
ISPRS Archives: Effect of the Distances of Public Health Facilities from the Nearest Major Roads on Skilled Deliveries Conducted in Kisumu County, Kenya. K. Rombosia, E. Oele, N. Rangara, J. Mwaura, B. Mitto, E. Ondura, D. Onyango, C. Akoth. https://doi.org/10.5194/isprs-archives-xlii-4-w14-203-2019, 2019, CC-BY.
# Automatic Extraction of Building Outline from High Resolution Aerial Imagery
[PERSON] Wang
EagleView Technology Corp.
25 Methodist Hill Dr. Rochester, NY 14623, the United States [EMAIL_ADDRESS]
###### Abstract
In this paper, a new approach for automated extraction of building boundary from high resolution imagery is proposed. The proposed approach uses both geometric and spectral properties of a building to detect and locate buildings accurately. It consists of automatic generation of high quality point cloud from the imagery, building detection from point cloud, classification of building roof and generation of building outline. Point cloud is generated from the imagery automatically using semi-global image matching technology. Buildings are detected from the differential surface generated from the point cloud. Further classification of building roof is performed in order to generate accurate building outline. Finally classified building roof is converted into vector format. Numerous tests have been done on images in different locations and results are presented in the paper.
Point Cloud, Extraction, Building, Image, Outline, Automation.
## 1 Introduction
Building outline is important geospatial information for many applications, including planning, GIS, tax assessment, insurance, 3D city modelling, etc. Extracting buildings/building outlines automatically from digital images has been an active research area in both the photogrammetry and computer vision communities for decades. Numerous approaches have been developed to extract buildings automatically from digital imagery or elevation data such as LiDAR data. Reviews of methods for automatic building extraction from digital imagery can be found in [PERSON], 1999 and [PERSON], 2004. Some of the methods for automatic extraction and reconstruction of buildings using LiDAR data are discussed in [PERSON] and [PERSON], 2008. In this paper, an automated approach for extraction of building outlines from digital imagery is presented. The proposed method uses both spatial information of objects, derived from the imagery using computer vision techniques, and spectral information to detect buildings and delineate building outlines accurately. It consists of four major steps, i.e. generation of a point cloud from imagery using semi-global image matching, detection of buildings from the generated point cloud, classification of building roofs and creation of building outlines in vector format. Automatic generation of the point cloud is based on semi-global image matching. A hierarchical approach is implemented in image matching in order to achieve high accuracy of point matching. In this approach, matching starts with low-resolution images and the match results are used as a reference for matching at the next level. After image matching is finished, 3D coordinates are computed for every matched point.
Once the point cloud is generated, an approximate bare-earth surface is created. There are various approaches for computing a bare-earth surface from LiDAR data; a polynomial surface fitting algorithm is used for automatic generation of the bare-earth surface in this paper. After the bare-earth surface is generated, a differential surface is created by subtracting the bare-earth surface from the point cloud. Buildings are detected by checking the elevation gradients in the differential surface, and their locations and sizes are computed.
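As an illustrative sketch of this step, the snippet below fits a first-order (planar) trend surface by least squares as a simplified stand-in for the paper's polynomial bare-earth fit, forms the differential surface, and flags points above an assumed 2 m threshold; the function name and threshold are not from the paper.

```python
import numpy as np

def above_ground_mask(points, height_threshold=2.0):
    """Fit a planar trend surface z ~ a*x + b*y + c by least squares
    (a first-order stand-in for the paper's polynomial bare-earth fit)
    and flag points rising more than `height_threshold` above it."""
    pts = np.asarray(points, dtype=float)
    A = np.column_stack([pts[:, 0], pts[:, 1], np.ones(len(pts))])
    coeffs, *_ = np.linalg.lstsq(A, pts[:, 2], rcond=None)
    residuals = pts[:, 2] - A @ coeffs   # the differential surface
    return residuals > height_threshold
```

A higher-order polynomial follows the same pattern with extra columns (x\({}^{2}\), xy, y\({}^{2}\), ...) in the design matrix; the flagged clusters are then checked for building-like size and elevation gradients.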
In order to generate an accurate building outline, the building roof is classified further using the pixels' radiometric features. Classification of the building roof starts with generation of seed areas which are smooth and have uniform surface properties. Once seed areas are generated, a number of spatial and spectral properties in these areas are computed and used as a "pattern" for classification of the building roof. In order to eliminate the effect of trees, algorithms are developed to detect both green and leaf-off trees.
After the building roof is classified, a rectangular building outline is generated for it. First, the edge pixels of the classified building roof are traced and a closed polygon is generated. To create a rectangular outline, a split-and-merge process is applied to split the traced building boundary into segments based on the curvature of the polygon. For each line segment, a straight line is fitted. Once all line segments are fitted, intersections between consecutive segments are computed as roof corners.
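The corner computation is a standard line-line intersection between consecutive fitted segments; a minimal sketch, with each fitted segment represented by two points on its infinite line (names are illustrative):

```python
def line_intersection(p1, p2, p3, p4):
    """Intersection of the infinite line through p1, p2 with the line
    through p3, p4 (each a fitted roof-edge segment), via Cramer's rule.
    Returns (x, y), or None if the lines are parallel."""
    a1, b1 = p2[1] - p1[1], p1[0] - p2[0]
    c1 = a1 * p1[0] + b1 * p1[1]
    a2, b2 = p4[1] - p3[1], p3[0] - p4[0]
    c2 = a2 * p3[0] + b2 * p3[1]
    det = a1 * b2 - a2 * b1
    if det == 0:
        return None  # parallel edges: no corner between these segments
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)
```

For a rectangular roof, intersecting each fitted edge with its successor in boundary order yields the four corner coordinates of the outline polygon.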
The developed approach has been tested with a large number of images. The test results and their statistics will be presented in Section 5 of this paper.
## 2 Building Model
Basically, automatic extraction of objects from imagery includes two major tasks: one is the automatic recognition of objects, while the other is locating objects accurately. In order to recognize objects in the imagery, the properties of the objects to be extracted should be used. For automatic extraction of buildings, some generic knowledge of buildings should be used in the extraction process. The following covers some generic knowledge of a building:
a. A building has a certain height
b. A building has a certain size
c. A building usually has a regular shape
d. A building has a smooth roof surface
e. A building usually has a homogeneous roof surface
f. A building may be occluded by trees or shadow
Properties a to d are geometric properties of a building, property e is a radiometric property, and property f describes the contextual relation between a building and other objects. These properties are used in the following extraction processes.
## 3 Automatic generation of point cloud from digital imagery
In the last few years, significant progress has been made in computer vision on automatic generation of point clouds from stereo image pairs. One of the best technologies for automatic point cloud generation is semiglobal image matching, developed by [PERSON] (2008). Its main advantage is that it tries to find a corresponding point for every pixel in the image. Since it uses global rather than purely local information from a small window, the matching results are more reliable, especially in poorly textured areas such as road surfaces, building roofs, and water surfaces. In this paper, the point cloud is generated using semiglobal image matching in three major steps:
a. Computation of orientation parameters of the image pair
b. Rectification of the image pair
c. Image matching and generation of the point cloud
### Computation of Orientation Parameters and Rectification of Image Pair
There are two ways to compute accurate orientation parameters of the images: triangulate all images in the area in one block (or sub-blocks), or compute the orientation parameters of one pair at a time. The general procedure is the same in both cases: feature extraction, feature matching, and bundle adjustment with the matched points. In this study, the Affine Scale-Invariant Feature Transform (ASIFT) operator ([PERSON] and [PERSON], 2009) is used for feature extraction and matching. The matched feature points are then used to compute the images' orientation parameters by least squares bundle adjustment.
Before image matching, the images have to be rectified so that both have a similar image scale and, more importantly, corresponding points lie on the same row in both images. The standard rectification process is used in this paper to generate the rectified images.
### Image Matching by SemiGlobal Matching (SGM)
SGM is an image matching approach developed in computer vision in recent years ([PERSON], 2008). Unlike traditional image matching, SGM uses an entropy-based cost derived from mutual information as the matching measure. The entropy is computed by double convolution of the joint probability function with a Gaussian kernel, defined as:
\[\begin{split}& H_{I_{1},I_{2}}=\sum_{p} h_{I_{1},I_{2}}(p,d)\\ & h_{I_{1},I_{2}}(p,d)=-\log[P_{I_{1},I_{2}}(p,d)\otimes g]\otimes g /n\end{split} \tag{1}\]
where \(P_{I_{1},I_{2}}(p,d)\) is the joint probability of pixel p in the target image with the pixel at disparity d in the match image, g is the Gaussian kernel function, and n is the size of the window.
The matching cost is defined by the Mutual Information (MI) as:
\[c(p,d)=-MI_{I_{1},I_{2}}(p,d)=-h_{I_{1}}(p)-h_{I_{2}}(p,d)+h_{I_{1},I_{2}}(p,d) \tag{2}\]
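As a concrete illustration, the entropy terms of Eqs. (1) and (2) can be sketched with NumPy and SciPy. The bin count, kernel width, and image normalisation below are illustrative assumptions, not settings from the paper:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def mi_cost_table(img1, img2, bins=16, sigma=1.0, eps=1e-8):
    """Sketch of the MI-based matching cost of Eqs. (1)-(2): build the
    joint intensity histogram of two (pre-aligned) images, smooth it with
    a Gaussian kernel, take -log, smooth again, and combine joint and
    marginal entropy terms into a per-intensity-pair cost table."""
    # quantize intensities in [0, 1) into `bins` levels (assumption)
    i1 = np.clip((img1 * bins).astype(int), 0, bins - 1).ravel()
    i2 = np.clip((img2 * bins).astype(int), 0, bins - 1).ravel()
    n = i1.size
    # joint probability P_{I1,I2}
    P = np.zeros((bins, bins))
    np.add.at(P, (i1, i2), 1.0 / n)

    def h(p):
        # h = -log(P (*) g) (*) g / n, as in Eq. (1)
        return gaussian_filter(-np.log(gaussian_filter(p, sigma) + eps), sigma) / n

    h12 = h(P)             # joint entropy term
    h1 = h(P.sum(axis=1))  # marginal of image 1
    h2 = h(P.sum(axis=0))  # marginal of image 2
    # c = -MI = -h1 - h2 + h12, as in Eq. (2)
    return h12 - h1[:, None] - h2[None, :]
```

During matching, SGM looks up c(p, d) in this table for the intensity pair formed by pixel p and its disparity-d candidate.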
To reduce mismatches, smoothness constraints are introduced into the energy, which is defined as:
\[E(D)=\sum_{p}\Big(c(p,D_{p})+\sum_{q\in N_{p}}P_{1}\,T\big[|D_{p}-D_{q}|=1\big]+\sum_{q\in N_{p}}P_{2}\,T\big[|D_{p}-D_{q}|>1\big]\Big) \tag{3}\]

where \(N_{p}\) is the neighborhood of pixel p, \(P_{1}\) and \(P_{2}\) are penalties for disparity changes of one and of more than one, respectively, and \(T[\cdot]\) equals 1 if its argument is true and 0 otherwise. The disparity image D is obtained by minimizing this energy along multiple 1D paths.
([PERSON], 2004). In this paper, polynomial surface fitting is used to generate the bare-earth surface from the point cloud, with the following form:
\[Z=\sum_{k=0}^{n}a_{k}X^{k}Y^{n-k}+b \tag{4}\]
where n is the order of the polynomial, which is determined by the magnitude of the terrain relief.
The generation of the bare-earth surface is an iterative process in which an initial surface is created using all points of the point cloud, as shown in Figure 2. After the initial surface is created, points above it are excluded and only the remaining points are used to create a new surface. This process is repeated until the variance of elevation between the created surface and the points used to create it is smaller than a given threshold. After the bare-earth surface is created, a differential surface is generated by subtracting it from the original point cloud. The differential surface represents only objects above the terrain, such as buildings and trees, which facilitates the detection of buildings.
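The iterative fit described above can be sketched as follows; the polynomial basis follows Eq. (4) literally (degree-n terms plus a constant), and the tolerance and iteration cap are illustrative assumptions:

```python
import numpy as np

def bare_earth(x, y, z, order=2, tol=0.1, max_iter=20):
    """Sketch of the iterative bare-earth fit of Eq. (4): fit a
    polynomial surface Z = sum_k a_k X^k Y^(n-k) + b to all points,
    drop points above the fitted surface, and refit until the residual
    variance of the retained points falls below a threshold."""
    keep = np.ones(len(z), dtype=bool)
    # design matrix: columns x^k * y^(order-k), k = 0..order, plus constant
    A = np.column_stack(
        [x**k * y**(order - k) for k in range(order + 1)] + [np.ones_like(x)]
    )
    coef = None
    for _ in range(max_iter):
        coef, *_ = np.linalg.lstsq(A[keep], z[keep], rcond=None)
        resid = z - A @ coef
        if resid[keep].var() < tol:
            break
        keep = resid <= 0  # exclude points above the current surface
        if keep.sum() < A.shape[1]:
            break
    return coef, keep
```

Points flagged `False` in `keep` (buildings, trees) end up in the differential surface once the final fitted surface is subtracted from the cloud.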
### Automatic Detection of Trees
Once the differential surface is created, some trees can be detected by checking the variance of surface normal directions ([PERSON] and [PERSON], 1997). However, this approach may not work for leaf-off trees. To detect trees without leaves, an algorithm based on the variance of edge directions is proposed. Figure 3 shows the edge information extracted from a portion of the image. Only the main structures of buildings, such as ridge lines and roof edges, are represented, while edges on trees show a lot of variation. The edge-orientation variance is computed as
\[E_{v}=\sum_{i=1}^{m}\sum_{j=1}^{n}(O_{ij}-\overline{O})^{2}/N \tag{5}\]
where m and n are the dimensions of the window, \(O_{ij}\) is the orientation of the edge at pixel (i, j), \(\overline{O}\) is the average edge orientation, and N is the total number of pixels in the window.
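Eq. (5) amounts to a moving-window variance, which can be sketched with a uniform filter. The window size and the neglect of angular wrap-around are simplifying assumptions:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def orientation_variance(orient, win=15):
    """Per-pixel variance of edge orientations in a win x win window,
    as in Eq. (5): high variance flags trees, low variance flags the
    straight ridge lines and roof edges of buildings. (A production
    version should handle the wrap-around of angles; ignored here.)"""
    mean = uniform_filter(orient, win)
    # Var[O] = E[O^2] - E[O]^2 over the window
    return uniform_filter(orient ** 2, win) - mean ** 2
```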
Figure 4 shows the result of automatic tree detection. As can be seen, trees with and without leaves are extracted well.
### Automatic Detection of Buildings
After trees are detected in the image, they are excluded from the differential surface. The elevation gradient of the differential surface is then computed, and building boundaries are detected by thresholding it. The entire building roof is detected by comparing the elevation of neighboring pixels against the elevation of the detected building boundary. Figure 5 shows the result of building detection.
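The gradient-thresholding step can be sketched as follows; the threshold values and the use of a simple height threshold in place of the paper's neighbour-comparison region growing are illustrative assumptions:

```python
import numpy as np

def detect_buildings(diff_surface, grad_thresh=2.0, height_thresh=2.5):
    """Sketch of the detection step on the differential surface (trees
    already removed): boundaries are pixels where the elevation gradient
    exceeds a threshold; the roof is the elevated region they enclose,
    approximated here by thresholding the height itself."""
    gy, gx = np.gradient(diff_surface)
    boundary = np.hypot(gx, gy) > grad_thresh
    roof = diff_surface > height_thresh
    return boundary, roof
```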
To extract the building outline more accurately, pixels within the detected building areas are further classified using the radiometric properties of the building. Seed areas are first selected from the detected buildings and their features are computed. The features of the other pixels in the building areas are then computed and compared with those of the seed areas; if the difference is within a defined threshold, the pixels are classified as building roof. Some classification results are shown in Figure 6.
### Automatic Generation of Building Outline
For many applications, it is not convenient to use the extracted buildings in raster format; building outlines should be created in vector format. To convert the classified building roof from raster to vector, its edge is first traced, yielding a closed polygon. The polygon is then split by a split-and-merge process. The traced edge is initially split into two segments at its midpoint. Splitting checks the distance of each point on the edge to the line connecting the two end points of the segment, and the edge is split at the point of maximum distance. The resulting segments are split again until the maximum distance of edge points is less than a given threshold. After splitting, the orientations of consecutive segments are compared at every split point, and segments with similar orientations are merged. A line is fitted to every segment, and the intersection between consecutive segments is computed as a building corner. Figure 7 shows the extracted building outline.
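The split step can be sketched as a recursive maximum-distance subdivision; the merge, line-fitting, and corner-intersection steps described above are omitted here, and the tolerance is an illustrative assumption:

```python
import numpy as np

def split_boundary(points, tol=1.0):
    """Sketch of the split step of the split-and-merge process: the
    traced edge is recursively split at the point of maximum distance
    from the chord joining a segment's end points, until every point
    lies within `tol` of its chord. Returns the indices of split points
    (candidate corners)."""
    def rec(i, j, out):
        if j - i < 2:
            return
        a, b = points[i], points[j]
        dx, dy = b - a
        seg = points[i:j + 1]
        # perpendicular distance of each point to the chord a-b
        d = np.abs(dx * (seg[:, 1] - a[1]) - dy * (seg[:, 0] - a[0]))
        d = d / (np.hypot(dx, dy) + 1e-12)
        k = int(d.argmax())
        if d[k] > tol:
            rec(i, i + k, out)
            out.append(i + k)
            rec(i + k, j, out)
    corners = [0]
    rec(0, len(points) - 1, corners)
    corners.append(len(points) - 1)
    return corners
```

In the full method, a straight line would then be fitted to each split segment and consecutive fitted lines intersected to obtain the roof corners.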
Due to occlusion by trees or the shadows they cast, the extracted building outline in occluded areas is usually incorrect and some corner points may be missed. To reduce this effect, occluded areas should be detected and the missing edge segments and corners recovered. Since a building usually has a regular shape, an occluded area can be detected by checking the regularity of the traced edge. Once it is detected, the missing edge segments and corner points can be inferred from knowledge of the roof structure near the occluded area. Figure 8 shows the extracted building outline with correction of occlusions.
## 5 Tests
The developed approach has been tested on a large number of aerial images. The test images have a GSD of 10 cm and cover an area of over 100 square kilometers. The test area contains newly developed subdivisions without big trees, relatively old subdivisions with many trees between or surrounding buildings, and some established subdivisions with big trees surrounding buildings. Test results without trees, with trees, and with shadows are shown in Figures 9 to 11, respectively.
Some statistical numbers of the tests are given in Table 1. As shown in the table, a very high accuracy of building extraction has been achieved with the developed approach. For open areas, the accuracy of automatically extracted building outlines is 99.3%, and 89.3% of the automatically extracted building outlines are accurate. Only half a percent of the extracted buildings are false, and about the same percentage of buildings are missed. In partially occluded areas, 1,128 of the 1,153 extracted buildings are true buildings, which accounts for 97.8% of the extracted buildings. The percentage of both false and missed buildings is around one percent. This shows that very reliable building outlines can be extracted in both open and partially occluded areas.

Figure 5: Automatically Detected Buildings

Figure 6: Classified Building Roof

Figure 7: Extracted Building Outline

Figure 8: Extracted Building Outline with Correction of Occlusion
## 6 Conclusions
A new approach for automatic extraction of building outlines from high-resolution imagery has been developed. It uses both geometric and radiometric properties of buildings to recognize them and delineate their boundaries accurately. The test results show that 99% accuracy is achieved in open areas and that about 98% of the extracted buildings are correct. About 90% of the extracted building outlines are very accurate and can be used for various applications.
## References
* [PERSON] (2004) [PERSON], 2004. Object Extraction and Revision by Image Analysis using Existing Geodata and Knowledge: Current Status and Steps towards Operational Systems. ISPRS Journal of Photogrammetry & Remote Sensing, 58 (2004), pp. 129-151.
* [PERSON] and [PERSON] (1997) [PERSON] and [PERSON], 1997. Extracting Buildings from Digital Surface Models. IAPRS, Vol. 32, Part 3-4W2, pp. 27-34.
* [PERSON] and [PERSON] (2008) [PERSON] and [PERSON], 2008. A Comprehensive Automated 3D Approach for Building Extraction, Reconstruction, and Regularization from Airborne Laser Scanning Point Clouds. Sensors 2008, 8, pp. 7323-7343.
* [PERSON] (2008) [PERSON], 2008. Stereo Processing by Semiglobal Matching and Mutual Information. IEEE transactions on Pattern Analysis and Machine Intelligence, Vol. 30, No. 2, pp. 328-341.
* A Survey Focusing on Buildings. Computer Vision and Image Understanding, Vol. 74, No. 2, pp. 138-149.
* [PERSON] and [PERSON] (2009) [PERSON] and [PERSON], 2009. ASIFT: A new framework for fully affine invariant image comparison. SIAM Journal on Imaging Sciences, 2(2), 438-469.
* [PERSON] and [PERSON] (2004) [PERSON] and [PERSON], 2004. Experimental Comparison of Filter Algorithms for Bare-Earth Extraction from Airborne Scanning Point Clouds. ISPRS Journal of Photogrammetry & Remote Sensing, 59 (2004), pp. 89-101.
Figure 11: Extracted Building Outlines in Heavily Occluded Area
Figure 10: Extracted Building Outlines in Partial Occluded Area
Figure 9: Extracted Building Outlines in Open Area
isprs | AUTOMATIC EXTRACTION OF BUILDING OUTLINE FROM HIGH RESOLUTION AERIAL IMAGERY | Yandong Wang | https://doi.org/10.5194/isprs-archives-xli-b3-419-2016 | 2016 | CC-BY | isprs/2c64bbf0_3632_46c4_837c_5be9f88b9bb9.md
# Tree Canopy Height Estimation and Accuracy Analysis Based on UAV Remote Sensing Images
[PERSON] 1, [PERSON] 1, [PERSON] 1, [PERSON] 2, [PERSON], [PERSON] 1, [PERSON] 3

2 State Grid Ningxia Electric Power Co.Ltd, Ningxia 750001, China
3 School of Geodesy and Geomatics, Wuhan University, Wuhan 430079, China
###### Abstract
The development of unmanned aerial vehicle (UAV) technology makes traditional danger-tree monitoring more digital and convenient. To explore the accuracy of tree height measurement by a general consumer UAV, this paper takes a university campus as the research area. High-resolution images acquired by the UAV were processed, and the accuracy of extracting tree height based on the digital canopy height model (CHM) and on point cloud information, respectively, was compared. The experimental results show that the mean absolute error of tree height based on point cloud extraction is 10.28 cm, better than that based on the CHM. The tree height extracted from the CHM tends to be less than the measured height. In addition, the accuracy of extracting the height of round crowns is better than that of conical crowns. The correlations between the extracted and measured values for the two methods were 0.987 and 0.994, respectively, which indicates that danger-tree monitoring by UAV is feasible.
UAV, Danger tree monitoring, CHM, Point cloud, Accuracy verification.
## 1 Introduction
The height of the vegetation canopy is an important indicator both in ecological terms and for the safe operation of transmission lines. Tall trees threaten the safe operation of power grids and cause accidents such as tripping and short circuits. Fast and accurate estimation of tree height along a transmission line enables early detection of tree hazards, which is of great significance for ensuring power supply ([PERSON] et al., 2019; [PERSON] et al., 2020).
At present, methods of measuring tree height divide into traditional measurement and remote sensing inversion. Traditional methods mainly rely on altimeters or laser rangefinders; their efficiency is low, and instrument quality and human factors affect measurement accuracy ([PERSON] et al., 2011; [PERSON] et al., 2015; [PERSON] et al., 2016). Several remote sensing techniques can invert tree height. Polarimetric interferometric synthetic aperture radar (Pol-InSAR) can obtain vegetation canopy height over large areas, but it is still experimental and its accuracy in practical applications is limited ([PERSON] et al., 2017; [PERSON] et al., 2017; [PERSON] et al., 2016). Airborne lidar and ground-based radar (GBR) can extract tree height accurately, but their cost is high for large-scale measurement ([PERSON] et al., 2016; [PERSON] et al., 2019). UAVs have the advantages of low cost, easy operation, and flexible acquisition cycles, and can acquire high-resolution images when equipped with optical sensors; they are widely used in topographic mapping, agricultural production, and other fields.
Many scholars have studied the use of UAV to measure tree height. Some scholars used UAV high-resolution images and ground data as the basis to build a canopy-tree height model to extract tree height ([PERSON] et al., 2017; [PERSON] et al., 2009). He and others obtained forest resource information such as tree height, 3D coordinates, crown width and area by using UAV stereo photography technology ([PERSON] et al., 2020). At present, tree height extraction using UAV is mainly divided into two methods: based on CHM model and point cloud extraction. [PERSON] et al. used CHM to extract forest parameters such as number of trees, single tree height and canopy density, which shows that this method can replace manual measurement ([PERSON] et al., 2019). The resolution of CHM and the size of the sliding window can affect the accuracy of tree height extraction ([PERSON] et al., 2019). Some scholars use the maximum interclass variance method to split the tree point cloud into two parts, tree point cloud and ground point cloud, to obtain the tree height ([PERSON] et al., 2017). It is difficult to obtain ground point cloud data in densely forested areas, but using total station to measure terrain data can effectively improve this problem and finally obtain more accurate tree height under high canopy density ([PERSON] et al., 2019).
In this paper, we used a general consumer-grade UAV to acquire high-resolution optical images of the forest area near a Campus of University. Based on the digital canopy height model and point cloud data, we calculated the height of some trees in the study area. We compared the tree height accuracy obtained by different methods, and analyzed the difference of extraction accuracy between different tree species.
## 2 Study Area and Data Preparation
### Study Area
The study area is located near a university campus in the northeastern part of the Guangxi Zhuang Autonomous Region. It lies at low latitude, between 24°59′52″ and 25°14′17″ north and between 110°14′46.31″ and 110°29′13.96″ east. It has a subtropical monsoon climate with abundant rainfall and a mild climate. The vegetation in the study area is mainly planted Osmanthus fragrans and Ficus.
### Datasets
We measured the heights of 50 trees in the study area in May 2021, mainly Osmanthus fragrans and Ficus, using remote elevation measurement (REM). The height of each tree was estimated as the mean of three measurements with a total station, a Nikon instrument with a stated accuracy of ±(2 mm + 2 ppm × D).
An off-the-shelf micro-quadcopter is chosen for this study, and the UAV model is DJI PHANTOM 4 RTK. It is equipped with an RGB camera, and the main parameters are shown in Table 1. The weather conditions at the time of image acquisition were sunny and breezy. In order to generate sufficient point cloud density to improve the accuracy of tree height extraction, the flight height of the UAV was set to 80m, the heading overlap rate was 85%, and the side overlap rate was 75%. In addition, we used RTK to measure control point coordinates, which were used to improve the accuracy of post-processing of aerial survey data. A total of 378 images were taken for this experiment, with an aerial survey area of 0.062 square kilometers, and the image acquisition process took a total of 24 min.
## 3 Methodology
### Data processing
Because factors such as illumination, wind direction, and air flow can cause missing information and geometric distortion, we first checked the images and deleted the unqualified ones. We used Pix4D software to process the images: first, image control points are marked on the images and matching is corrected based on their coordinates; next, aerial triangulation is carried out to obtain a georeferenced 3D point cloud and 3D model. The Pix4D results also include 2D products such as the digital surface model (DSM), digital elevation model (DEM), and orthophoto.
### Extraction of tree height
#### 3.2.1 Tree height extraction based on CHM
Extracting tree height using the CHM involves two steps: building the model and extracting the heights at tree vertices. The CHM records information such as tree height and crown size. We obtained the CHM by subtracting the DEM (representing ground height) from the DSM (representing tree surface height) in the study area. The resolution of the CHM is 0.05 cm/pix, and the model is shown in the figure below.
We use the neighborhood analysis tool in ArcMap to find tree vertices in the CHM. Based on the shape of tree canopies, we use a circular window whose radius was determined after several tests according to canopy size and CHM resolution. We set the neighborhood radius to 150 pixels and use the focal statistics tool to extract the maximum value in each neighborhood as a candidate tree vertex. We then use the orthophotos to remove false "tree vertices" located on roads or buildings, obtaining the tree height information for the study area. The extraction results are shown in Figure 5, where the red points are the tree vertices. Because canopy sizes differ, the vertices of a small number of trees were not identified during the neighborhood analysis.
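The focal-statistics step amounts to local-maximum detection, which can be sketched as follows. A square window stands in for the circular one, and the minimum-height threshold and the small demo radius are illustrative assumptions:

```python
import numpy as np
from scipy.ndimage import maximum_filter

def tree_tops(chm, radius=150, min_height=2.0):
    """Sketch of tree-vertex detection on a CHM: a pixel is kept as a
    tree vertex if it equals the maximum of its neighbourhood and
    exceeds a minimum height (filtering out flat ground, roads, and
    low objects)."""
    local_max = maximum_filter(chm, size=2 * radius + 1)
    return (chm == local_max) & (chm > min_height)
```

In the paper's workflow, the remaining false vertices on buildings are removed manually with the help of the orthophoto.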
\begin{table}
\begin{tabular}{|l|c|} \hline Parameter & Value \\ \hline Maximum flight speed & 72 km/h \\ \hline PTZ range & \(-90^{\circ}\) to \(+30^{\circ}\) \\ \hline Camera lens & FOV 94\({}^{\circ}\), f/2.8 \\ \hline Equivalent focal length & 20 mm \\ \hline Effective pixels & 12.4 million \\ \hline \end{tabular}
\end{table}
Table 1: Detailed parameters of UAV.
Figure 1: The location of the study area. The red rectangle illustrates the coverage of study area.
Figure 2: Digital surface model.
#### 3.2.2 Tree height extraction based on point cloud
After processing the images, we obtained about 47.36 million points in the study area, with a density of about 248.02 points/m3. The point clouds are of high quality and can be used as data for tree height extraction.
First, we filter the 3D point cloud to prevent isolated noise points from affecting the extraction results. Next, we extract single-tree point clouds, comprising ground points and vegetation points, excluding interference from other trees as much as possible. The point cloud extraction results for a single tree are shown in Figures 6 and 7.
Based on the distribution of point cloud height values, the single-tree point cloud is easily divided into a ground point cloud and a vegetation point cloud. Because the terrain under the tree is relatively flat, we take the average elevation of the ground points as the ground elevation and the maximum elevation of the vegetation points as the tree vertex. The difference between the two is the height of the target tree.
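One way to automate the ground/vegetation split described above is an Otsu threshold on elevation (the maximum inter-class variance criterion cited in the introduction). This is a sketch under that assumption, not necessarily the authors' exact procedure:

```python
import numpy as np

def tree_height(z, bins=64):
    """Sketch of the point-cloud method: split a single-tree point
    cloud into ground and canopy with an Otsu (maximum inter-class
    variance) threshold on elevation, then subtract the mean ground
    elevation from the canopy maximum."""
    hist, edges = np.histogram(z, bins=bins)
    p = hist / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2.0
    w0 = np.cumsum(p)           # cumulative class-0 (ground) probability
    m = np.cumsum(p * centers)  # cumulative class-0 mean mass
    mt = m[-1]                  # global mean elevation
    with np.errstate(divide="ignore", invalid="ignore"):
        between = (mt * w0 - m) ** 2 / (w0 * (1.0 - w0))
    t = centers[int(np.nanargmax(between[:-1]))]
    ground = z[z <= t]
    canopy = z[z > t]
    return canopy.max() - ground.mean()
```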
Figure 4: Digital canopy height model.
Figure 5: Extraction results of tree vertices.
Figure 3: Digital elevation model.
Figure 6: The results of point cloud
Figure 7: Ficus.
## 4 Result Analysis
### Extraction results based on CHM
We compare the tree heights extracted from the CHM with those measured by total station, and calculate the correlation and mean absolute error between them. The results are shown in Figure 8.
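The two accuracy metrics used throughout Section 4 can be sketched as follows; the paper reports "correlation", and computing it as the coefficient of determination R² is an assumption on our part:

```python
import numpy as np

def mae_and_r2(measured, extracted):
    """Mean absolute error and coefficient of determination R^2
    between total-station tree heights and UAV-extracted heights."""
    measured = np.asarray(measured, float)
    extracted = np.asarray(extracted, float)
    mae = np.abs(extracted - measured).mean()
    ss_res = ((measured - extracted) ** 2).sum()
    ss_tot = ((measured - measured.mean()) ** 2).sum()
    return mae, 1.0 - ss_res / ss_tot
```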
Observing the distribution of absolute errors, most are below 50 cm, and only a few trees have errors above 60 cm. The trees with the largest absolute errors are Ficus. The main reasons are presumed to be that some points at the tree tops are discarded when the DSM is generated from the point cloud, while the large exposed roots of Ficus inflate the generated DEM. Both effects make the extracted tree height lower than the real height.
### Extraction results based on point cloud
The tree heights obtained by point cloud extraction are compared with the measured heights below.

The analysis shows that almost all absolute errors of point cloud extraction are less than 30 cm; only one tree has an error greater than 40 cm. The reason may be that isolated noise points remain in the point cloud after filtering.
### Extraction results based on different tree species
In addition to the extraction method, the different types of trees will also affect the extraction accuracy. To verify the effect of tree type on the extraction results, we compared the extraction results of two major tree species in the study area, Osmanthus fragrans and Ficus, and calculated the correlation coefficients and mean absolute errors. The results are shown in Table 2.
Figure 11: MAE of Point Cloud.
Figure 8: Correlation based on CHM.
Figure 10: Correlation based on Point Cloud.
Figure 9: MAE of CHM.
The height values of Ficus extracted by the CHM method are significantly lower than both the point-cloud results and the real tree heights, with a mean absolute error of 34.7 cm. The tree heights extracted from the point cloud are close to the real heights, with a mean absolute error of 13.45 cm. There are two main reasons for the error. First, the branches of Ficus are scattered and the crown is tapered, so some tree vertices are smoothed away during DSM generation, reducing the extracted height values. Second, the roots of some Ficus trees grow above the ground, and these thick roots are mistakenly treated as ground during DEM generation, raising the ground elevation. Both effects lower the tree heights extracted by the CHM method.
Compared with Ficus, the extracted heights of Osmanthus fragrans are significantly more accurate: the mean absolute errors of the two methods are 8.2 cm and 7.1 cm, respectively, and both are close to the real tree heights. The crown of Osmanthus fragrans is relatively flat and therefore easy to identify, while the crown of Ficus spreads outward with a sharp, scattered top and is harder to identify. At the same time, some points in the Ficus crown are easily removed as isolated noise. Crown shape thus affects tree height extraction: species with flat, smooth crowns are extracted more accurately than species with sharp, scattered crowns.
### Overall analysis
On the whole, both the CHM method and the point cloud method can extract tree height accurately, and the latter has better accuracy than the former. The method affects the extraction results: the tree height extracted by the CHM method is slightly lower than the real value, possibly because isolated tree vertices are deleted, while the tree height extracted from the point cloud is close to the real value. The extraction results also differ between tree species: the accuracy for flat crowns is higher than for conical crowns. The correlations of the two methods are 0.987 and 0.994, indicating that using a UAV for danger tree monitoring is feasible.
## 5 Conclusions
In this paper, we extracted tree canopy heights using a general consumer UAV and analyzed the effects of tree species and extraction method on accuracy. In general, the UAV can extract tree height accurately: the maximum error in this experiment is 67.4 cm, the minimum is 0.1 cm, and most errors are less than 50 cm. The tree height extracted from the point cloud is closest to the real value, with a mean absolute error of 10.28 cm. The results show that it is feasible to use a general consumer UAV for danger tree monitoring.
Crown shape affects tree height extraction. The minimum mean absolute error for Osmanthus fragrans is 8.2 cm, with a coefficient of determination R\({}^{2}\) of 0.966 between extracted and real values. The minimum mean absolute error for Ficus is 13.45 cm, with an R\({}^{2}\) of 0.987. Species with flat, smooth crowns are extracted more accurately than species with sharp, scattered crowns.
In addition to extracting tree height and canopy information, a UAV can also be used to model tree characteristics such as canopy density, volume, and stand density. Compared with traditional manual monitoring and lidar monitoring, UAVs are fast and low-cost and can improve work efficiency. However, the technology still has disadvantages. Extracting tree height requires ground information, but in dense woods a UAV can only capture the tops of trees; canopy occlusion causes loss of ground data and affects height extraction. In addition, flying a UAV inside woods is risky. More research is therefore needed on how to better use UAVs to monitor dangerous trees and improve measurement accuracy.
\begin{table}
\begin{tabular}{|l|c|c|c|c|c|} \hline \multicolumn{2}{|c|}{Setting} & \multicolumn{2}{c|}{CHM} & \multicolumn{2}{c|}{Point cloud} \\ \hline Species & Quantity & R\({}^{2}\) & MAE (cm) & R\({}^{2}\) & MAE (cm) \\ \hline Ficus & 25 & 0.986 & 34.7 & 0.987 & 13.45 \\ Osmanthus fragrans & 25 & 0.966 & 8.2 & 0.962 & 7.1 \\ \hline \end{tabular}
\end{table}
Table 2: Extraction results of different tree species.
Figure 12: Comparison of Ficus height.
Figure 13: Comparison of Osmanthus fragrans height.
## Acknowledgements
This work was supported by the Key R&D project of Ningxia Hui Autonomous Region (Grant No. 2021 BDE931027); Science and technology project of State Grid Ningxia Electric Power Co.Ltd (Grant No. 5229 DK2004P).
## References
* [1]C. C.
isprs | TREE CANOPY HEIGHT ESTIMATION AND ACCURACY ANALYSIS BASED ON UAV REMOTE SENSING IMAGES | J. Hao, Z. Fang, B. Wu, S. Liu, Y. Ma, Y. Pan | https://doi.org/10.5194/isprs-archives-xliii-b2-2022-129-2022 | 2022 | CC-BY | isprs/bdb76401_e51d_4568_80f4_1b8b40ce530e.md
In this study, a random forest (RF) algorithm and a model based on the fully convolutional neural network (FCN) architecture ([PERSON] et al., 2023) are comparatively analysed, using both Landsat and Sentinel data.
### Study site and data
We conducted our study in the Dabie Mountains, located at the junction of Anhui, Hubei, and Henan provinces, China. The range extends about 380 kilometers east-west and 175 kilometers north-south. It is a distribution area of rare and endangered wild animals and plants in China and a national nature reserve. However, there is a lack of research on long time-series, large-scale canopy height mapping in this region.
The forest range and GEDI (Global Ecosystem Dynamics Investigation) footprint distributions in the Dabie Mountains are shown in Figure 1. The forest area of the Dabie Mountains was derived by applying a forest-type coverage mask to the data. Each GEDI footprint has a diameter of 25 m, with 60 m spacing between consecutive footprints, and 8 parallel tracks sample the area simultaneously ([PERSON] et al., 2023).
### Data
In this study, the primary data utilized include spaceborne lidar data from GEDI, optical remote sensing data from Sentinel-2 and Landsat 8.
#### 2.2.1 GEDI data
NASA's Global Ecosystem Dynamics Investigation (GEDI) provides precise measurements of forest canopy height, canopy vertical structure, and surface elevation. GEDI covers regions between 51.6° N and 51.6° S and operated from 2018 to 2023, collecting a large volume of ground measurements. This study used the GEDI L2A Version 2 product for the Dabie Mountains acquired throughout 2019; Version 2 has higher positioning accuracy than Version 1. The L2A product provides ground elevation, canopy top height, and relative height (RH) metrics, making it highly suitable for modelling canopy height, and it is widely used in large-scale forest research ([PERSON] et al., 2023; [PERSON] et al., 2020; [PERSON] et al., 2023).
#### 2.2.2 Remote sensing images
The Google Earth Engine (GEE) catalogue provides a rich archive of historical imagery. Using the GEE platform, we obtained Landsat 8 and Sentinel-2 images for the entire year of 2019 and performed cloud removal and fusion to produce one composite image of the Dabie Mountains per sensor.
#### 2.2.3 Auxiliary data
In order to remove non-forest areas from the study area, we used the 2020 global 30 m land-cover map publicly released by [PERSON] et al. (2021).
### Methodology
### Overall workflow
In this study, we employed Landsat 8 and Sentinel-2 images as training data for feature extraction and prediction. GEDI data was used as reference data to extract canopy height information.
In order to compare the performance of different data sources on the same model, Landsat 8 and Sentinel-2 images were used to construct CHMs based on RF, respectively. In order to compare the performance of different models on the same data source, we leverage the FCN model of [PERSON] et al. (2023) to predict canopy height over the Dabie Mountains region and compare it with our RF model. When constructing their FCN model, [PERSON] et al. (2023) trained five separate models with different weights and then strategically fused the five resulting models to obtain an optimal outcome. Furthermore, we fine-tuned the FCN model by retraining it with Sentinel-2 and GEDI data from the Dabie Mountains region.
Upon acquiring the CHMs, we conducted precision evaluations and comparative analyses of the final predictions using GEDI data as validation datasets.
Figure 1: Forest range and GEDI footprint distribution in the Dabie Mountains, Central China.
Figure 2: Framework of the research methodology.
To ease reading, the CHMs based on RF with Landsat 8 or Sentinel-2 as input are referred to as L8 RF and S2 RF, respectively. The CHM based on the FCN model with Sentinel-2 as input is referred to as S2 FCN-5, and the CHM based on the retrained FCN model as S2 FCN-1; the suffixes 5 and 1 indicate whether the model fuses five sets of trained weights or uses a single one. The framework of the research methodology is shown in Figure 2.
### Random forest
RF is a powerful machine learning algorithm widely applied in data mining and predictive modeling. It is an ensemble learning method that predicts outcomes by constructing multiple decision trees and combining their outputs ([PERSON], 2001).
In this study, we chose RF as our predictive model to estimate forest canopy height. We integrated a variety of feature variables, including nine vegetation indices and six feature components in addition to the original bands of the remotely sensed image. By inputting these features into the RF model, we were able to generate comprehensive and extensive CHMs by regression prediction using canopy height data provided by GEDI and optical image data.
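The RF regression step described above can be sketched with scikit-learn. The feature count, synthetic data, and variable names below are illustrative assumptions, not the paper's actual inputs:

```python
# Sketch: RF regression of canopy height from per-pixel features
# (spectral bands + vegetation indices + components).
# The data here are synthetic stand-ins for the extracted pixel stack.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n_pixels, n_features = 500, 18          # e.g. 6 bands + 9 indices + 3 PCA
X = rng.random((n_pixels, n_features))  # stacked per-pixel predictors
y = 30 * X[:, 0] + rng.normal(0, 1, n_pixels)  # GEDI canopy height stand-in (m)

rf = RandomForestRegressor(n_estimators=100, random_state=0)
rf.fit(X, y)
pred = rf.predict(X)                    # wall-to-wall prediction per pixel
```

Applied to every pixel of the composite image, `predict` yields the continuous canopy height map.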
### Fully convolutional neural network
FCN is a deep learning model based on convolutional neural networks (CNNs) that lacks fully connected layers and consists solely of convolutional layers ([PERSON] et al., 2015). FCN is primarily employed for image segmentation tasks, aiming to generate pixel-level predictions for images. In semantic segmentation, FCN can predict the class membership of each pixel, thus achieving the objective of partitioning the image into distinct regions. The key advantage of FCN is its ability to handle input images of arbitrary sizes, as convolutional operations can be performed on inputs of any dimensions. Furthermore, FCN can produce output maps with the same dimensions as the input image.
In this study, FCN is applied to extract tree canopy height information from Sentinel-2 optical satellite images. The model combines sparse height data from the GEDI spaceborne lidar mission with Sentinel-2 satellite imagery to map the canopy height.
## 4 Experiment
The main steps in constructing CHM in this study include data preprocessing, model construction, accuracy assessment and comparison.
### Preprocessing
The preprocessing of GEDI mainly involves data screening and conversion into rasters, while the preprocessing of remote sensing image data focuses on cloud removal, fusion and feature extraction.
#### 4.1.1 GEDI data
In order to obtain high-precision canopy height, we filtered all GEDI data obtained in 2019. Mainly utilizing the attributes of L2A level products ([PERSON] et al., 2022; [PERSON] et al., 2021), the following conditions were set:
1. Collected at night.
2. In power beam mode.
3. Sensitivity not less than 0.9.
4. Quality flag equal to 1.
5. Degrade flag equal to 0.
6. The ground elevation of the GEDI footprint location differs from the SRTM elevation by less than 50 meters.
Only footprints that met all quality requirements were retained. Based on the 2020 land-cover data released by [PERSON] et al. (2021), we removed the GEDI footprints that fall outside the forest. For each laser footprint, we used the relative height at which 95% of the waveform energy is returned (RH95) as the GEDI canopy height ([PERSON] et al., 2021).
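The six quality filters can be expressed as a single boolean mask. This sketch assumes the shots sit in a pandas DataFrame; the column names are illustrative, not the exact GEDI L2A field names:

```python
# Sketch of the six L2A quality filters as one boolean mask.
# Column names and values are illustrative placeholders.
import pandas as pd

shots = pd.DataFrame({
    "solar_elevation": [-10.0, 5.0, -20.0],   # < 0 means night acquisition
    "beam_type":       ["power", "coverage", "power"],
    "sensitivity":     [0.95, 0.97, 0.85],
    "quality_flag":    [1, 1, 1],
    "degrade_flag":    [0, 0, 0],
    "elev_lowestmode": [210.0, 300.0, 215.0],  # GEDI ground elevation (m)
    "srtm_elev":       [205.0, 298.0, 400.0],  # SRTM elevation (m)
    "rh95":            [18.2, 22.5, 12.0],     # canopy height candidate (m)
})

mask = (
    (shots["solar_elevation"] < 0)                              # 1. night
    & (shots["beam_type"] == "power")                           # 2. power beam
    & (shots["sensitivity"] >= 0.9)                             # 3. sensitivity
    & (shots["quality_flag"] == 1)                              # 4. quality
    & (shots["degrade_flag"] == 0)                              # 5. degrade
    & ((shots["elev_lowestmode"] - shots["srtm_elev"]).abs() < 50)  # 6. SRTM check
)
filtered = shots[mask]            # only shots passing all six filters remain
canopy_height = filtered["rh95"]  # RH95 kept as the GEDI canopy height
```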
#### 4.1.2 Landsat 8 images
We obtained Landsat 8 images with cloud coverage less than 20% in 2019 from USGS Landsat 8 Level 2 Collection 2 at GEE, and selected 6 bands (bands 2 to 7) suitable for classification.
Cloud removal procedures were applied to each image, followed by a fusion process utilizing the median value. Subsequently, the calculation of vegetation indices, principal components analysis (PCA), and Tasseled-Cap (T-C) transformation were carried out to obtain 9 vegetation indices, 3 PCA components, and 3 T-C transformation components. The vegetation indices are shown in Table 1.
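The mask-then-median compositing step can be sketched in numpy; the array shapes, the synthetic cloud mask, and the NDVI example are illustrative assumptions, not the actual GEE processing chain:

```python
# Minimal sketch: set cloudy pixels to NaN per acquisition date, take the
# per-pixel median over the year, then derive an index (NDVI shown here).
import numpy as np

t, h, w = 4, 2, 2                        # 4 acquisition dates, 2x2 tile
rng = np.random.default_rng(1)
red = rng.random((t, h, w))
nir = red + 0.3                          # synthetic vegetated signal
cloud = np.zeros((t, h, w), dtype=bool)
cloud[0] = True                          # first date fully cloudy

red = np.where(cloud, np.nan, red)       # cloud removal: mask to NaN
nir = np.where(cloud, np.nan, nir)
red_med = np.nanmedian(red, axis=0)      # fusion by the median value
nir_med = np.nanmedian(nir, axis=0)
ndvi = (nir_med - red_med) / (nir_med + red_med)
```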
The optical bands, indices, and components from Landsat 8 were combined as input data for the RF model.
#### 4.1.3 Sentinel-2 images
The Sentinel-2 images are sourced from GEE's Harmonized Sentinel-2 MSI: MultiSpectral Instrument, Level-2A dataset, from which we selected 12 bands (all except Band 10) and resampled them to a 10-meter resolution. Image filtering, cloud removal, fusion, and index calculation were performed as above to obtain 12 bands, 9 vegetation indices, 3 PCA components, and 3 T-C transformation components, which together form the Sentinel-2 input data for the RF model.
The FCN model requires 12 bands that are consistent with the above as model inputs, and a Scene Classification (SCL) map
\begin{table}
\begin{tabular}{l l l} \hline \hline Features & Formula & Reference \\ \hline NDVI & \(\frac{NIR-Red}{NIR+Red}\) & [PERSON] and [PERSON] (1997) \\ NDWI & \(\frac{Green-NIR}{Green+NIR}\) & [PERSON] (1996) \\ EVI & \(\frac{2.5(NIR-Red)}{NIR+6Red-7.5Blue+1}\) & [PERSON] et al. (2007) \\ SAVI & \(\frac{(1+0.5)(NIR-Red)}{NIR+Red+0.5}\) & [PERSON] et al. (2010) \\ NBR & \(\frac{NIR-SWIR2}{NIR+SWIR2}\) & [PERSON] and Benson (1999) \\ NDMI & \(\frac{NIR-SWIR1}{NIR+SWIR1}\) & [PERSON] (1983) \\ RVI & \(\frac{NIR}{Red}\) & [PERSON] et al. (1995) \\ DVI & \(NIR-Red\) & [PERSON] et al. (1977) \\ NDSI & \(\frac{Green-SWIR1}{Green+SWIR1}\) & [PERSON] and Appel (2004) \\ \hline \hline \end{tabular}
\end{table}
Table 1: The indices used in Landsat 8.
was used for assistance. After resampling all 13 channels to a 10-meter resolution on GEE, the inputs required by the FCN model were obtained through image filtering, cloud removal, and fusion.
### CHM based on RF
#### 4.2.1 Dataset
When constructing a CHM based on RF, we used two datasets combining GEDI with the two remote sensing sources. Taking GEDI and Landsat 8 as an example, we used pyGEDI to convert the preprocessed GEDI footprints into a 25 m resolution canopy height raster, matched each canopy height to the nearest Landsat 8 pixel, stored it in a new channel, and divided the pixels with canopy height into training and validation sets in an 8:2 ratio. The dataset for GEDI and Sentinel-2 was created in the same way.
The canopy height data obtained from GEDI is predominantly clustered around the median value, with limited representation in the lower and higher ranges. This imbalanced dataset could bias the model towards the characteristic features of the majority classes while neglecting the minority classes, degrading overall predictive performance. To alleviate this problem, we resampled the RF training set. The canopy height was divided into 20 classes at intervals of 2 meters; sparsely populated classes were oversampled by cloning samples, while densely populated classes were undersampled by randomly removing samples. After balancing, the training set had approximately 4000 samples in each canopy height interval.
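The balancing step can be sketched in numpy. The synthetic heights and the per-bin target of 50 are illustrative (the paper balances to roughly 4000 samples per 2 m class):

```python
# Sketch: bin heights into 2 m classes, then clone sparse classes and
# randomly subsample dense classes to a common per-bin target.
import numpy as np

rng = np.random.default_rng(0)
heights = rng.normal(15, 4, 1000).clip(0, 39.9)   # synthetic GEDI heights (m)
target = 50                                        # illustrative (paper: ~4000)

bins = (heights // 2).astype(int)                  # 2 m wide height classes
picked = []
for b in np.unique(bins):
    idx = np.flatnonzero(bins == b)
    # replace=True clones samples when the class holds fewer than `target`;
    # replace=False randomly removes samples from dense classes
    picked.append(rng.choice(idx, size=target, replace=len(idx) < target))
balanced = heights[np.concatenate(picked)]          # equal count per class
```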
#### 4.2.2 Train
This study used scikit-learn to construct an RF regression model. Using the canopy height obtained from GEDI as the reference value, the optimal model was obtained by tuning the number of decision trees, maximum tree depth, minimum sample size of leaves, and minimum sample size of branch nodes through three-fold cross-validation. After optimization, S2 RF uses 280 decision trees, a maximum depth of 470, a minimum of 2 samples per leaf, and a minimum of 1 sample per branch node; L8 RF uses 270 decision trees, a maximum depth of 300, a minimum of 2 samples per leaf, and a minimum of 1 sample per branch node.
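The cross-validated tuning can be sketched with scikit-learn's GridSearchCV. The grid and dataset below are deliberately tiny illustrations, not the search space actually used:

```python
# Sketch: three-fold cross-validated hyperparameter search for the RF
# regressor, on synthetic data and a small illustrative grid.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(0)
X = rng.random((120, 5))
y = X @ np.arange(1.0, 6.0) + rng.normal(0, 0.1, 120)  # near-linear target

grid = {
    "n_estimators": [50, 100],      # number of decision trees
    "max_depth": [5, None],         # maximum tree growth depth
    "min_samples_leaf": [1, 2],     # minimum sample size of leaves
}
search = GridSearchCV(RandomForestRegressor(random_state=0), grid, cv=3)
search.fit(X, y)
best = search.best_params_          # hyperparameters of the optimal model
```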
### CHM based on FCN
#### 4.3.1 Dataset
The FCN model studied by [PERSON] et al. (2023) was trained using global Sentinel-2 images in 2020, with a different scope and time compared to this study. Therefore, we used GEDI and Sentinel-2 data to create the 2019 Dabie Mountains dataset and retrained the model.
Unlike the RF dataset, this dataset not only includes 12 bands and canopy heights of Sentinel-2, but also sample weights, SCL, cloud (CLD), latitude and longitude. CLD is used for cloud masks, and the larger the value, the greater the degree to which the pixel is covered by clouds. We performed cloud removal on Sentinel-2 in GEE, so in this study, all CLDs were assigned 0 values.
We centered each pixel with a canopy height and cropped the surrounding 15 \(\times\) 15 image patch to obtain the pixel values of the 12 bands and the other attributes. Then, we used a softened version of inverse sample-frequency weighting to re-weight each sample ([PERSON] et al., 2023). The canopy height was divided into \(K\) bins at intervals of 1 meter, and the number of samples \(N_{k}\) in each bin \(k\) was counted. Formula 1 was used to calculate the weight of samples in each bin:
\[q_{i}=\frac{\sqrt{1/N_{i}}}{\sum_{j=1}^{K}\sqrt{1/N_{j}}} \tag{1}\]
where \(q_{i}\) is the weight of the samples in bin \(i\), \(N_{i}\) is the number of samples in bin \(i\), and \(K\) is the total number of bins.
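Formula (1) translates directly to numpy; the bin counts below are synthetic:

```python
# Softened inverse-frequency weights over 1 m height bins (Formula 1).
# By construction the weights sum to 1 across bins.
import numpy as np

counts = np.array([10, 400, 900, 400, 10], dtype=float)  # N_k per bin
q = np.sqrt(1.0 / counts)
q /= q.sum()        # q_i = sqrt(1/N_i) / sum_j sqrt(1/N_j)
```

Rare height bins receive larger weights, which counteracts the clustering of GEDI heights around the median.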
We saved the above attributes for each sample, divided them into training and validation sets in an 8:2 ratio, and organized them into H5 files consistent with the format of [PERSON] et al. (2023), so the data can be read without any changes to the model's input code.
#### 4.3.2 Train
This model is based on the FCN architecture of [PERSON] et al. (2019), but to accelerate deployment its size was reduced by setting the number of blocks to 8 and the number of filters per block to 256. The input of this model comprises the 12 Sentinel-2 bands and cyclically encoded geographic coordinates, 15 channels in total. Its output is the predicted height and its variance, with the same spatial dimensions as the input.
We used the Dabie Mountains dataset for model training using sparse supervision as in previous studies. Before the input data was passed into the convolutional layer, each channel was normalized to standard normal using the statistics of the training set. The canopy height used for calibration was normalized in the same way. The neural network was trained using the Adam optimizer over 51,000 iterations with a batch size of 1600. The base learning rate for model training was 0.0001, which decreased by a factor of 0.1 after 10,200 and 20,400 iterations, respectively. The model accuracy changes during the training are shown in Figure 3.
### Accuracy evaluation
We used different metrics to assess the accuracy of the CHMs, including Root Mean Square Error (RMSE) and Mean Absolute Error (MAE).
\[RMSE=\sqrt{\frac{1}{N}\sum_{i=1}^{N}(y_{i}-f(x_{i}))^{2}} \tag{2}\]
\[MAE=\frac{1}{N}\sum_{i=1}^{N}|y_{i}-f(x_{i})| \tag{3}\]
Figure 3: Change plot of the model accuracy.
where \(N\) represents the number of samples, \(y_{i}\) represents the ground truth values, and \(f(x_{i})\) represents the predicted values.
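Formulas (2) and (3) in numpy, checked on a toy prediction vector:

```python
# RMSE and MAE exactly as defined in Formulas (2)-(3).
import numpy as np

def rmse(y, f):
    y, f = np.asarray(y), np.asarray(f)
    return float(np.sqrt(np.mean((y - f) ** 2)))

def mae(y, f):
    y, f = np.asarray(y), np.asarray(f)
    return float(np.mean(np.abs(y - f)))

y_true = [10.0, 20.0, 30.0]   # ground truth y_i
y_pred = [12.0, 18.0, 30.0]   # predictions f(x_i)
```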
## 5 Results and analysis
### Results
The estimated forest canopy height maps are shown in Figure 4. The L8 RF predicts a maximum canopy height of 39.5 m, and the dispersed forests in the west have lower canopy heights than the dense forests in the east. The S2 RF predicts higher canopy heights in the central and eastern forests, with a maximum of 32.9 m. The S2 FCN-1 predicts higher canopy heights north and south of the centre, with a maximum of 38.0 m. The S2 FCN-5 predicts a maximum canopy height of 35.6 m, located in the west-central area. The four models vary somewhat in their predictions of tall canopy, but all consistently predict lower canopy heights in the dispersed forests in the east and west.
The results of the accuracy assessment of L8 RF, S2 RF, S2 FCN-1 and S2 FCN-5 using the canopy height obtained by GEDI as the observation are shown in Table 2, and the scatter plots are shown in Figure 5.
The results show that S2 RF has the lowest RMSE and MAE, at 6.931 m and 5.645 m, respectively. The predictions of L8 RF and S2 FCN-5 correlate relatively well with the observations. For reference, S2 FCN-5 achieved an RMSE of 6.0 m in the original global experiments.
### Analysis
#### 5.2.1 Compare different data
In the model based on RF, the inputs of L8 RF and S2 RF come from different data sources, yet the differences in their accuracy metrics are small: the RMSE and MAE of S2 RF are only slightly smaller than those of L8 RF.
With more bands and a higher resolution than Landsat 8, Sentinel-2 might be expected to be superior for canopy height prediction. However, this was not the case in our experiments, and its characteristics did not lead to exceptionally good results. This is probably because the GEDI footprint resolution is closer to that of Landsat 8 and RF is a pixel-based method, so the better match between GEDI and Landsat 8 pixels aids accurate estimation. The similar final accuracy of L8 RF and S2 RF is therefore likely the result of the interaction of several factors, such as the number of bands and the resolution.
Although the accuracy of L8 RF and S2 RF is similar, the canopy height estimated by S2 RF has a higher spatial resolution, which is an advantage of Sentinel-2 for CHM.
#### 5.2.2 Compare different models
S2 RF, S2 FCN-1 and S2 FCN-5 all use Sentinel-2 as the data source.
\begin{table}
\begin{tabular}{c c c} \hline Metrics & Model & Value \\ \hline \multirow{3}{*}{RMSE} & L8 RF & 6.945 \\ & S2 RF & 6.931 \\ & S2 FCN-5 & 7.185 \\ & S2 FCN-1 & 7.345 \\ \hline \multirow{3}{*}{MAE} & L8 RF & 5.650 \\ & S2 RF & 5.645 \\ & S2 FCN-5 & 5.760 \\ \cline{1-1} & S2 FCN-1 & 5.799 \\ \hline \end{tabular}
\end{table}
Table 2: Results of the accuracy assessment.
Figure 4: Forest canopy height map of the Dabie Mountains in 2019 using different CHMs.
Among the three, S2 RF achieves the best RMSE and MAE.
S2 RF is based on classical machine learning and performs pixel-based regression using hand-crafted input features, whereas S2 FCN-1 is based on deep learning and learns new features autonomously. The better accuracy of S2 RF is most likely because the features we input to the RF are more relevant to canopy height, while the FCN did not learn those important features. This suggests that an RF with well-chosen input features can be more accurate for canopy height estimation than a single FCN with no features extracted from the data source.
S2 FCN-1 and S2 FCN-5 use exactly the same input data for prediction and share the same overall framework, yet their results differ. The RMSE of S2 FCN-5 is lower, likely because [PERSON] et al. (2023) estimate canopy height with five FCN models weighted by their estimated aleatoric uncertainties: the S2 FCN-5 result is the fusion of the weighted averages of five models, a procedure shown to improve accuracy. In this study, only one model was retrained, which may cost some accuracy compared to combining multiple models. The poorer accuracy of S2 FCN-5 here compared to the original article, however, stems from its training data coming from global Sentinel-2 imagery in 2020, whereas the inputs used for prediction come from the Dabie Mountains in 2019. This indicates that the data used to train a model and the data used for prediction should come from the same source.
## 6 Discussion
We compared the impact of various remote sensing data and models on CHM, but several aspects can still be improved and explored further.
In model training, the accuracy of S2 FCN-1, for which we generated the dataset and performed the retraining, still requires improvement: despite being trained on the Dabie Mountains dataset, it does not surpass S2 FCN-5. Ensembling more variants of the FCN could help. In addition, all four models compared in this study produced poor estimates for both low and high canopies. These issues require further investigation.
In the comparative analysis, we used the same data source for the construction of S2 RF and S2 FCN-1, but due to the different inputs required for RF and FCN, the Sentinel-2 data had to be preprocessed differently. In this process, we have done as much as possible to make the operations correspond to each other. For example, FCN assigns weights to samples by canopy height, mitigating errors caused by imbalanced canopy heights in the dataset, so we resampled the imbalanced dataset in the preprocessing of constructing S2 RF to reduce adverse effects. Such an approach ensures, as far as possible, that the differences in the final results come from the different models and not from the input data. However, the impact of input on the results cannot be completely eliminated.
In future studies, the effects of other factors on CHM can be explored. For example, more comparisons and analyses can be done by incorporating multi-source remote sensing data and time-series features in CHM. Besides, choosing the best data and model in an experiment should not only focus on the final results, but also consider the purpose and requirements of the study, as well as equipment, time, and other issues. Our work can provide comparison and reference. However, the most appropriate data and model need to be judged by the researcher on a case-by-case basis.
## 7 Conclusion
In this study, the contribution of multi-source remote sensing data and methods to estimating forest canopy height was evaluated and compared using the Dabie Mountains as the study site. In the comparison of remote sensing data, there is little difference in accuracy between the RF-based CHMs using Landsat 8 and Sentinel-2, although the latter provides a higher spatial resolution of the predictions. In the comparison of regression models, RF is more accurate than FCN, demonstrating that an RF with reasonable feature inputs can be more accurate than a complex FCN without fine-tuning or a simple FCN model. Further accuracy improvements could be sought by ensembling more variants of the FCN with domain-specific training.
## References
* [PERSON] et al. (1995) [PERSON], [PERSON], [PERSON], [PERSON] [PERSON], [PERSON], 1995. A review of vegetation indices. _Remote sensing reviews_, 13(1-2), 95-120.
* [PERSON] (2001) [PERSON], 2001. Random forests. _Machine learning_, 45, 5-32.
* [PERSON] and [PERSON] (1997) [PERSON], [PERSON], 1997. On the relation between NDVI, fractional vegetation cover, and leaf area index. _Remote Sensing of Environment_, 62(3), 241-252.
* [PERSON] et al. (2023) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], 2023. Spaceborne LiDAR reveals the effectiveness of European Protected Areas in conserving forest height and vertical structure. _Communications Earth & Environment_, 4(1), 97.
* [PERSON] et al. (2014)
Figure 5: Scatter plots of predictions and GEDI observations.
[PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON] [PERSON], [PERSON], 2021. Modelling lidar-derived estimates of forest attributes over space and time: A review of approaches and future trends. _Remote Sensing of Environment_, 260, 112477.
* [PERSON] et al. (2007) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON] [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], 2007. A review on reflective remote sensing and data assimilation techniques for enhanced agroecosystem modeling. _International Journal of Applied Earth Observation and Geoinformation_, 9(2), 165-193. Advances in airborne technologies and remote sensing of agroecosystems.
* [PERSON] et al. (2020) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], 2020. The Global Ecosystem Dynamics Investigation: High-resolution laser ranging of the Earth's forests and topography. _Science of Remote Sensing_, 1, 100002.
* [PERSON] et al. (2024) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], 2024. Hy-TeC: a hybrid vision transformer model for high-resolution and large-scale mapping of canopy height. _Remote Sensing of Environment_, 302, 113945.
* [PERSON] and [PERSON] (2022) [PERSON], [PERSON], [PERSON], 2022. Mixed tropical forests canopy height mapping from spaceborne LiDAR GEDI and multi-sensor imagery using machine learning models. _Remote Sensing Applications: Society and Environment_, 27, 100817.
* [PERSON] et al. (2010) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], 2010. SWIR-based spectral indices for assessing nitrogen content in potato fields. _International Journal of Remote Sensing_, 31(19), 5127-5143.
* [PERSON] et al. (2022) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], 2022. Estimation of the canopy height model from multispectral satellite imagery with convolutional neural networks. _IEEE Access_, PP, 1-1.
* [PERSON] et al. (2023) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON] et al., 2023. Using multi-platform LiDAR to guide the conservation of the world's largest temperate woodland. _Remote sensing of environment_, 296, 113745.
* [PERSON] and [PERSON] (1999) [PERSON], [PERSON], 1999. Measuring and remote sensing of burn severity. _Proceedings Joint Fire Science Conference and Workshop_, 2, University of Idaho and International Association of Wildland Fire, Moscow, ID, 284.
* [PERSON] and [PERSON] (1983) [PERSON], [PERSON], [PERSON], 1983. The influence of soil salinity, growth form, and leaf moisture on the spectral radiance of Spartina alterniflora canopies. _Photogramm. Eng. Remote Sens._, 49, 77-83.
* [PERSON] et al. (2023) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], 2023. A high-resolution canopy height model of the Earth. _Nature Ecology & Evolution_, 7(11), 1778-1789.
* [PERSON] et al. (2019) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], 2019. Country-wide high-resolution vegetation height mapping with Sentinel-2. _Remote Sensing of Environment_, 233, 111347.
* [PERSON] et al. (2002) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], 2002. Lidar remote sensing of above-ground biomass in three biomes. _Global Ecology and Biogeography_, 11(5), 393-399.
* [PERSON] et al. (2022) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], 2022. Analysis of the influence of different algorithms of GEDI L2A on the accuracy of ground elevation and forest canopy height. _Journal of University of Chinese Academy of Sciences_, 39(4), 502-511.
* [PERSON] et al. (2015) [PERSON], [PERSON], [PERSON], [PERSON], 2015. Fully convolutional networks for semantic segmentation. _Proceedings of the IEEE conference on computer vision and pattern recognition_, 3431-3440.
* [PERSON] (1996) [PERSON], 1996. The use of the Normalized Difference Water Index (NDWI) in the delineation of open water features. _International Journal of Remote Sensing_, 17(7), 1425-1432.
* [PERSON] et al. (2021) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], 2021. Mapping global forest canopy height through integration of GEDI and Landsat data. _Remote Sensing of Environment_, 253, 112165.
* [PERSON] et al. (1977) [PERSON], [PERSON], [PERSON] et al., 1977. Distinguishing vegetation from soil background information. _Photogrammetric engineering and remote sensing_, 43(12), 1541-1552.
* [PERSON] and [PERSON] (2004) [PERSON], [PERSON], [PERSON], [PERSON], 2004. Estimating fractional snow cover from MODIS using the normalized difference snow index. _Remote sensing of environment_, 89(3), 351-360.
* [PERSON] et al. (2024) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON] et al., 2024. Very high resolution canopy height maps from RGB imagery using self-supervised vision transformer and convolutional decoder trained on aerial lidar. _Remote Sensing of Environment_, 300, 113888.
* [PERSON] et al. (2023) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], 2023. The canopy extent and height change in Europe, 2001-2021, quantified using Landsat data archive. _Remote Sensing of Environment_, 298, 113797.
* [PERSON] et al. (2023) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], 2023. Hybrid model for estimating forest canopy heights using fused multimodal spaceborne LiDAR data and optical imagery. _International Journal of Applied Earth Observation and Geoinformation_, 122, 103431.
* [PERSON] et al. (2021) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], Mi, [PERSON], 2021. GLC_FCS30: global land-cover product with fine classification system at 30 m using time-series Landsat imagery. _Earth System Science Data_, 13(6), 2753-2776.
---

Estimating Forest Canopy Height based on GEDI Lidar Data and Multi-source Remote Sensing Images. Yuqi Lei, Yuanjia Wang, Guilong Wang, Chengwen Song, Hui Cao, Wen Xiao. ISPRS, 2024, CC-BY. https://doi.org/10.5194/isprs-archives-xlviii-1-2024-297-2024

---
# AI in Support to Water Quality Monitoring
[PERSON]
Corresponding author
[PERSON]
2 Department of Civil and Environmental Engineering, Politecnico di Milano - Lecco Campus, Via Gaetano previati 1/c, 23900 Lecco, Italy - (carlondrea.biraghi, maryam.loftian, daniela.carrion, [EMAIL_ADDRESS]
[PERSON]
1 Department of Civil and Environmental Engineering, Politecnico di Milano - Lecco Campus, Via Gaetano previati 1/c, 23900 Lecco, Italy - (carlondrea.biraghi, maryam.loftian, daniela.carrion, [EMAIL_ADDRESS]
[PERSON]
1 Department of Civil and Environmental Engineering, Politecnico di Milano - Lecco Campus, Via Gaetano previati 1/c, 23900 Lecco, Italy - (carlondrea.biraghi, maryam.loftian, daniela.carrion, [EMAIL_ADDRESS]
###### Abstract
This study explores the possibility of using Artificial Intelligence (AI) as a means to support water monitoring. More precisely, it addresses the issue of the quality and reliability of Citizen Science data. The paper builds on the tools and data of the SIMILE (Informative System for the Integrated Monitoring of Insubric Lakes and their Ecosystems) project to develop an open pre-filtering system for Volunteer Geographic Information (VGI) on lake water monitoring at the global scale. The goal is to automatically determine the presence of harmful phenomena (algae and foams) in the images uploaded by citizen scientists, reducing the time required for a manual check of the contributions. The task is challenging because of the heterogeneity of the data, which consist of geotagged pictures taken without specific instructions. For this purpose, different tools and deep learning techniques have been tested: the Clarifai platform, a Convolutional Neural Network (CNN), and an object detection algorithm, the faster Region-based CNN (R-CNN). The original dataset, composed of the observations from the SIMILE - Lake Monitoring application, has been integrated with the results of both keyword and image searches on web engines (Google, Bing, etc.) and with crawled Flickr data. The performance of the different algorithms in detecting and correctly labelling the phenomena is presented, together with possible strategies for improving them in the future.
Footnote †: Corresponding author
## 1 Introduction
### SIMILE project and its context
Waterbodies play a key role in the mitigation of the impact of climate change. They also represent an essential resource available to billions of people for multiple uses. The importance of waterbodies is further highlighted by their inclusion in the sustainable development agenda (SDGs 6 and 14; UN.org, 2019). Lake ecosystems are highly exposed to the consequences of global warming and the impact of human activities ([PERSON] et al., 2009; [PERSON], 2009). The quality of lake water needs to be preserved, and recent scientific and technological development could provide significant support to the cause. SIMILE (Informative System for the Integrated Monitoring of Insubric Lakes and their Ecosystems) is a project that involves academia (Politecnico di Milano - Lecco Campus; Fondazione Politecnico; SUPSI - University of Applied Sciences and Arts of Southern Switzerland), research bodies (Water Research Institute - National Research Council) and institutions (Lombardy Region; Ticino Canton in Switzerland), which are cooperating partners in the preservation of water quality in the Lugano, Maggiore and Como lakes. The main strategy of this cooperation aims at integrating the existing monitoring protocols with data coming from recently developed geospatial tools and techniques, such as the processing of satellite images ([PERSON] et al., 2020), of high-frequency _in situ_ sensors and also Citizen Science ([PERSON] et al., 2019).
SIMILE - Lake Monitoring ([PERSON] et al., 2020; [PERSON] et al., 2020) is a recently released cross-platform, open-source mobile application, which has been developed to support the activities of the project related to Citizen Science (CS). The mobile application enables private citizens to share their observations of the lake environment through geo-referenced images of algae, foams and litter, and the measurements of water parameters (transparency, temperature, pH, etc.). In addition, it promotes water-related events, and it features a glossary and a set of useful links that help improve the user's knowledge and awareness of the lake ecosystem. Registration to the app is optional and most of its functionalities can be used without registering. This means that everyone - including amateur and even malevolent users - can contribute freely to the project, which clearly implies the possible occurrence of irrelevant or inappropriate content.
Even though the mobile application is designed to facilitate the upload of contributions by users, it is not the optimal tool for their subsequent management. In fact, it only contains an agile map view where all the observations and measurements are displayed with the same marker, and one needs to open each observation in order to explore its content. A web application (https://simile.como.polimi.it/SimileWebAdministrator/faces/index.xhtml) has been developed to allow the partners of the project, and ultimately also environmental agencies, to edit and better analyse the data uploaded through the app. Moreover, the web application offers the possibility to filter observations according to time, position and all the other attributes that the mobile application features. This is a powerful tool - complementary to the mobile application - that enriches the set of data available for environmental agencies to study. However, it may also be regarded as a burden and an additional task for the employees who will actually elaborate the data.
At the end of the project, the integration of new technologies (including the two apps) with the existing routines will be evaluated by the institutional partners. For this reason, the added value on the scientific level must be balanced with the additional workload weighing on the employees of the environmental agencies. As a consequence, the present study explores the development of a system for the pre-filtering of data which exploits Artificial Intelligence (AI) with the purpose of reducing the need of a manual check of the observations performed by the employees, thus simplifying the general workflow.
### Integration of AI and CS
Until recently, Artificial Intelligence (AI) and Citizen Science (CS) were considered distinctly and were used independently ([PERSON] et al., 2020). Their integration, however, can be helpful in addressing some of the challenges existing in both areas. Indeed, CS can be the source that provides large amounts of labelled data in order to feed and train Machine Learning (ML) algorithms, whereas ML can support CS in automating the validation of data or in sustaining user participation by giving automatic feedback to the volunteers. Even though the inclusion of AI in CS projects is expanding to various areas such as astronomy ([PERSON] et al., 2014) and neuroscience ([PERSON] et al., 2019), the main focus of their cooperative exploitation has been on biodiversity studies ([PERSON] et al., 2020; [PERSON] et al., 2019; [PERSON] et al., 2020; [PERSON] et al., 2019).
This article aims at integrating ML in the SIMILE project in order to introduce the automatic identification and validation of observations regarding water quality. More precisely, the presence of foams and algae is considered, as it is the most relevant and reported issue in the Insubric lakes. The availability of studies on the use of AI to identify phenomena linked to water quality is limited, and the majority of them - to the best of our knowledge - is based on detecting litter on the water surface ([PERSON] et al., 2021; [PERSON] et al., 2020). AI is also used to detect harmful algal blooms (HAB) by analysing satellite images ([PERSON] et al., 2020). The final goal of the present study is to implement a tool for the detection of algae and foams in the images uploaded by the users of the SIMILE - Lake Monitoring mobile application. The findings of this research have been compared to those of a similar study ([PERSON] et al., 2018) which deals with the detection of HAB both from aerial and from ground surveys.
The following section details the steps required to develop this tool, starting from the definition of the dataset needed. It then assesses the preparation of the data that feed the alternative ML algorithms presented and, in its final part, analyses their performance.
## 2 Method
### Dataset acquisition
The initial dataset consisted of the images uploaded by the users of the SIMILE - Lake Monitoring mobile application, integrated with archive images received from the partners of the project. This dataset featured 35 images of algae, 32 of foams and 14 of clean water. Even though the images were very precise and context specific, the dataset was limited compared to the one needed to train a ML model. In order to enlarge it, a web search has been carried out by exploiting the search engines Google and Bing and by using as keywords the two phenomena (algae; foams), their synonyms (algal bloom and scum; froth and spume) and the corresponding Italian words (alghe e fioriture algali; schiume). This manual research produced similar results both on Google and Bing, and it helped collect nearly one hundred valid images for each phenomenon. As a means to acquire contents in other languages or unlabelled ones, a complementary search by image has been performed using 10 input images for each of the two phenomena on 5 search engines (Google, Bing, TinEye, StackPhoto, Shutterstock). Table 1 shows the number of valid images found using each input image on the search engines detailed above.
Thanks to this additional, manual search, 232 valid images of algae (73% from Google) and 49 valid images of foams (45% from Bing) have been collected. In very limited cases, images of algae have also been found by using images of foams as an input (5 images), and vice versa (2 images). A further search of the Flickr dataset has been performed, first by downloading all the images containing the hashtag "algae" or "foams", then by manually checking all the results. Through this method, 82 valid images of algae have been gathered out of a total of 1012 downloaded images, and 101 images of foams out of 4094 downloaded images.
Some recurring elements have been noticed among the images that portrayed neither foams nor algae: from now on these will be labelled "false positives". These elements generally trick the search engine, as in some cases the images may look like the phenomena our study investigates. For instance, false positives for algae were landscapes with grass, water lilies, clean water surrounded by trees and vegetation, paintings and red metal powder. False positives for foams, instead, were cloudy satellite images, clouds in general, ice floating on water, snowflakes, glasses with drops of water, stones, waves and sun reflected on water. As the results will show later, the occurrence of certain false positives in the search by image anticipates the behaviour of the algorithm: images containing the above-mentioned elements have a high probability of being detected as false positives, especially as far as foams are concerned.
On the one hand, this process has considerably increased the number of images at our disposal. On the other, it has introduced elements of disturbance. As a matter of fact, algae and foams are unstable objects without fixed dimensions, whose shape, extension, colour and appearance (i.e. compact, dispersed, linear, scattered) may vary. They can be found in the sea, in lakes and smaller basins, and in rivers. Pictures can be taken from different observation points and with various inclinations of the camera. The diversity found in this larger dataset of images could not have been found by looking at the SIMILE project context alone, which takes into consideration only the Insubric Lakes.
Table 1: Number of valid images found using each input image (ID) on the five search engines (Google, Bing, TinEye, StackPhoto, Shutterstock); the individual counts could not be recovered from the source.

Considering the global interest for these phenomena, including in the dataset a significant variety of their manifestations is clearly an added value for the pre-filtering system, because it helps increase its usability beyond the framework of the SIMILE project.
### Data preparation
After downloading the images, several pre-processing operations needed to be performed in order to elaborate the final dataset. The dataset was used to train two different algorithms: a Convolutional Neural Network (CNN) and an object-detection algorithm called faster Region-based CNN (R-CNN). It is important to note that the two algorithms - which will be detailed in the following section - partially required different pre-processing operations, as explained below.
**Remove duplicate images**: Considering that the images were obtained from multiple sources, some of them appeared twice or even more times, but were labelled under different names. In order to remove the duplicates, a script was implemented to obtain the pixel values of each image. Using a hash function called MD5 (https://en.wikipedia.org/wiki/MD5), a hexadecimal code (a 32-character alphanumeric string) was generated based on the image pixel values. The codes generated were compared among the images: those with an identical value were considered duplicates, and the exceeding copies were removed from the dataset.
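The deduplication step above can be sketched as follows. For brevity this hashes the raw file bytes rather than decoded pixel values, and the file names and byte strings are invented toy values, not part of the SIMILE dataset:

```python
import hashlib

def deduplicate(files):
    """files: iterable of (name, content_bytes); returns the names to keep.

    The MD5 digest of the content identifies duplicates regardless of
    the file name; only the first file seen per digest is kept.
    """
    seen, keep = set(), []
    for name, data in files:
        digest = hashlib.md5(data).hexdigest()
        if digest not in seen:
            seen.add(digest)
            keep.append(name)
    return keep

files = [
    ("algae_01.jpg", b"\x01\x02\x03"),
    ("algae_copy.jpg", b"\x01\x02\x03"),  # same content, different name
    ("foam_01.jpg", b"\x04\x05\x06"),
]
print(deduplicate(files))  # ['algae_01.jpg', 'foam_01.jpg']
```

Hashing decoded pixel values instead (as done in the study) additionally catches the same image re-encoded under different compression settings.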
**Label the images**: When using the CNN algorithm, the images could be put in a folder labelled with the name of the phenomenon in order to generate the labels from the structure of the directory. This has been done by using the flow_from_directory function in Keras (https://keras.io/api/preprocessing/image/). In order to label the images for the object-detection algorithm, the phenomenon needed to be identified within each image by drawing one or more bounding boxes around it, and assigning a label corresponding to each phenomenon (algae or foam). In order to perform this action, a tool for image annotation written in Python, called LabelImg (https://github.com/tzutalin/labelImg), was used. The output consisted of the image and an XML (Extensible Markup Language) file with the same name, containing the coordinates of the bounding boxes and the label name. The labelled images were then uploaded on a platform called Roboflow (https://roboflow.com/), which helped perform the following processing steps, needed only for the object-detection algorithm.
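LabelImg saves annotations in the Pascal VOC XML format. A minimal sketch of reading such a file back with the standard library; the XML snippet is a hypothetical example, not taken from the dataset:

```python
import xml.etree.ElementTree as ET

# Hypothetical Pascal VOC annotation with one labelled bounding box.
XML = """<annotation>
  <object>
    <name>algae</name>
    <bndbox><xmin>10</xmin><ymin>20</ymin><xmax>110</xmax><ymax>90</ymax></bndbox>
  </object>
</annotation>"""

def parse_boxes(xml_text):
    """Return a list of (label, (xmin, ymin, xmax, ymax)) per annotated object."""
    root = ET.fromstring(xml_text)
    boxes = []
    for obj in root.iter("object"):
        coords = obj.find("bndbox")
        box = tuple(int(coords.findtext(t)) for t in ("xmin", "ymin", "xmax", "ymax"))
        boxes.append((obj.findtext("name"), box))
    return boxes

print(parse_boxes(XML))  # [('algae', (10, 20, 110, 90))]
```

One XML file can contain several `object` elements, which is how images showing both algae and foams are annotated.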
**Augment the images**: Once the images along with their XML files were added to Roboflow, it was possible to verify whether all of them were labelled, and to control and modify the location of the bounding boxes in case they fell outside the border of the image. In order to increase the amount of input data, two augmentation steps were applied to all images: a horizontal flip and an image rotation of ±15° produced two augmented versions of each image. Figure 1 shows a sample of the dataset following data augmentation.
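Of the two augmentation steps, the horizontal flip is simple enough to sketch in pure Python (the ±15° rotation is best left to a tool such as Roboflow). Note that flipping the image also requires flipping the x-coordinates of its bounding boxes; the pixel matrix and box values below are toy data:

```python
def hflip(image):
    """Horizontally flip an image given as rows of pixel values."""
    return [row[::-1] for row in image]

def hflip_box(box, width):
    """Mirror a Pascal VOC box (xmin, ymin, xmax, ymax) across the vertical axis."""
    xmin, ymin, xmax, ymax = box
    return (width - xmax, ymin, width - xmin, ymax)

print(hflip([[1, 2, 3],
             [4, 5, 6]]))                 # [[3, 2, 1], [6, 5, 4]]
print(hflip_box((10, 20, 110, 90), 200))  # (90, 20, 190, 90)
```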
**Export to the required output format**: Once the dataset was ready, it could be exported to the format needed for the model. Roboflow provides the opportunity to choose the output format based on the algorithm one wishes to train. Then one can either download the dataset on a local machine or get the URL of the data. The format required in this study was TFRecord, a TensorFlow format that stores a sequence of binary records (https://www.tensorflow.org/tutorials/load_data/tfrecord).
### Tools and approaches
In order to perform an automatic identification of water quality phenomena (here algae and foams), three approaches were considered, as explained below.
**Clarifai**: Clarifai (https://www.clarifai.com/) is an AI platform for computer vision, natural language processing and automatic speech recognition. It offers pre-trained models as well as the opportunity of training a model on a custom dataset. The services can be used through the Clarifai API, which has a high-speed response and can be integrated in AI-based mobile or web applications. Our first attempt consisted in building a custom model using the initial dataset of about 60 images across the foam and algae classes (30 images for algae and 27 for foams). Because of the limited number of free API calls (1000), the black box regarding the structure and parameters of the model, and the peculiarity of the case studied, it was decided to construct and train a new custom CNN model. It is important to note that the pre-trained models available on Clarifai (https://www.clarifai.com/developers/pre-trained-models) are very efficient in the detection of objects in images: they include a general model detecting a variety of objects, as well as more specific models such as face recognition and the detection of humans and cars, to name a few. Even though this initial attempt with Clarifai was limited by the small dataset and the other constraints above, it showed promising results that encouraged us to continue with a more detailed approach.
**Convolutional Neural Network (CNN):** CNN is a deep learning algorithm which was originally inspired by studies of the brain's visual cortex and is now widely used to identify patterns in image processing and sound recognition ([PERSON] et al., 2018). Thanks to the availability of large amounts of data, as well as to the recent advances made in computing power, CNN models have achieved a high level of performance in the identification of patterns in complicated visual tasks, sometimes superior to the abilities of a human being ([PERSON] et al., 2017). In a CNN model, a kernel (filter) functioning as a moving window is
Figure 1: Sample of dataset images after data augmentation
applied on the image pixels in order to identify and extract the various features in the image (e.g. edges). The convolutional layers (one or more) are responsible for capturing different features with varying levels of detail. By analysing a variety of layers, the algorithm can ultimately achieve a full understanding of the image taken into consideration. In order to extract dominant features and to downsample the images, a further pooling layer is added: similar to the convolutional layer, its kernel moves within the image and returns a new set of pixel values depending on the type of kernel. There are two types of pooling - maximum pooling and average pooling - that return respectively the maximum and the average values found by the kernel ([PERSON] and [PERSON], 2015). Finally, the fully connected layer performs the duty of the traditional Artificial Neural Network (ANN, with input layer, hidden layers and output layer) and conducts the final classification and scoring. Figure 2 illustrates a simple architecture of a CNN model.
In line with the purposes of the present study, a CNN model composed of three hidden convolutional layers and a fully connected layer has been trained using two-dimensional max pooling. The fully connected layer featured the ReLU activation (https://en.wikipedia.org/wiki/Rectifier_(neural_networks)). A summary of the specific architecture of the model used is shown in Figure 3, which illustrates that each of the three convolutional layers is followed by a pooling layer. The initial input image was resized to 150x150 pixels and, as Figure 3 clearly shows, its size was reduced gradually as a consequence both of the convolution and of the pooling, reaching a final size of 17x17 pixels. The output of the final pooling layer is flattened in order to produce a unidimensional vector of all the values that feed the fully connected layer (see ANN, the classification part, as shown in Figure 2). The fully connected layer has 512 neurons, and since we are doing a binary classification (1: algae, 0: foam), the size of the output is 1.
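The size reduction described above (150x150 down to 17x17) follows from simple arithmetic, assuming each block applies a valid 3x3 convolution (size minus 2) followed by 2x2 max-pooling (integer halving). A quick check of the three blocks:

```python
def feature_map_size(size, n_blocks, kernel=3, pool=2):
    """Spatial size after n blocks of (valid KxK conv -> PxP max-pool)."""
    for _ in range(n_blocks):
        size = (size - kernel + 1) // pool
    return size

sizes = [150]
for _ in range(3):
    sizes.append(feature_map_size(sizes[-1], 1))
print(sizes)  # [150, 74, 36, 17]
```

The intermediate sizes (74, 36, 17) match the gradual reduction visible in Figure 3.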
The first training of the model has been done by using uniquely the observations from the SIMILE mobile application, whereas the second training has exploited the extended dataset (755 images equally distributed between the two classes). 90% of the data was used for training purposes and 10% to validate the performance of the model, taking into consideration validation accuracy and validation loss. TensorFlow (https://www.tensorflow.org/) and Keras (https://keras.io) were used in order to build, train and evaluate the model, whereas the computation was done on the Google Colaboratory platform (https://colab.research.google.com/), which allows free GPU access. Because of the heterogeneity of the phenomena analysed - variation in texture and colour, presence of false positives - it may be complex for the model to understand which type of phenomenon it is observing, especially if the dataset of images is limited. Moreover, the CNN only allowed predicting whether a particular image represented algae or foam. The model did not give any information on the location of the phenomenon within the image, nor on the possible presence of both phenomena in a single image. As a consequence, it was decided to train an object detection model which uses CNN to predict both the phenomenon and its precise location in an image.
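The 90/10 train/validation split mentioned above can be sketched as a seeded shuffle over item identifiers; the function name is ours, not part of the Keras or TensorFlow API:

```python
import random

def train_val_split(items, val_frac=0.1, seed=42):
    """Shuffle items reproducibly and hold out val_frac of them for validation."""
    items = list(items)
    random.Random(seed).shuffle(items)
    n_val = int(len(items) * val_frac)
    return items[n_val:], items[:n_val]

# 755 images, as in the extended dataset described above.
train, val = train_val_split(range(755))
print(len(train), len(val))  # 680 75
```

Seeding the shuffle keeps the split reproducible across runs, so validation accuracy and loss remain comparable between trainings.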
**Object detection**: While image classification models work on the probability of an object being present in an image, object detection algorithms predict the presence of objects as well as their location in an image (e.g. as bounding boxes). Object detection algorithms typically work by proposing the potential regions in an image where an object may exist, then by classifying the regions in connection with the object(s) of interest. One of the well-known object detection algorithms is R-CNN (Region-based CNN), developed by [PERSON] et al. (2014). This approach uses an algorithm for image segmentation called "selective search" in order to determine the possible regions where an object may be located (approximately 2000 region proposals per image). The regions are then passed to a CNN model which generates a feature vector from each region proposal. Finally, a support vector machine (SVM) model performs a classification of the objects found and identifies the location of the objects in the image. Training this kind of model is computationally expensive and the test data take a long time to be predicted (approximately 49 seconds per image). For this reason, an enhancement of the algorithm - called fast R-CNN ([PERSON], 2015) - was proposed in 2015. Fast R-CNN uses the same approach to determine the regions where an object may be located, but all region proposals are passed to the CNN as one single input, rather than being sent one by one as in R-CNN. The performance of the model is improved in that the training and detection time are significantly reduced, to about 2 seconds per image. Still, fast R-CNN proves to be computationally expensive as it uses a traditional image segmentation algorithm to propose regions (i.e. the selective search algorithm). As a consequence, yet another version of R-CNN was proposed - faster R-CNN ([PERSON] et al., 2015) - which uses convolutional networks to propose regions, rather than an external region proposal algorithm.
Faster R-CNN requires both less training time and less time to detect test images (0.2 seconds per image), which makes it suitable for integration in applications of real-time object detection. For the purposes of the present study, a customised faster R-CNN algorithm was trained using the TensorFlow object detection API, which offers pre-trained weights. The API offers models pre-trained on the COCO (common objects in context) dataset which can be used and adapted to customised data. The "faster_rcnn_inception_v2" pre-trained model was used, and a training using 1000, 10'000, and 20'000 steps was performed, comparing computation time and model performance for each run. In order to evaluate the performance of the model, average recall and mean average precision (mAP) metrics were considered. Here follows a textual and visual (Figure 4) summary of the steps performed to train the custom faster R-CNN model:
Figure 3: Architecture of the implemented CNN model
Figure 2: Representation of the Architecture of a simple CNN ([PERSON], 2020)
1. perform image annotation by drawing bounding boxes around the object of interest in the image and label them with _algae or foams_ tag;
2. apply data augmentation (e.g. image rotation, horizontal or vertical flip, etc.) to address the lack of input images;
3. finalise the dataset and extract the required data format for the model (in our case: TFRecords format);
4. select and configure the pre-trained model;
5. train the model using the updated dataset and monitor loss, mAP and recall while training;
6. evaluate model performance and adjust its parameters;
7. observe the results of predicted bounding boxes on the test dataset.
## 3 Results and Discussion
### Model performance
**CNN**: The results obtained by using CNN show that the model has learned relatively well on the training dataset. However, on the validation dataset we can observe some noise in the accuracy, with values fluctuating between over 90% and 50% (Figure 5). Although after epoch 120 we observe more stability in the validation accuracy/loss, due to the observed gap between training and validation accuracy/loss it can be concluded that the model suffers from high variance and is thus overfitting. This behaviour could be due to the relatively small amount of data used to train the CNN or to the heterogeneity of the dataset. However, in order to have a better understanding of the cause of this fluctuation, changes in different parameters need to be tested in the future. A solution for the limited size of the dataset could be found in transfer learning, that is, in using the weights of models pre-trained on large datasets. The present study has not tested this solution, so it remains a hint for future investigations.
Figure 4: Training steps of R-CNN model
Figure 5: CNN performance. x axis: number of epochs, y axis: Top: Accuracy; Bottom: Loss.
**Object detection**: the three runs were compared with regard to model performance in order to consider the ability of the model to classify the object and to define its location correctly. A possible index for model evaluation is a measurement of the overlap between the predicted bounding box and the ground truth bounding box, which is called IoU (Intersection over Union, see Figure 6).
Therefore, IoU can be set as a threshold in order to decide whether a prediction is a true positive or a false positive. For instance, with the IoU threshold set to 0.5, a prediction whose IoU is above this threshold is considered a true positive; if below, the detection is considered imprecise. In this study the IoU threshold value was set to 0.5, then the average recall and mAP were compared.
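The IoU criterion described above can be sketched directly, with boxes as (xmin, ymin, xmax, ymax) tuples; the sample boxes are illustrative, not taken from the study:

```python
def iou(a, b):
    """Intersection over Union of two axis-aligned boxes (xmin, ymin, xmax, ymax)."""
    ix = max(0, min(a[2], b[2]) - max(a[0], b[0]))  # intersection width
    iy = max(0, min(a[3], b[3]) - max(a[1], b[1]))  # intersection height
    inter = ix * iy
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

gt, pred = (0, 0, 10, 10), (5, 0, 15, 10)
score = iou(gt, pred)
print(round(score, 3))  # 0.333
print(score >= 0.5)     # False: not counted as a true positive at threshold 0.5
```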
The results shown in Table 2 highlight that with 20'000 steps the model performs better, with a higher mAP. Even though the results of the recall performed with 1'000 steps are closer to those performed with 20'000 steps - and also higher than the recall with 10'000 steps - the mAP increased as we trained the model with more steps. That is to say, the model learns early to predict the phenomena, but the location of the detected phenomena improves by increasing the number of steps. The results can be compared with the few existing studies on object detection of HAB ([PERSON] and [PERSON], 2017; [PERSON] et al., 2018). Even though the recall and mAP for faster R-CNN are lower than what those studies achieved, considering that the objects of identification were two phenomena, and also that the dataset was heterogeneous and small, the results are still promising for the detection of more than one phenomenon, provided that the model is trained on a larger dataset. The results of predicted classes and bounding boxes on some of the test set images (using the model trained with 20'000 steps) are presented in support of the discussion of model performance (Figures 7-11). The model performs better in the detection of algae than of foams. Several cases of false positives have been detected, in which clouds, stones or light reflected on water were identified as foams. However, the positive aspect is that no false negative was detected for foams, and all the false positive bounding boxes featured in images that also included actual foams (Figure 8) or algae (Figure 9). It is interesting to note that no cases of false positives for algae were detected. However, there were cases where algae were not detected (3 false negatives), especially in those images where the phenomenon was particularly extended and covered the whole image. In those cases, a contrast with clean water or other elements was not available (Figure 10).
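Recall and precision, the building blocks of the metrics discussed above (mAP is built on precision), reduce to simple counts of true positives (TP), false positives (FP) and false negatives (FN). The counts below are purely hypothetical, not the study's results:

```python
def precision(tp, fp):
    """Fraction of detections that are correct."""
    return tp / (tp + fp) if (tp + fp) else 0.0

def recall(tp, fn):
    """Fraction of actual instances that are detected."""
    return tp / (tp + fn) if (tp + fn) else 0.0

# Hypothetical class detected wherever it appears (no FN) but with several
# FP, as observed for foams: recall is perfect while precision suffers.
print(recall(tp=12, fn=0))               # 1.0
print(round(precision(tp=12, fp=6), 2))  # 0.67
```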
The testing of the model with images of clean water showed that the presence of algae was never predicted (no false positives). On the contrary, as far as foams are concerned, most of the images containing the reflection of light on clean water were predicted as false positives, as expected (Figure 11). In conclusion, the model works well with algae, but it needs more investigation regarding foams: it correctly predicts foams when foam is in the image, but it also predicts many false positives.
## 4 Conclusions
Advances in ML and particularly in computer vision have resulted in many studies that integrate such techniques in their research. CS is among the areas whose integration with ML techniques has received attention in recent years. This article discussed how to integrate ML in a CS application focused on detecting harmful phenomena (i.e. algae and foams) that affect the quality of lake water. Various algorithms have been tested in order to address the aim of an automatic identification of algae and foams in images collected by citizen scientists. Among the various approaches tested, the faster R-CNN object detection algorithm proved to be the most suitable, with a performance close to what other, similar studies have achieved.
However, this research presents some aspects that would benefit from further investigation. For example, one could try the proposed approach using a larger dataset, as the one used here has proved to be too small when considering the complexity and peculiarity of the issue that this study investigated. A larger dataset could also allow for the introduction of more specific tags in order to better distinguish the phenomena analysed and their various manifestations (e.g., compact, linear or scattered algae or foams), hopefully reducing the number of false positives.
In addition to this, among the available algorithms for object detection, only faster R-CNN was trained. However, other algorithms such as SSD (Single Shot Detector), YOLO (You Only Look Once) or R-FCN (Region-based Fully Convolutional Networks) could be trained in order to have a clearer understanding of which performs best in the detection of harmful phenomena in lake water. Finally, another important aspect that can be explored is that of performing predictions considering the location of an image, and not only its content, in such a way that if a model predicts algae or foam in an area with a low probability of occurrence for such phenomena, this will negatively affect the reliability of the prediction.
At the current stage of development this tool may support environmental agencies, but it still presents some aspects that need to be improved, so it cannot fully substitute a manual check when total accuracy in the identification of these phenomena is required.
## Acknowledgements
The research described in this paper is part of SIMILE project (ID: 523544), which has been funded with the support from the European Commission within the Interreg Italy-Switzerland 2014-2021 programme.
## References
* [PERSON] et al. (2009). Lakes as sentinels of climate change. Limnology and Oceanography. https://doi.org/10.4319/lo.2009.54.6_part_2.228
* [PERSON] et al. (2018). Understanding of a convolutional neural network. Proceedings of 2017 International Conference on Engineering and Technology, ICET 2017, 1-6. https://doi.org/10.1109/ICEngTechnol.2017.8308186
* [PERSON] (2020). Binary Image classifier CNN using TensorFlow. https://medium.com/techiepedia/binary-image-classifier-cnn-using-tensorflow-a3T5dc6746697
* ISPRS Archives, 43(B4), 237-244. https://doi.org/10.5194/isprs-archives-XLIII-B4-2020-237-2020
* International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, XLII-4/W20, 3-10. https://doi.org/10.5194/isprs-archives-XLII-4-W20-3-2019
* [PERSON] et al. (2021). Automatic detection and quantification of floating marine macro-litter in aerial images: Introducing a novel deep learning approach connected to a web application in R. Environmental Pollution, 273. https://doi.org/10.1016/j.envpol.2021.116490
* [PERSON] (2015). Fast R-CNN. Retrieved April 17, 2021, from https://github.com/rbgirshick/
* [PERSON] et al. (2014). Rich feature hierarchies for accurate object detection and semantic segmentation: Tech report (v5). Retrieved April 17, 2021, from http://www.cs.berkeley.edu/~rbg/rcnn
* [PERSON] et al. (2020). Innovations in Camera Trapping Technology and Approaches: The Integration of Citizen Science and Artificial Intelligence. Animals, 10(1). https://doi.org/10.3390/ani10010132
* [PERSON] et al. (2017). Recent Advances in Convolutional Neural Networks.
* [PERSON] et al. (2020). HABNet: Machine Learning, Remote Sensing Based Detection of Harmful Algal Blooms. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing. https://doi.org/10.1109/JSTARS.2020.3001445
* [PERSON] & [PERSON] (2017). A Deep Learning Paradigm for Detection of Harmful Algal Blooms.
* [PERSON] et al. (2019). Auto-filtering validation in citizen science biodiversity monitoring: a case study. Proceedings of the ICA, 2, 1-5. https://doi.org/10.5194/ica-proc-2-78-2019

Figure 11: False positives for foams on clean water
* [PERSON] et al. (2020) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], & [PERSON] (2020). Satellite Monitoring system of Subalpine lakes with open source software: the case of SIMILE project. Baltic J. Modern Computing, Vol. xx, N, 1-3.
* [PERSON] et al. (2020) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], & [PERSON] (2020). Artificial Intelligence Meets Citizen Science to Supercharge Ecological Monitoring. Patterns, 1(7), 100109. [[https://doi.org/10.1016j.patter.2020.10019](https://doi.org/10.1016j.patter.2020.10019)]([https://doi.org/10.1016j.patter.2020.10019](https://doi.org/10.1016j.patter.2020.10019))
* International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences.
* [PERSON] et al. (2015) [PERSON], [PERSON], [PERSON], & [PERSON] (2015). Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks. [[http://image-net.org/challenges/LSVRC/2015/results](http://image-net.org/challenges/LSVRC/2015/results)]([http://image-net.org/challenges/LSVRC/2015/results](http://image-net.org/challenges/LSVRC/2015/results))
* [PERSON] et al. (2018) [PERSON], [PERSON], [PERSON], & [PERSON] (2018). Algae Detection Using Computer Vision and Deep Learning.
* [PERSON] et al. (2020) [PERSON], [PERSON], & [PERSON] (2020). Thinking like a naturalist: Enhancing computer vision of citizen science images by harnessing contextual data. Methods in Ecology and Evolution, 11(2), 303-315. [[https://doi.org/https://doi.org/10.1111/2041-210X.13335](https://doi.org/https://doi.org/10.1111/2041-210X.13335)]([https://doi.org/https://doi.org/10.1111/2041-210X.13335](https://doi.org/https://doi.org/10.1111/2041-210X.13335))
* [PERSON] et al. (2019) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], & [PERSON] (2019). A comparison of deep learning and citizen science techniques for counting wildlife in aerial survey images. Methods in Ecology and Evolution, 10(6), 779-787. [[https://doi.org/https://doi.org/10.1111/2041-210X.13165](https://doi.org/https://doi.org/10.1111/2041-210X.13165)]([https://doi.org/https://doi.org/10.1111/2041-210X.13165](https://doi.org/https://doi.org/10.1111/2041-210X.13165))
* United Nations Sustainable Development. [[https://www.un.org/sustainabledevelopment/sustainable-development-goals/](https://www.un.org/sustainabledevelopment/sustainable-development-goals/)]([https://www.un.org/sustainabledevelopment/sustainable-development-goals/](https://www.un.org/sustainabledevelopment/sustainable-development-goals/))
* [PERSON] (2009) [PERSON] (2009). Effects of Climate Change on Lakes. In Encyclopedia of Inland Waters. [[https://doi.org/10.1016/B978-012370626-3.00233-7](https://doi.org/10.1016/B978-012370626-3.00233-7)]([https://doi.org/10.1016/B978-012370626-3.00233-7](https://doi.org/10.1016/B978-012370626-3.00233-7))
* [PERSON] et al. (2020) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], & [PERSON] (2020). Machine learning for aquatic plastic litter detection, classification and quantification (APLASTIC-Q). Environ. Res. Lett, 15, 114042. [[https://doi.org/10.1088/1748-9326/abbd01](https://doi.org/10.1088/1748-9326/abbd01)]([https://doi.org/10.1088/1748-9326/abbd01](https://doi.org/10.1088/1748-9326/abbd01))
|
isprs
|
AI IN SUPPORT TO WATER QUALITY MONITORING
|
C. A. Biraghi, M. Lotfian, D. Carrion, M. A. Brovelli
|
https://doi.org/10.5194/isprs-archives-xliii-b4-2021-167-2021
| 2,021
|
CC-BY
|
isprs/c495f23c_5c71_44b3_be3f_9f36a42a1391.md
|
# Developing Urban Metrics to Describe the Morphology
of Urban Areas at Block Level
[PERSON]. Canters
Cartography and GIS Research Group, Department of Geography, Vrije Universiteit Brussel,
Pleinlaan 2, 1050 Brussel, Belgium - (svdhaege, fcanters)@vub.ac.be
###### Abstract
This research focuses on the potential of spatial metrics for describing distinct types of urban morphology at block level. Urban blocks typically consist of built-up and non-built-up areas, with a specific composition and configuration. To characterize the two-dimensional structure of urban blocks, in addition to traditional landscape ecological metrics, two alternative methods are proposed, describing the alternation between, and the characteristic size of, built-up and non-built-up surfaces along a set of radial and contour-based profiles. Urban areas are, of course, also characterized by their third dimension; therefore, metrics were also developed to describe different characteristics of the elevation pattern of built-up areas. A case study was carried out on the Brussels-Capital Region. Several vector layers of the large-scale UrbIS database for the region were used to define the blocks, the delineation of individual buildings within each block, and the number of floors for each building. High-resolution satellite data were used to define the presence of green in non-built areas. A combination of the metrics proposed shows clear potential for describing different types of urban morphology. Yet several issues require further research, such as the relation between different types of urban morphology and urban land use, as well as the potential of additional data (satellite imagery, digital elevation models, socio-economic data) for improving the distinction between different urban morphologies and/or land-use types.
Keywords: urban morphology, urban block, spatial metrics, remote sensing, Brussels Capital Region
## 1 Introduction
Approximately half of the world population lives in urbanized areas, and that share is expected to rise further in the coming decades. Recent developments in population growth, patterns of urban migration and increasing ecological problems emphasize the need for an efficient and sustainable use of urban areas (E.C. Environment DG, 2004). To this end, comprehensive knowledge about the causes, chronology and effects of urban dynamics is required ([PERSON] et al., 2002).
Urban growth models offer the possibility to predict future urban growth and to adapt urban policies based on the outcome of predefined development scenarios. The results and the applicability of these models strongly depend on the quality and scope of the data available for parameterization, calibration and validation ([PERSON] et al., 2005). In order to model urban dynamics, time series of detailed land-use data are required. Such data are usually obtained by visual interpretation of aerial photography or high-resolution satellite data. Visual interpretation of high-resolution (remote sensing) imagery, however, is time-consuming, which makes calibration of land-use change models, which often work with 1-year time steps, rather difficult. Visual interpretation is also a subjective process, which may lead to inconsistencies between land-use maps available for different periods; this, too, hampers the use of such data in model calibration. These obstacles call for a formalisation of the land-use interpretation process and for the development of (semi-)automatic approaches for inferring urban form and function from structural characteristics of the built-up area ([PERSON] and [PERSON], 2000).
The characteristic morphology of urban land use, resulting in a specific alteration of different types of urban land cover that can be described by the size and shape of objects, their relative area share and their spatial configuration, may allow distinguishing between different land-use classes. In the last two decades, several (semi-) automatic mapping approaches, which use structural and contextual information to identify different types of urban land use, have been developed ([PERSON] and [PERSON], 1997; [PERSON] and [PERSON], 2000; [PERSON] and [PERSON], 1990; [PERSON] et al. 1992; [PERSON] et al., 2003; [PERSON], 1990; [PERSON], 1996). Some researchers proposed to describe urban morphology or urban land use by means of spatial metrics ([PERSON] et al., 2002; [PERSON] et al., 2009; [PERSON] and [PERSON], 2005). Although the use of traditional metrics originating from the field of landscape ecology has been proven promising for analyzing urban areas, these metrics are not necessarily optimal for describing different types of urban morphology ([PERSON] et al., 2005).
This paper focuses on the potential of spatial metrics for describing different types of urban morphology by using a set of traditional indices originating from the field of landscape ecology, in combination with a set of newly proposed metrics, which attempt to describe specific characteristics of urban structure at the level of urban blocks in a more explicit way. Making use of large-scale vector data, defining building areas within the Brussels Capital Region, the set of proposed metrics will be computed within urban blocks, which are expected to exhibit a certain degree of homogeneity in terms of urban morphology. The different types of metrics proposed will be tested on their potential to distinguish different types of urban morphology.
## 2 Spatial Metrics
Landscapes can be defined by looking at the relation between different landscape components (patches), and can be characterized by the composition and the spatial configuration of these components ([PERSON], 1988). Landscape composition refers to properties concerning the presence and the proportion of each patch type, i.e. landscape class, without explicitly describing its spatial features. Landscape configuration, on the other hand, refers to the physical distribution and the spatial features of the patches present within the landscape.
The definition and delineation of spatial entities, for which spatial metrics are computed, is an important issue. Starting from a rasterized version of a land-cover map, either a regular window or a region-based approach can be used for the calculation of metrics. The conceptual simplicity, as well as the ease of implementation are clear advantages of regular window based approaches. A disadvantage, however, even when working with varying window sizes ([PERSON] and [PERSON], 2000; [PERSON], 1996), is that spatial pattern is described for artificial square areas, while landscape units usually have irregular boundaries. This leads to border effects in the calculation of the metrics. In the region-based approach metrics are computed at the level of meaningful spatial entities, which may be derived from other data sets (administrative units, urban blocks, patches with homogeneous land-use characteristics, etc.). Although this involves working with irregular landscape units, the region-based approach has some clear advantages compared to grid-based or moving-window based approaches ([PERSON] and [PERSON], 1997; [PERSON] et al., 2005).
Depending on the objectives of the research, and the characteristics of the landscape that are to be taken into account, different types of spatial metrics have been proposed ([PERSON] and [PERSON], 2000; [PERSON] et al. 1997; [PERSON] et al., 2003; [PERSON] and [PERSON] 2001; [PERSON] et al., 2009; [PERSON] and [PERSON], 2005). Because of the clear difference in structure between urban areas and natural (and semi-natural) landscapes, and because of the different nature of the processes that occur in both types of landscapes, there is a need for new metrics that are able to capture the specificity of urban structures and processes ([PERSON] et al., 2005). This paper presents two alternative methods for describing the two-dimensional composition and spatial configuration of urban blocks, based on the distinction between building and non-building areas. Furthermore, metrics that describe the vertical component of urban structure (height) are also proposed.
## 3 Study Area, Data and Preprocessing of Data
To analyze the potential of spatial metrics for distinguishing between different types of urban morphology, vector data defining the outline of building structures for the Brussels Capital Region were used. These data were extracted from the large-scale reference database UrbIS (Brussels Information System), made available by the Centre for Informatics of the Brussels Region (CIBG). A high resolution remote sensing image (Ikonos, 08/06/2000) was used to derive information concerning the presence of green areas, using an NDVI threshold of 0,29. The Brussels Capital Region is one of Belgium's three regions (figure 1), covers an area of 161 km², and counts 1,03 million inhabitants, which results in an average population density of more than 6000 inhabitants per square kilometre.
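As an illustration of the NDVI step, the sketch below derives a green mask from red and near-infrared reflectance. The function name, band handling and toy values are ours; only the 0,29 threshold comes from the text.

```python
import numpy as np

def green_mask(nir, red, threshold=0.29):
    """Boolean mask of vegetated pixels: NDVI = (NIR - R)/(NIR + R) > threshold."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    ndvi = (nir - red) / np.maximum(nir + red, 1e-9)  # guard against 0/0
    return ndvi > threshold

# toy 2x2 scene: two vegetated pixels (high NIR) next to sealed surfaces
nir = np.array([[0.5, 0.2], [0.6, 0.1]])
red = np.array([[0.1, 0.18], [0.1, 0.09]])
mask = green_mask(nir, red)
```

With real imagery the bands would come from the Ikonos red and near-infrared channels; the small epsilon in the denominator simply avoids division by zero over no-data pixels.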
UrbIS constitutes a collection of vector data describing the built-up environment and associated attribute data for the objects included in the different vector layers. The data layer 'Building' was used to define built-up structures. An additional layer, which delineates urban blocks enclosed by streets, railroads and/or waterways, was used to define the spatial units for which spatial metrics were calculated. The study area consists of 4628 urban blocks, with an average block size of 0.75 ha. Most blocks show a relatively high degree of homogeneity in terms of morphological characteristics.
Closer inspection of the UrbIS building layer reveals the presence of many small building structures that do not significantly contribute to the characteristic form of the built-up area within individual blocks (figure 2.a). While visual interpretation of built-up morphology automatically involves a process of generalisation, focusing on major features of building arrangement, quantitative characterization of urban morphology, using spatial metrics, takes all building structures into account. This holds the risk that metric values are overly influenced by the presence of small building structures that do not substantially contribute to the overall characteristics of the block, and that may have a negative impact on the potential of the measures to describe the typical morphology of a block. It may also reduce the ability of the metrics to distinguish between urban blocks with distinct morphological properties. In order to minimize this impact on the derivation of spatial metrics, a simple method was implemented to filter out small building structures prior to metrics calculation.
Small built-up structures are not taken into account if two conditions are fulfilled: (1) the relative difference in area between meaningful structures and less significant structures is substantial, and (2) the structures to be filtered out occupy only a minor part of the total building area. Starting from these two assumptions, all building structures are ranked in descending order, according to area. Then, the largest area ratio between two successive units is determined, i.e. the largest jump in the cumulative area histogram (figure 2.b). All structures left of that position in the graph are removed. Two threshold values are used. The first is a threshold value on the largest area ratio between two successive building structures; in case this threshold value is not exceeded, all building structures are considered significant, and no filtering is done. The second threshold puts a limit on the proportion of the total building area that can be removed; in case this proportion is exceeded, the largest unit of the small-structures group is transferred iteratively to the group of larger structures, until the threshold is no longer exceeded. In order to validate the proposed filtering method, significant building structures were manually identified for a subset of 168 urban blocks.
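The two-threshold filtering procedure can be sketched as follows. Parameter names are ours, and the defaults mirror the optimal values reported in the validation (an area ratio of 1 and a maximum removed share of 4%); this is an illustrative reading of the procedure, not the authors' implementation.

```python
def filter_small_structures(areas, ratio_threshold=1.0, max_removed_frac=0.04):
    """Return indices of building structures kept as significant in one block.

    areas: footprint areas of the block's building structures.
    Sketch of the two-condition filter: find the largest area-ratio jump in
    the descending ranking, drop everything after it, then transfer the
    largest dropped structures back while the removed share exceeds the cap.
    """
    order = sorted(range(len(areas)), key=lambda i: areas[i], reverse=True)
    if len(order) < 2:
        return order
    # largest area ratio between two successive structures (descending rank)
    ratios = [areas[order[k]] / areas[order[k + 1]] for k in range(len(order) - 1)]
    k_jump = max(range(len(ratios)), key=ratios.__getitem__)
    if ratios[k_jump] <= ratio_threshold:
        return order                      # condition (1): no clear jump, keep all
    keep, drop = order[:k_jump + 1], order[k_jump + 1:]
    total = sum(areas)
    # condition (2): removed share must stay below the cap
    while drop and sum(areas[i] for i in drop) / total > max_removed_frac:
        keep.append(drop.pop(0))          # drop[0] is the largest small structure
    return keep
```

For example, `filter_small_structures([100, 90, 80, 2, 1])` keeps the three large footprints and drops the two small ones, since the jump between 80 and 2 dominates and the removed share stays below 4%.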
Figure 1: Brussels Capital Region: location and sealed surface cover
Comparing the results of manual and automatic filtering for various threshold values, using a ROC curve (figure 3), revealed that an area ratio of 1 and a maximum proportion of building area to be removed of 4% resulted in an optimal correspondence between the structures identified as significant manually and those retained by the automated filtering approach, with an overall accuracy of 93.7%. As can be seen in the ROC curve, using these threshold values results in a maximum sensitivity and a nearly stable level of (1 - specificity). The result of the filtering process, after removal of non-significant structures, is illustrated in figure 2.c.
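Each point on such a ROC curve is a (sensitivity, 1 - specificity) pair for one threshold setting. A minimal sketch of that computation (variable names are ours): each structure carries two boolean flags, whether the automatic filter retained it and whether the manual reference marked it significant.

```python
def roc_point(auto_kept, manually_significant):
    """One ROC point for a given threshold setting: (sensitivity, 1 - specificity),
    from per-structure boolean flags (predicted vs. reference significance)."""
    pairs = list(zip(auto_kept, manually_significant))
    tp = sum(p and t for p, t in pairs)        # kept and truly significant
    fn = sum((not p) and t for p, t in pairs)  # wrongly filtered out
    fp = sum(p and (not t) for p, t in pairs)  # kept clutter
    tn = sum((not p) and (not t) for p, t in pairs)
    return tp / (tp + fn), fp / (fp + tn)
```

Sweeping the filter's two thresholds and calling `roc_point` per setting reproduces the kind of curve shown in figure 3.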
## 4 Methodology
### 4.1 Choice and definition of metrics
In this research, use was made of spatial metrics originating from the field of landscape ecology; newly developed metrics describing the structure of the built-up environment along radial and contour-based profiles; indices that describe the composition of building and non-building areas; as well as metrics describing the vertical component of urban structure.
Landscape ecological metrics describe land-cover composition and spatial characteristics within the spatial unit(s) (\"landscapes\") considered. Region-based metrics can be computed at three (hierarchical) levels. Metrics defined at the landscape level, in our case an urban block, will provide information about the block as a whole, not pertaining to one particular type of land cover within the block. Class-level metrics describe characteristics of each thematic class, i.e. areas covered by building structures and other areas within the block. Information about individual objects that belong to one class can be obtained by computing metrics at the patch level. In this study patches correspond to groups of attached buildings (or non-building areas) (figure 4.a). In order to distinguish different types of urban morphology, a set of spatial metrics is needed that provides information about the relative presence and the spatial arrangement of building structures and the matrix around it. For both the building and non-building areas, the patch density, the average patch size and coefficient of variation, the largest patch index, the shape index, and the perimeter-area ratio were computed for each block with Fragstats, version 3.3 ([PERSON] et al., 2002).
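For illustration, a toy version of three of these class-level metrics (patch density, mean patch size, largest patch index) on a rasterised block might look as follows. This is our own simplified 4-connectivity labelling, not the actual Fragstats implementation.

```python
from collections import deque

def patch_metrics(grid, cls=1, cell_area=1.0):
    """Toy class-level metrics on a rasterised block: patch density,
    mean patch size, and largest patch index (LPI, in % of block area).
    Patches are 4-connected groups of cells of class `cls`."""
    rows, cols = len(grid), len(grid[0])
    seen = [[False] * cols for _ in range(rows)]
    patch_areas = []
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] == cls and not seen[r][c]:
                seen[r][c] = True
                size, queue = 0, deque([(r, c)])
                while queue:                      # BFS over one patch
                    i, j = queue.popleft()
                    size += 1
                    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ni, nj = i + di, j + dj
                        if (0 <= ni < rows and 0 <= nj < cols
                                and grid[ni][nj] == cls and not seen[ni][nj]):
                            seen[ni][nj] = True
                            queue.append((ni, nj))
                patch_areas.append(size * cell_area)
    block_area = rows * cols * cell_area
    density = len(patch_areas) / block_area
    mean_size = sum(patch_areas) / len(patch_areas) if patch_areas else 0.0
    lpi = 100.0 * max(patch_areas) / block_area if patch_areas else 0.0
    return density, mean_size, lpi
```

The same labelling run on the non-building class (`cls=0`) yields the matrix-side metrics used alongside the building-side ones.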
Above-mentioned landscape ecological spatial metrics have the advantage of taking the whole urban block into consideration. On the other hand, they do not explicitly describe the spatial arrangement of building and non-building areas within the urban block. More explicit information about the spatial positioning and arrangement of building structures and non-building areas within the blocks can be captured by describing their occurrence along so-called 'building profiles'. Starting by identifying the centroid of each urban block, radial profiles can be constructed in a predefined number of directions, and the alternation of building and non-building areas along these profiles can be registered (figure 4.b). Based on this alternation, different metrics can be defined, which are listed below.
1. Normalised number of building/non-building alternations:
\[NNA=\left(\frac{\sum\limits_{j=1}^{z}\Delta(0\leftrightarrow 1)_{j}}{l_{tot}}\right)\cdot 100\]
2. Average length of building/non-building areas:
\[\bar{l}=\frac{\sum\limits_{j=1}^{z}\sum\limits_{i}l_{ij}}{n}\]
3. Coefficient of variation of the length of building/non-building areas:
\[CV_{l}=\frac{\sigma_{l}}{\bar{l}}\]
where: \(z\) = the number of profiles,
\(\Delta(0\leftrightarrow 1)_{j}\) = number of alternations between building and non-building areas along profile \(j\),
\(l_{tot}\) = total length of the profiles,
\(l_{ij}\) = the length of the \(i^{th}\) building/non-building stretch along profile \(j\),
\(n\) = total number of building/non-building stretches along the profiles,
\(\sigma_{l}\) = standard deviation of the length of building/non-building stretches.

Figure 2: (a) Urban block with presence of structural clutter, (b) principle of the filtering technique, (c) urban block after removal of non-significant structures

Figure 3: ROC curve obtained using different threshold values for the filtering process
Analogous to the use of radial profiles, the alternation between building and non-building areas can also be analysed along contours, constructed parallel to the urban block boundary, with a constant distance specified between two successive contours (figure 4.c). The defined metrics are the same as for the radial profiles, with the addition of specific metrics that describe the alternation between building and non-building areas and their average length along the street side (first contour). As urban morphology often expresses itself clearly along the street side (or by rejecting the urban continuity along the street side), the configuration of building and non-building areas along the first contour (street side) describes the two-dimensional appearance of the urban fabric along the perimeter of the block. Measuring the configuration along successive contours within the block offers the possibility to describe the internal configuration of urban blocks in an alternative way, as compared to the radial profiling approach.
Based on the results of preliminary research, which analysed the sensitivity of the radial profile based metrics to an increasing number of extracted profiles, and of the contour profile based metrics to an increasing distance between two successive contours, these parameters have been set to 16 radial profiles, and a distance of 10 metres between two successive contours.
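Since both radial and contour profiles reduce to sequences of building/non-building samples, the three profile metrics can be computed the same way for either. Below is a minimal sketch under our own discretisation (each profile sampled every `step` metres, 1 = building, 0 = non-building).

```python
import statistics

def profile_metrics(profiles, step=1.0):
    """NNA, average stretch length, and CV of stretch length for a set of
    profiles, each given as a list of 0/1 samples taken every `step` metres."""
    total_len = step * sum(len(p) for p in profiles)
    # count transitions 0<->1 along each profile
    alternations = sum(
        sum(1 for a, b in zip(p, p[1:]) if a != b) for p in profiles
    )
    # lengths of consecutive building / non-building stretches
    stretches = []
    for p in profiles:
        run = 1
        for a, b in zip(p, p[1:]):
            if a == b:
                run += 1
            else:
                stretches.append(run * step)
                run = 1
        stretches.append(run * step)
    nna = 100.0 * alternations / total_len
    mean_len = sum(stretches) / len(stretches)
    cv = statistics.pstdev(stretches) / mean_len
    return nna, mean_len, cv
```

For the paper's setting one would pass 16 rasterised radial profiles per block; for the contour variant, the sampled first contour alone gives the street-side metrics.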
Next to these three structure-describing methods, information on the appearance of individual dwellings was included in the analysis. Besides the dwelling density, the average dwelling size and its coefficient of variation, the ratio between the number of building areas and dwellings was also calculated. Furthermore, information about the matrix surrounding the building area was taken into account by including the ratio between sealed non-building and sealed building area, and the ratio between vegetation and building area, as well as the maximum area of sealed non-building patches. Including these metrics offers the possibility to describe the composition of both the building and non-building area in a more specific way.
Perceiving the urban fabric is of course not limited to two-dimensional space and, as a result, including height information can improve the distinction between different morphological classes. In this study, information about the number of floors for each dwelling has been used as a substitute for elevation data. Using this information, the average number of floors, as well as the maximum occurring number of floors, were determined for each urban block. Furthermore, the ratio of the area-weighted number of floors to the footprint area, as well as the ratio of the sum of the vertical building surface - on the edge between building and non-building area - to the footprint area were included. While the first two height-describing metrics are rather common and intuitive, the latter two describe the vertical appearance of the urban structure in relation to its footprint.
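The first three height metrics are simple aggregates; the sketch below shows one reading of them (in particular, we interpret the ratio of the area-weighted number of floors to the footprint area as sum(area_i * floors_i) / sum(area_i) — this interpretation is ours).

```python
def height_metrics(areas, floors):
    """Block-level height metrics from per-dwelling footprint areas and
    floor counts: average floors, maximum floors, and the area-weighted
    number of floors divided by the total footprint area."""
    avg_floors = sum(floors) / len(floors)
    max_floors = max(floors)
    weighted = sum(a * f for a, f in zip(areas, floors)) / sum(areas)
    return avg_floors, max_floors, weighted
```

For instance, a block with a 100 m² two-storey and a 50 m² four-storey dwelling has an average of 3 floors but an area-weighted value of 8/3, reflecting the dominance of the larger, lower building.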
### 4.2 Urban typology
Using the above-defined spatial metrics, which allow describing various characteristics of the built-up environment, the potential of these metrics for describing and discriminating between different types of urban morphology was tested. The typology that was defined consists of 10 distinct types of urban morphology that arose during the historical evolution (densification and expansion) of the Brussels Capital Region. A brief description of the morphological classes can be found below:
1. _Detached_: suburban blocks, individual buildings, often surrounded by vegetation;
2. _Semi-detached_: mid-suburban blocks, several collections of individual dwellings parallel to the street side, similar to the Garden Cities concept;
3. _Haussmannian expansion_: continuous dwellings along the street side, 4-5 floors high, similar to the Haussmann expansion in Paris;
4. _High-rise blocks_: continuous dwellings along the street side, empty plots filled up with high-rise (≥8 floors) buildings;
5. _Landmarks_: (mostly) single buildings, e.g. churches, skyscrapers, etc.;
6. _Industrial/Commercial_: large buildings, often showing no clear structure in relation to the road network;
7. _Open plan_: often individual buildings showing a clear pattern, not related to the road network;
8. _Continuous with front gardens_: adjacent dwellings parallel to the street side, sealed or vegetated surfaces in front;
9. _Continuous street side_: adjacent dwellings along the street side;
10. _Urban green_: (almost) no buildings, few sealed surfaces, e.g. parks.
In order to analyze the potential of the defined spatial metrics for describing spatial structure and distinguishing between different types of urban morphology, approximately 15 urban blocks per class, each typically representing one of the classes described above, were visually selected.
Figure 4: Schematic representation of (a) the region based approach, (b) the radial profile based approach, (c) the contour profile based approach
## 5 Results and Discussion
In order to describe the specific characteristics of the building environment and of the matrix surrounding these areas (sealed non-building and green surfaces), all spatial metrics defined in section 4.1 were calculated for the 150 blocks representing the 10 urban morphologies defined in section 4.2. To analyze the contribution of the metrics to the distinction between the different morphologies, the metric values derived for each block were included in a stepwise discriminant analysis (using the Wilks' Lambda criterion and an F-probability of 0,05 to enter and 0,10 to remove). This results in a set of discriminant functions, to which each of the metrics contributes to a certain degree.
The first 4 discriminant functions account for 86,8% (respectively 41,0; 17,9; 15,3 and 12,5%) of the total variance. The scores on the first discriminant function are highly influenced by _dwelling density_ and the _ratio between the number of building areas and dwellings_. As such, this function reveals information on the occurrence of attached/detached dwellings within the urban block. For the second discriminant function, the most important information is gathered from the _coefficient of variation of the length of (non-)building segments along radial profiles_, the _average length of building segments along the first contour_, the _ratio between the area weighted number of floors and the footprints surface_ and the _building density_. On the one hand this function expresses both the street side pattern and the regularity of the inner area of the urban block, on the other hand it contains information on the regularity of the vertical component of the urban structure.
The third discriminant function relates to the presence and the regularity of the shape of building areas, with _the shape index_ and the _perimeter-area ratio for building structures_, as well as the _building density_, as the most decisive variables. Similar to the second discriminant function, the scores on the fourth discriminant function are highly determined by the _coefficient of variation of the length of (non-)building segments along radial profiles_, the _average length of building segments along the first contour_, the _maximum number of floors within the urban block_ and the _building density_. As such, this function also relates to the inner spatial pattern of the urban block, but it is determined more by the maxima of the vertical component than by its regularity.
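As a minimal stand-in for the multi-class stepwise discriminant analysis used here, the two-class Fisher discriminant below illustrates the underlying idea of projecting metric vectors onto directions that separate morphology classes. It is a sketch of the general technique, not the paper's Wilks'-Lambda-based procedure.

```python
import numpy as np

def fisher_direction(X1, X2):
    """Two-class Fisher discriminant direction, unit-normalised:
    w proportional to Sw^{-1} (m1 - m2), with Sw the pooled
    within-class scatter matrix of the two samples."""
    X1, X2 = np.asarray(X1, float), np.asarray(X2, float)
    m1, m2 = X1.mean(axis=0), X2.mean(axis=0)
    Sw = (np.atleast_2d(np.cov(X1.T)) * (len(X1) - 1)
          + np.atleast_2d(np.cov(X2.T)) * (len(X2) - 1))
    w = np.linalg.solve(Sw, m1 - m2)
    return w / np.linalg.norm(w)
```

Projecting blocks of two morphology classes onto `w` and thresholding the scores mimics, in miniature, what the discriminant functions described above do for all ten classes at once.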
Figure 5 shows the position of the urban blocks in a two-dimensional space, defined by the first two discriminant functions. As can be seen, the urban blocks largely occupied by urban green, and the blocks characterized by continuous dwellings along the street side, are quite distinctive with respect to the other morphology types. These two classes have respectively low and high scores on the first discriminant function and high scores on the second, and are located at the ends of a V-shaped curve along which the other morphological classes are positioned. Except for the Haussmannian and high-rise urban blocks, which demonstrate a strong variation in discriminant space, the morphological classes show up as rather compact groups in the plot.
As could be expected from this plot, the highest confusion (table 1) occurs between detached and semi-detached blocks, between blocks characterised by landmarks and a typical industrial/commercial morphology, and between high-rise, Haussmannian and the continuous along the street side morphology, due to some common characteristics of these classes.
Nevertheless, with an overall accuracy of 87,3%, the use of spatial metrics for distinguishing different types of urban morphology shows clear potential. Still, one should be cautious when interpreting these results, as they are based on a training set of 150 urban blocks which were all identified as typical representatives of the 10 defined classes. As the urban fabric is often a mix of different morphologies, many urban blocks will position themselves somewhere in the continuum between these typical classes.
## 6 Conclusion
In this paper, the potential of traditional landscape ecological metrics, as well as newly proposed metrics, based on radial and contour profiles, composition of the non-building area, characteristics of individual dwellings and their relation to the building area, and the vertical component of the urban structure, for distinguishing different types of urban morphology has been investigated. The results show the utility of the proposed approach for describing urban morphology, and demonstrate that traditional landscape ecological metrics and the newly proposed metrics are complementary in capturing the characteristics of different urban morphologies. Taking into account the complex structure of the urban fabric, which consists of urban blocks located along the continuum between typical (pure) morphological classes, research on alternative
\begin{table}
\begin{tabular}{|c|c c c c c c c c c c|} \hline \multicolumn{11}{|c|}{Predicted group membership} \\ \hline & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 \\ \cline{2-11}
1 & 93 & 7 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
2 & 20 & 80 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
3 & 0 & 0 & 75 & 13 & 0 & 0 & 0 & 0 & 12 & 0 \\
4 & 0 & 0 & 0 & 86 & 0 & 0 & 7 & 0 & 7 & 0 \\
5 & 0 & 0 & 0 & 0 & 71 & 22 & 7 & 0 & 0 & 0 \\
6 & 7 & 0 & 0 & 0 & 20 & 73 & 0 & 0 & 0 & 0 \\
7 & 0 & 0 & 0 & 0 & 0 & 0 & 100 & 0 & 0 & 0 \\
8 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 100 & 0 & 0 \\
9 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 100 & 0 \\
10 & 0 & 0 & 0 & 0 & 0 & 7 & 0 & 0 & 0 & 93 \\ \hline \end{tabular}
\end{table}
Table 1: Confusion matrix derived by cross-validation.
For class definitions, see section 4.2.
Figure 5: Discriminant scores for the training set
methods to describe the state of urban blocks (instead of traditional hard classification approaches) is required. Metric-based descriptions of urban morphology, as proposed in this paper, may provide objective information for comparing intra-urban structure for different cities. In that sense, research is also needed on applying metric-based approaches in urban areas with different morphological characteristics.
Another interesting research topic is the relation between urban land use and urban morphology. As some morphological classes (e.g. open plan, landmarks, Haussmannian blocks) are not related to one single type of land use, additional data sources (e.g. socio-economic data) will be indispensable for inferring land use information from the morphological characteristics of urban blocks. Taking into account recent developments in urban planning, which increasingly make use of urban growth models, work on the utility of morphology-based information as input for urban growth modelling, and on the relation between urban form and function, remains an interesting challenge.
## 8 Acknowledgements
Research funded by a Ph.D grant of the Institute for the Promotion of Innovation through Science and Technology in Flanders (IWT-Vlaanderen).
isprs | Mapping urban form and function at city block level using spatial metrics | Sven Vanderhaegen, Frank Canters | https://doi.org/10.1016/j.landurbplan.2017.05.023 | 2017 | CC-BY

isprs/f7c02588_4272_4e85_b7c7_68e7808c3da6.md
# Vegetation endmember extraction in Hyperion Images
[PERSON]
Remote Sensing Department, K. N. Toosi University of Technology, Mirdamad Cross, Valiasr Av., Tehran, Iran, [EMAIL_ADDRESS]
[PERSON]
Remote Sensing Department, K. N. Toosi University of Technology, [EMAIL_ADDRESS]
[PERSON]
Remote Sensing Department, K. N. Toosi University of Technology, [EMAIL_ADDRESS]
[PERSON]
Remote Sensing Department, K. N. Toosi University of Technology, [EMAIL_ADDRESS]
###### Abstract
Hyperspectral imaging sensors used in environmental applications have high spectral resolution and low spatial resolution, so that numerous disparate substances can contribute to the spectrum measured from a single pixel in the field of view of the sensor. An important problem in hyperspectral image processing is to decompose mixed pixels into the materials that contribute to the pixel (endmembers) and a set of corresponding fractions of each spectral signature in the pixel (abundances); this is known as the unmixing problem. By definition, an endmember is an idealized pure signature of a class. Endmember extraction is one of the fundamental and crucial tasks in hyperspectral data exploitation. It has received considerable interest in recent years, with many researchers devoting their efforts to developing algorithms for endmember extraction from hyperspectral data. The ultimate goal of an Endmember Extraction Algorithm (EEA) is to find the purest form of each spectrally distinct material in a scene. EEAs differ in the type of endmembers derived, the number of endmembers estimated with respect to the number of spectral bands and pixels processed, the required input data, and the noise model, if any, assumed in the signal model. Identifying endmembers that satisfy both physical and mathematical imperatives is a considerable challenge, making autonomous endmember determination the hardest part of the unmixing problem. Of the three stages that comprise unmixing, endmember determination is the most closely aligned with the material identification capabilities of unmixing.
Non-statistical (geometrical) approaches essentially assume the endmembers are deterministic quantities, whereas statistical approaches view endmembers either as deterministic, with an associated degree of uncertainty, or as fully stochastic random variables having probability density functions. In addition, specific features concerning the outputs, inputs, and noise models used by these algorithms distinguish the properties of endmember-determination algorithms.
In this paper, Endmember Extraction Algorithms (EEAs) were applied to a Hyperion image of southern Tehran, Iran. A large number of endmembers was suggested to enhance the classification accuracy, while the seasonal variation in the spectral response should be taken into account in vegetation classification. We compare the results of the geometrical approach to vegetation endmember extraction, assisted by vegetation indices.
## 1 Introduction
### Overview
Hyperspectral imaging sensors used in environmental applications have high spectral resolution and low spatial resolution, so that numerous disparate substances can contribute to the spectrum measured from a single pixel in the field of view of the sensor. An important problem in hyperspectral image processing is to decompose mixed pixels into the materials that contribute to the pixel (endmembers) and a set of corresponding fractions of each spectral signature in the pixel (abundances); this is known as the unmixing problem.
The availability of a new generation of hyperspectral sensors such as Hyperion has led to new challenges in the area of crop type mapping and agricultural management. The sensor's 242 spectral bands between 400 and 2500 nm (level 1B1) and spatial resolution of 30 m bear high potential for agricultural crop discrimination and detailed land use classification [1]. Advantages of this technology include both the qualitative benefits derived from a visual overview and, more importantly, the quantitative abilities for systematic assessment and monitoring. In every remotely sensed image, a considerable number of mixed pixels is present. A mixed pixel is a picture element representing an area occupied by more than one ground cover type. Several research objectives were accomplished:
(1) Select optimal bands in hyperspectral images those are most useful in vegetation classification,
(2) Identify optimal endmember, signature spectrum that represents a certain class, for vegetation classification, and
(3) Test effective Endmember Extraction Algorithms for classification of vegetation type.
The first pre-processing step, which includes radiometric correction and removal of atmospheric effects, involves calibration and atmospheric correction. We used the Fast Line-of-sight Atmospheric Analysis of Spectral Hypercubes (FLAASH) module in the ENVI software for atmospheric correction. In this research, the capabilities of Hyperion hyperspectral imagery acquired over an agricultural area located in the southern parts of Tehran, Iran, have been evaluated for discriminating vegetated areas from others. Atmospheric correction and other pre-processing operations on the hyperspectral imagery have been performed using routine procedures.
Spectral unmixing algorithms use a variety of different mathematical procedures for endmember extraction and abundance estimation. The unmixing problem comprises three sequential steps: dimension reduction, endmember determination, and inversion. Because hyperspectral scenes can include extremely large amounts of data, some algorithms for spectral unmixing first use the image itself to estimate the endmembers present in the scene. The dimension-reduction stage reduces the dimension of the original data. This step is optional and is invoked only by some algorithms to reduce the computational cost of subsequent processing; it also selects bands with a higher signal-to-noise ratio (SNR) to separate endmember spectra without data loss. We use the MNF (Minimum Noise Fraction) method to achieve this goal. In this paper, we introduce the study site and the EEAs applied to the image, then focus on the second step of the unmixing problem. Finally, we present the results of this research.
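The inversion step mentioned above typically assumes a linear mixing model, in which each pixel spectrum is a weighted sum of endmember signatures. The following is a minimal Python/numpy sketch of that model and an unconstrained least-squares inversion (toy spectra; an operational inversion would add non-negativity and sum-to-one constraints on the abundances):

```python
import numpy as np

def unmix_lstsq(E, x):
    """Estimate abundances a for the linear mixing model x = E @ a + noise.

    E : (bands, p) matrix of endmember signatures; x : (bands,) mixed spectrum.
    Unconstrained least squares only; real pipelines constrain abundances
    to be non-negative and to sum to one.
    """
    a, *_ = np.linalg.lstsq(E, x, rcond=None)
    return a

# toy scene: 2 endmembers observed in 4 bands, pixel is a 70/30 mixture
E = np.array([[0.1, 0.8],
              [0.2, 0.7],
              [0.6, 0.3],
              [0.9, 0.1]])
a_true = np.array([0.7, 0.3])
x = E @ a_true               # noise-free mixed-pixel spectrum
a_hat = unmix_lstsq(E, x)    # recovers the true abundances exactly
```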
### The Site Study
An agricultural area located in southern parts of Tehran, known as Ahmadabad has been selected as the study site. Wheat and barley are the main agricultural crops in the area. More than 30 fields of detailed ground-truth dataset have been visited in the field and their records have been used as a ground truth data for training and verifying the results of the classification.
### Optimal Bands in Hyperion Images
Hyperion data were acquired over the Ahmadabad village on May 21, 2002 at 06:57:56 GMT. The EO-1 satellite is in a sun-synchronous orbit at 705 km altitude. Hyperion data include 256 pixels with a nominal size of 30 m on the ground over a 7.65 km swath. Well-calibrated data (Level 1B1) are normally available. Post-Level 1B1 processing of the dataset, as performed in this study, included correction for bad lines, striping pixels and smile, atmospheric correction and co-alignment. Hyperion data are acquired in pushbroom mode with two spectrometers: one operates in the VNIR range (70 bands between 356-1058 nm with an average FWHM of 10.90 nm) and the other in the SWIR range (172 bands between 852-2577 nm, with an average FWHM of 10.14 nm). 44 of the 242 bands, including bands 1-7, 58-76 and 225-242, are set to zero by TRW software during Level 1B1 processing [2][3]. Post-Level 1B1 data processing operations for preparation of the Hyperion data for classification, including band selection, correction for bad lines, striping pixels and smile, a pixel-based atmospheric correction using FLAASH [2], and co-alignment, were performed.
## 2 Identify Optimal Endmember
An important problem in hyperspectral image processing is to decompose mixed pixels into the materials that contribute to the pixel (endmembers) and a set of corresponding fractions of each spectral signature in the pixel (abundances); this is known as the unmixing problem.
By definition, an endmember is an idealized pure signature of a class. Endmember extraction is one of the fundamental and crucial tasks in hyperspectral data exploitation. It has received considerable interest in recent years, with many researchers devoting their efforts to developing algorithms for endmember extraction from hyperspectral data.
Hyperspectral imaging sensors used in environmental applications have high spectral resolution and low spatial resolution, so that numerous disparate substances can contribute to the spectrum measured from a single pixel. A mixed pixel occurs, owing to low spatial resolution, when more than one material substance is present in a pixel; these substances are considered to be mixed linearly or nonlinearly in the pixel.
Recent research aims to identify the individual constituent materials in a mixed pixel, as well as the proportions in which they appear. Spectral unmixing is the procedure by which the measured spectrum of a mixed pixel is decomposed into a collection of constituent object spectra present in the scene. In estimating the constituent members and abundances in pixel spectra, unmixing algorithms incorporate assumptions regarding the physical mechanisms and mathematical structure by which the reflectance properties of disparate substances combine to yield the mixed pixel spectra. Detection algorithms make similar assumptions when testing for the existence of a specific substance in a mixed pixel. Not surprisingly, the hyperspectral detection and unmixing problems are closely related.
### PPI
The Pixel Purity Index (PPI) has been widely used in hyperspectral image analysis for endmember extraction due to its popularity and availability in the Environment for Visualizing Images (ENVI) software [4]. In this experiment, the PPI was implemented to find endmembers for the image scene in Fig 1, using the same set of randomly generated initial skewers. Fig 2 shows the endmember pixels extracted by the PPI.
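The core of the PPI can be sketched in a few lines: pixels are projected onto random unit vectors ("skewers"), and each pixel is scored by how often it falls at an extreme of a projection. A minimal Python/numpy illustration on toy 2-band data (not ENVI's implementation):

```python
import numpy as np

def ppi_scores(X, n_skewers=500, seed=0):
    """Pixel Purity Index: project pixels onto random unit skewers and
    count how often each pixel is the minimum or maximum of a projection.

    X : (n_pixels, n_bands) spectra. Returns per-pixel purity counts.
    """
    rng = np.random.default_rng(seed)
    scores = np.zeros(len(X), dtype=int)
    for _ in range(n_skewers):
        skewer = rng.normal(size=X.shape[1])
        skewer /= np.linalg.norm(skewer)
        proj = X @ skewer
        scores[np.argmin(proj)] += 1
        scores[np.argmax(proj)] += 1
    return scores

# toy data: 3 pure vertices plus strictly interior mixtures; only the
# vertices can be extreme points of a linear projection
verts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
interior = np.array([[0.4, 0.3], [0.3, 0.4], [0.33, 0.33]])
X = np.vstack([verts, interior])
s = ppi_scores(X)
```

Pixels with high scores are candidate endmembers; in ENVI they are then inspected interactively, e.g. in an n-dimensional visualizer.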
### SMACC
A new endmember extraction method has been developed that is based on a convex cone model for representing vector data. The endmembers are selected directly from the data set. The algorithm for finding the endmembers is sequential: the convex cone model starts with a single endmember and increases incrementally in dimension. Abundance maps are simultaneously generated and updated at each step. A new endmember is identified based on the angle it makes with the existing cone: the data vector making the maximum angle with the existing cone is chosen as the next endmember to enlarge the endmember set. The algorithm updates the abundances of previous endmembers and ensures that the abundances of previous and current endmembers remain positive or zero. The algorithm terminates when all of the data vectors are within the convex cone, to some tolerance. The method offers advantages for hyperspectral data sets where high correlation among channels and pixels can impair unmixing by standard techniques [5]. SMACC has been used in hyperspectral image analysis for endmember extraction due to its popularity and availability in the ENVI software. In endmember extraction algorithms, finding the number of endmembers is very important. For example, Virtual Dimensionality (VD) and hyperspectral signal identification by minimum error (HySime) are two methods for estimating the number of endmembers, but their computational complexity is high [7].
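The sequential selection at the heart of SMACC can be illustrated with a simplified greedy scheme: start from the strongest pixel and repeatedly add the pixel with the largest residual after projection onto the span of the endmembers selected so far. This Python/numpy sketch is a simplification (the real SMACC maintains a convex cone and non-negative abundance updates):

```python
import numpy as np

def sequential_endmembers(X, p):
    """Greedy endmember selection in the spirit of SMACC (simplified:
    uses orthogonal-subspace residuals rather than the full convex-cone
    update with non-negative abundances).

    X : (n_pixels, n_bands) spectra. Returns indices of p selected pixels.
    """
    idx = [int(np.argmax(np.linalg.norm(X, axis=1)))]  # strongest pixel first
    for _ in range(p - 1):
        E = X[idx].T                        # (bands, k) selected endmembers
        P = E @ np.linalg.pinv(E)           # projector onto span(E)
        resid = X - (P @ X.T).T             # part of each pixel not explained
        idx.append(int(np.argmax(np.linalg.norm(resid, axis=1))))
    return idx

# toy scene: two pure spectra and two of their mixtures
e1 = np.array([1.0, 0.0, 0.0])
e2 = np.array([0.0, 1.0, 1.0])
X = np.vstack([e1, e2, 0.5 * e1 + 0.5 * e2, 0.8 * e1 + 0.2 * e2])
sel = sequential_endmembers(X, 2)           # picks the two pure pixels
```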
## 3 Test Effective Endmember Extraction Algorithms for Classification of Vegetation Type
The Modified Red Edge Normalized Difference Vegetation Index (mNDVI705) is a modification of the Red Edge NDVI. It differs from the Red Edge NDVI by incorporating a correction for leaf specular reflection. The mNDVI705 capitalizes on the sensitivity of the vegetation red edge to small changes in canopy foliage content, gap fraction, and senescence. Applications include precision agriculture, forest monitoring, and vegetation stress detection. The mNDVI705 index is defined by the following equation [7],[5]:
\[mNDVI_{705}=\frac{\rho_{750}-\rho_{705}}{\rho_{750}+\rho_{705}-2\rho_{445}}\hskip 28.452756pt(1)\]
Where: \(\rho_{750}\) = reflectance at 750 nm, \(\rho_{705}\) = reflectance at 705 nm, and \(\rho_{445}\) = reflectance at 445 nm.
In this experiment, the mNDVI 705 was implemented to find vegetative area on the image scene (Fig 5).
The vegetated area was then masked from the original Hyperion image. We applied SMACC to find endmembers for alternative endmember numbers p = 7, 10, 20, 30. This experiment showed how the choice of the number of endmembers helps to separate pure pixels from mixed ones: if the number of endmembers exceeds the real one, the algorithm reports mixed pixels as pure. Using a vegetation index proved suitable for accurate image classification. With this approach we estimate that the number of vegetative endmembers is less than 11, and that the remaining endmembers relate to mixed pixels. These results are depicted in figures 6-9.
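As an illustration of the masking step, the following Python/numpy sketch computes mNDVI705 from the three reflectance bands of equation (1) and thresholds it at 0.3; the reflectance values are invented for illustration:

```python
import numpy as np

def mndvi705(r750, r705, r445):
    """Modified red-edge NDVI: (r750 - r705) / (r750 + r705 - 2 * r445)."""
    return (r750 - r705) / (r750 + r705 - 2.0 * r445)

# toy pixels: a vigorous canopy, a sparse canopy, and bare soil
r750 = np.array([0.50, 0.35, 0.30])
r705 = np.array([0.15, 0.25, 0.28])
r445 = np.array([0.03, 0.04, 0.05])
index = mndvi705(r750, r705, r445)
veg_mask = index > 0.3       # threshold used to isolate vegetated pixels
```

On a full Hyperion cube the same expression is applied band-wise to every pixel, and `veg_mask` selects the pixels passed on to SMACC.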
Fig 4: Hyperion SMACC Spectral Library for 10 endmember
Fig 5: Hyperion image of area that mNDVI 705 \(>\) 0.3
Fig 3: Hyperion SMACC Abundance image
Fig 6: Hyperion SMACC Spectral Library P=7
Fig 7: Hyperion SMACC Spectral Library P=10
## 4 Conclusions
Endmember extraction provides a powerful tool for analysis of the highly redundant pixel spectra and channel images of hyperspectral data sets. An ultimate goal of an Endmember Extraction Algorithm (EEA) is to find the purest form of each spectrally distinct material in a scene. EEAs differ in the type of endmembers derived and in the number of endmembers estimated with respect to the number of spectral bands and pixels processed; the required input data and the noise model, if any, in the signal model were also surveyed. Identifying endmembers that satisfy both physical and mathematical imperatives is a considerable challenge, making autonomous endmember determination the hardest part of the unmixing problem. The SMACC approach, with constraints requiring positive abundances and constraints on the maximum number of endmembers for a pixel model, provides a detailed physical description of the spatial and spectral features of hyperspectral imagery. The approach can extract spectra that account for environmental and illumination variations in the spectral data and model the variations in the non-extreme spectra. The convex factorization approach autonomously finds small subsets of end-spectra to model the material types and variations. End-spectra that describe localized features are candidate anomalous spectra for processing with detection algorithms. Convex factorization applied to channel images determines the spectral bands in the data where images are highly correlated. Sets of images within these bands are nearly scaled copies of each other and could be co-added to increase the signal-to-noise ratio. Images from separate bands can be selected to enhance visualization of spatial-spectral boundaries in the data. By applying vegetation indices in combination with SMACC, vegetative endmember extraction was done accurately.
## References
* [PERSON],\"On the Atmospheric Correction for a Hyperion Scene\", Department of Civil Engineering National Chiao-Tung University Hsin-Chu, Taiwan.
* [PERSON], \"Hyperion Validation Report\", Boeing Report Number 03-ANCOS-001, July 16, 2003.
* [PERSON], [PERSON], [PERSON], [PERSON],\" Crop Types Classification By Hyperion Data And Unmixing Algorithm\", Map World Forum, Hyderabad, India.
* [PERSON], [PERSON],\" A Fast Iterative Algorithm for Implementation of Pixel Purity Index\", IEEE GEOSCIENCE AND REMOTE SENSING LETTERS, VOL. 3, NO. 1, JANUARY 2006.
* [PERSON], [PERSON], and [PERSON],\" The sequential maximum angle convex cone (SMACC) endmember model\", \"[EMAIL_ADDRESS]; phone 1 781 273-4770; fax 1 781 270-1161; www.spectral.com
* [PERSON],\" Unsupervised Hyperspectral Unmixing\", INSTITUTO SUPERIROT TESCNICO,2006.
* [7] 7 -[PERSON],\" A New Reflectance Index for Remote Sensing of Chlorophyll Content in Higher Plants: Tests Using Eucalyptus\", Journal of Plant Physiology 154:30-36, 1999.
* [8] 8 -[PERSON] and [PERSON],\" Relationships Between Leaf Pigment Content and Spectral Reflectance Across a Wide Range of Species, Leaf Structures and Developmental Stages,\" Remote Sensing of Environment 81:337-354, 2002.
Figure 8: Hyperion SMACC Spectral Library P=20
Figure 9: Hyperion SMACC Spectral Library P=30
isprs | Convex Geometry Based Endmember Extraction for Hyperspectral Images Classification | Nian Zhang, Wagdy Mahmoud | https://doi.org/10.1109/icist59754.2023.10367140 | 2023 | CC-BY

isprs/9796443f_e9d3_411d_922d_b9d88f2385a7.md
The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Volume XLII-3/W6, 2019. ISPRS-GEOGLAM-ISRO Joint Int. Workshop on "Earth Observations for Agricultural Monitoring", 18-20 February 2019, New Delhi, India
# A surface albedo product at high spatial resolution from a combination of Sentinel-2 and Landsat-8 data: the role of surface radiative forcing from agricultural areas as a major contributor to an abatement of carbon emission
[PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], and [PERSON]
Footnote 1: https://doi.org/10.5194/isprs-archives-XLII-3-W6-59-2019
###### Abstract
Satellite Sentinel-2 offers global coverage of the Earth's surface at a frequency of a few days with pixel sizes ranging from 10 to 60 meters. Such spatio-temporal resolution fosters advanced research, notably in agricultural areas. The role of surface albedo as a means to reduce surface radiative forcing linked to agricultural practice is a real concern. A high resolution (HR) surface albedo is now generated routinely from the observations of satellites Sentinel-2A and -2B, with Landsat to be added in the near future. The methodology, inherited from the Copernicus Global Land Service, is presented with some preliminary results.
Sentinel-2, albedo, agriculture, carbon.
## 1 Introduction
The advent of satellite fleets such as Sentinel-2A and -2B allows Earth observation research to enter a new era owing to frequent revisits of the globe at appropriate temporal and spatial resolution. Besides, the level of quality of the data, processed from lessons learned, promotes an operational use of HR (High Resolution) information, notably in the frame of the European Commission's Copernicus programme [1]. This answers an increasing demand for accurate and reliable environmental data. In particular, it aims at providing permanent monitoring of land territories. The spectral characteristics of the Sentinel-2 sensor allow measuring the biophysical variables describing vegetation conditions, the energy budget and the water cycle over the entire globe. These generic products can serve numerous applications such as agriculture and food security, weather forecasting, climate change impact studies, and water, forest and natural resources management. It should be noted that the Sentinel-2 mission expands on the French SPOT and US Landsat missions. In particular, the 30-meter footprint of Landsat-8 is compliant with the 10- and 20-meter pixel resolution of Sentinel-2, strengthening both spatial and temporal coverage. A merged reflectance product in a spectral domain covering the visible, near infrared and mid-infrared offers a new opportunity for collecting cutting-edge information about the monitoring of crops. The outcomes of the dissemination of a quality-checked HR product will certainly benefit programs like GEOGLAM (Group on Earth Observations Global Agricultural Monitoring), whose main concerns are the onset and decay of crops, and early warning. The presentation highlights the operational methodology to be implemented in order to measure the HR surface albedo and ensure a trimmed monitoring of worldwide crops.
## 2 Developing a surface albedo product at high resolution
The surface albedo is an Essential Climate Variable (ECV) that needs to be generated on a regular basis in order to ensure continuous estimates as a contribution of the radiation budget to the water and carbon balance. Key issues include timely production, the availability of historical archives, and the consistency of time series as long as possible. First of all, the removal of atmospheric effects must be properly handled. Herein, inputs like water vapour content and ozone are issued from ECMWF (European Centre for Medium-Range Weather Forecasts). Cloud removal and aerosol correction rely on the MAJA method, proved to be efficient for processing multi-temporal and multi-spectral data sets [2]. The present developments are intended to generate a time-evolving HR surface albedo product with a pixel size of 10 meters. A composite period of 60 days must be considered to gather sufficient observations in order to build a BRDF product. The surface albedo is refreshed over synthesis periods of 10 days. Due to the scarcity of clear scenes, only a few parameters are retrieved to make the product. Hence, the method, to be operational, makes use of the well-established approach based on a semi-empirical kernel-driven BRDF model [3]. The BRDF model is applied to Level 2A data of selected Sentinel-2 bands as displayed in Figure 1. Broadband albedo products are derived using narrow-to-broadband conversion coefficients based on numerical experiments with the PROSAIL radiative transfer model. The BRDF coefficients can also serve to perform a normalization of the data. In order to best answer user requirements, the surface albedo products are delivered with a quality flag and an uncertainty assessment. The true age of the product is also indicated, as the median value of the clear-sky scenes used.
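The kernel-driven model is linear in its coefficients, so the BRDF inversion over a composite period reduces to ordinary least squares. A minimal Python/numpy sketch (kernel values and coefficients are synthetic; in practice K_vol and K_geo would be computed from the sun-view geometry, e.g. with Ross-Li kernels):

```python
import numpy as np

def fit_kernel_brdf(K_vol, K_geo, reflectance):
    """Fit the semi-empirical kernel-driven model
        R = f_iso + f_vol * K_vol + f_geo * K_geo
    by linear least squares over a set of clear-sky observations."""
    A = np.column_stack([np.ones_like(K_vol), K_vol, K_geo])
    f, *_ = np.linalg.lstsq(A, reflectance, rcond=None)
    return f  # (f_iso, f_vol, f_geo)

# toy composite period: 5 observations generated from known coefficients
K_vol = np.array([-0.2, -0.1, 0.0, 0.1, 0.2])
K_geo = np.array([-1.2, -1.0, -0.9, -0.8, -0.7])
f_true = np.array([0.3, 0.05, -0.02])
R = f_true[0] + f_true[1] * K_vol + f_true[2] * K_geo
f_hat = fit_kernel_brdf(K_vol, K_geo, R)    # recovers f_true exactly
```

Once the coefficients are known, black-sky and white-sky albedos follow by integrating the kernels over the relevant angular domains.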
The methodology displayed in Figure 1 is similar to one recently developed for the PROBA-V sensor to obtain a 300-meter surface albedo product in the frame of the Copernicus Global Land Service [4].
Time series for 2018 of Directional-Hemispherical (AL-BB-DH) and Bi-Hemispherical (AL-BB-BH) shortwave albedo products at 10 meters resolution are shown in Figure 2, together with Sentinel-2 Level 2A data sets produced by the MAJA processing chain at CNES. Also reported are ground data from the two ICOS stations of Aurade and Lamasquere located near Toulouse and maintained by CESBIO. These sites are covered by crops (mainly maize, wheat, and sunflower) observed by albedometers mounted on a flux tower. Two comparable INRA stations close to the Mediterranean coast are also considered for validation. The cross-evaluation between satellite and in situ albedo products looks at the rendered seasonality. The HR sensor footprint also permits a comparison of the signal intensity. It follows that error analysis will help point out the possible role of BRDF sampling and aerosol on the accuracy assessment of spectral surface albedo.
Another method, based on the training of a Neural Network (NN) with PROSAIL simulations, is also considered. The method proved successful in obtaining a surface albedo product from the HR sensor FORMOSAT [5]. Such an approach is powerful for computing the surface albedo from a limited data set, but shows limits when extrapolating the data information, depending on the sampling quality. The NN will serve to establish relationships between the surface albedo estimated from the kernel-driven BRDF model with limited angular sampling and the true surface albedo. In both approaches, a consolidated method for normalizing and merging data is considered and applied [6].
Hitherto, a continuous validation of the surface albedo product based on the NN approach has been performed by INRA over various ecosystems of the Avignon-Crau-Camargue area for the reference cases of MODIS and PROBA-V, based on four ground-based measurement sites. Recently, a procedure for systematic accuracy assessment of surface albedo products was developed and implemented for both high and moderate spatial resolution sensors [7][8]. Some preliminary tests consisted in calculating the surface albedo from Sentinel-2A for the period 2016-2017, as a simple linear combination of the spectral reflectance values corrected for atmospheric effects. It turns out that a surface albedo product calculated in such a way is particularly sensitive to the acquisition geometry of the Sentinel-2 pixel. This feature was observed where 10-day routine acquisitions were obtained from two different Sentinel-2 orbits separated in time by three days, one scanning eastward and the other westward (zenith angle close to 8\({}^{\circ}\) for the two orbits and azimuth angles of 106\({}^{\circ}\) and 286\({}^{\circ}\), respectively). The two orbits provide albedo values that differ significantly on a systematic basis (pixel differences between 0.01 and 0.05 in absolute albedo units). Such deviations are explained in part by the use of the same linear combination for the different viewing directions; atmospheric correction could also have an impact. However, simulating the surface albedo from the NN using narrow-to-broadband linear relationships specific to each viewing geometry of Sentinel-2 improved the results with regard to temporal coherence and retrieval precision (see Figure 3).
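The narrow-to-broadband conversion discussed above is a linear combination of narrowband reflectances whose coefficients are fitted offline, e.g. from PROSAIL simulations. A minimal Python/numpy sketch with purely illustrative band values and weights (not the operational coefficients):

```python
import numpy as np

def narrow_to_broadband(reflectances, coeffs, intercept=0.0):
    """Broadband albedo as a linear combination of narrowband reflectances.
    The coefficients would be fitted from radiative-transfer simulations
    (e.g. PROSAIL); the values used below are purely illustrative."""
    return intercept + np.dot(coeffs, reflectances)

rho = np.array([0.08, 0.10, 0.25, 0.30])   # illustrative band reflectances
c = np.array([0.2, 0.2, 0.3, 0.3])         # illustrative conversion weights
alb = narrow_to_broadband(rho, c)
```

Making `c` (or the whole mapping, via an NN) depend on the viewing geometry is precisely what removed the systematic orbit-to-orbit differences noted above.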
Figure 1: Flow chart of the algorithm for BRDF model inversion and albedo determination.
Figure 3: Comparison of estimated albedo by comparison with ground-based measurement over two sites located at La Crau and in Avignon. Left hand: a unique linear combination. Right hand: a different linear combination for each geometry of measure.
Figure 2: Time series of HR surface albedo products over the ICOS stations of CESBIO.
## 3 Summary and Future Prospects
An initiative is under way to provide operational global estimates of the surface albedo fields (spectral and broadband). It is based on a combination of Sentinel-2A/-2B and Landsat observations, delivering products at a spatial resolution of 10 meters. This clearly offers great potential in agricultural areas. Despite the lack of thermal bands on Sentinel-2, it is anticipated that the surface albedo product quality will benefit from reliable cloud detection, since the high pixel resolution allows low broken clouds in particular to be discarded. Besides, the analysis will map snow-free and snow pixels distinctly, with different temporal sampling strategies, as well as pure bare-soil and vegetation albedo products.
The ongoing validation is based on a cross-comparison with ground networks such as the ICOS stations maintained by CESBIO in south-west France. It is also foreseen to degrade the high-resolution albedo to a spatial resolution that permits a fair comparison with consolidated surface albedo products from MODIS and PROBA-V in the frame of the Copernicus Global Land Service. It should be noted that the narrow-to-broadband conversion, based on numerical experiments with PROSAIL, represents an important step. The production of dynamic maps of the spectral surface albedo from Sentinel-2 bands, further converted into broadband in support of water and carbon studies, is foreseen. The product spatial resolution of 10 meters is fully compliant with crops in terms of description and validation, and the product should therefore offer new insights into agriculture at broad scale.
## Acknowledgments
The authors feel indebted to the French Space Agency CNES and TOSCA committee for supporting this work.
## References
* [1] Copernicus Global Land Service: http://land.copernicus.eu/global/
* [2] [PERSON], [PERSON], [PERSON] and [PERSON], A Multi-Temporal and Multi-Spectral Method to Estimate Aerosol Optical Thickness over Land, for the Atmospheric Correction of FormoSat-2, LandSat, VENuS and Sentinel-2 Images, _Remote Sensing of Environment_, 2015, 7, pp. 2668-2691; doi: 10.3390/rs0302668.
* [3] [PERSON], [PERSON], and [PERSON], 1992. A bi-directional reflectance model of the earth's surface for the correction of remote sensing data. _Journal of Geophysical Research_, pp. 97, 20455-20,468.
* [4] [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], and [PERSON], Surface albedo and TOC-R 300m products from PROBA-V instrument in the framework of Copernicus Global land Service, _Remote Sensing of Environment_, in press.
* [5] [PERSON], [PERSON] [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], and [PERSON], [PERSON], 2008. Albedo and LAI estimates from FORMOSAT-2 data for crop monitoring. _Remote Sensing of Environment_. pp. 113, 716-729.
* [6] [PERSON], [PERSON], and [PERSON], Spectral normalization and fusion of optical sensors for the retrieval of BRDF and albedo: Application to VEGETATION, MODIS and MERIS data sets, _IEEE Transactions on Geoscience and Remote Sensing_, Vol. 44, 11, Part 1, pp. 3166-3179, 2006.
* [7] [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], 2015. The MODIS (collection V006) BRDF/Albedo product MCD43D: temporal course evaluated over agricultural landscape. _Remote Sensing of Environment_, pp. 170, 216-228.
* [8] [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], 2016. Uncertainty assessment of surface net radiation derived from Landsat images, _Remote sensing of Environment_, pp. 175, 251-270.
|
isprs
|
A SURFACE ALBEDO PRODUCT AT HIGH SPATIAL RESOLUTION FROM A COMBINATION OF SENTINEL-2 AND LANDSAT-8 DATA: THE ROLE OF SURFACE RADIATIVE FORCING FROM AGRICULTURE AREAS AS A MAJOR CONTRIBUTOR TO AN ABATEMENT OF CARBON EMISSION
|
J.-L. Roujean, A. Olioso, E. Ceschia, O. Hagolle, M. Weiss, T. Tallec, A. Brut, M. Ferlicoq
|
https://doi.org/10.5194/isprs-archives-xlii-3-w6-59-2019
| 2,019
|
CC-BY
|
isprs/659b742d_b5b5_4cb4_beaa_da62f7df2dd7.md
|
# A Book Retrieval and Location System Based on Real-Scene 3D
[PERSON]\({}^{1,2,3,4}\)
[PERSON]\({}^{1}\)
[PERSON]\({}^{1}\)
[PERSON]\({}^{1}\)
[PERSON]\({}^{1}\)
\({}^{1}\)School of Geomatics and Urban Spatial Informatics, Beijing University of Civil Engineering and Architecture, Beijing, 102616
\({}^{2}\)Engineering Research Center of Representative Building and Architectural Heritage database, Ministry of Education, Beijing, 102616
\({}^{3}\)Key Laboratory for Urban Geomatics of Ministry of Natural Resources, Beijing, 102616
\({}^{4}\)Beijing Key Laboratory for Architectural Heritage Fine Reconstruction & Health Monitoring, Beijing, 102616
###### Abstract
With the development of urbanization, building structures are increasingly complex, and various moving objects, such as human beings, robots and unmanned aerial vehicles, often travel through indoor and outdoor 3D space, which puts forward higher requirements for accurate search and location in indoor and outdoor space. At present, most spatial location methods for indoor entities rely on 2D maps. However, the indoor environment is a complex 3D space, which increases the difficulty of the search process; in addition, a 2D map cannot accurately display the 3D spatial position of an entity. It is therefore difficult for 2D maps to support search and location in complex environments, and how to quickly and effectively carry out spatial location queries in complex indoor environments has become an urgent problem. Taking the library of Beijing University of Civil Engineering and Architecture as an example, this paper obtains the indoor 3D information of the library based on SLAM, processes and publishes the acquired 3D information in IndoorViewer, and uses its API in the book retrieval system. Finally, a book retrieval and location system based on real-scene 3D is implemented.
Trolley SLAM, IndoorViewer, POI, API, Book Retrieval +
Footnote †: Corresponding author
## 1 Introduction
With the development of mobile internet technology and the maturation of indoor positioning technology, people's demand for navigation applications has expanded from outdoor navigation to integrated indoor and outdoor navigation. How to accurately locate and navigate to a target in a complex indoor 3D environment has become an urgent problem for today's society. Though the 2D map is still the main means of indoor and outdoor navigation and location, its drawbacks, such as poor map representation, simple models, lack of intuitiveness, and inaccurate positioning of 3D targets, are particularly prominent in complex indoor space. There is no doubt that 2D navigation cannot meet the actual positioning needs of various moving objects, such as humans, unmanned aerial vehicles and robots, and cannot cope with increasingly complex indoor spaces ([PERSON] et al. 2018). 2D maps cannot accurately represent the location of a target in complex 3D space, and it is difficult for them to satisfy the user's spatial search and location requirements in complex environments.
Considering the contradiction between the rapid expansion of GIS applications from outdoor to indoor and the lag of theory and technical methods ([PERSON], 2012), and aiming at the difficulty teachers and students face in finding books in the library, this paper takes the library of Beijing University of Civil Engineering and Architecture as an example. SLAM technology is used to acquire the indoor spatial data of the library, including point cloud data and 3D real-scene images. The acquired data are processed in IndoorViewer, POIs are established for the bookshelves of the library, and the spatial location of books is determined according to the correspondence between books and bookshelves, so as to realize the path planning function. In the book retrieval system, the IndoorViewer API is used to realize a book retrieval and positioning system based on real-scene 3D.
## 2 Principles and Methods
### Data acquisition method based on trolley SLAM
SLAM (Simultaneous Localization and Mapping), also known as CML (Concurrent Mapping and Localization), is a synchronous positioning and mapping technology commonly used in robotics. On the one hand, it relies on the created map information for self-positioning; on the other hand, it updates the map according to the positioning results ([PERSON] et al. 2013). When a robot is in an unknown environment, it can use SLAM to move while drawing a complete map of the environment, so that it can travel through the indoor environment effectively. SLAM can also be applied to acquire indoor 3D information ([PERSON] et al. 2016): scanning indoor space with a trolley SLAM system ([PERSON] et al. 2013) can obtain high-precision 3D real-scene images and point cloud data. The trolley SLAM system uses the Gmapping algorithm ([PERSON] et al. 2007), which combines a vertical 2D laser scanner to obtain indoor 3D information and incorporates high-precision wheel odometry; experimental results show that the pose error is less than 2 cm ([PERSON] et al. 2018). This paper used the NavVis M3 scanning trolley (Figure 1) as the scanning tool to collect indoor 3D information in the library of Beijing University of Civil Engineering and Architecture.
#### 2.1.1 Scanning principle
In this experiment, the NavVis M3 indoor scanning trolley is used as the scanning tool. It is equipped with six cameras of 16 million pixels, a horizontal laser scanner and two vertical laser scanners ([PERSON] et al. 2019). During the scanning process, the trolley takes six images after moving on for a while, and the images are stitched together to synthesize 3D real-scene images. As it travels, the laser scanner continuously captures the scene within a 30-meter range and a 270° field of view in order to fully acquire the indoor 3D information; a localization algorithm applied to the obtained data locates the trolley itself, provides the data required for map construction, and then generates point cloud data.
The obtained data can be processed to produce a 3D real-scene map and an indoor 2D floor plan, and the point cloud data are available for spatial analysis and modeling ([PERSON]).
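As a rough sketch of how such a scan becomes a point cloud (illustrative code, not the NavVis pipeline), each beam's range and bearing, combined with the trolley pose estimated by the localization algorithm, yields a world-frame point:

```python
import math

def scan_to_points(ranges, angle_min, angle_step, pose):
    """Convert a 2D laser scan (ranges in meters, bearings in radians)
    into world-frame (x, y) points, given the trolley pose (x, y, heading).
    Function and parameter names are illustrative assumptions."""
    px, py, heading = pose
    points = []
    for i, r in enumerate(ranges):
        theta = heading + angle_min + i * angle_step
        points.append((px + r * math.cos(theta), py + r * math.sin(theta)))
    return points

# A 270-degree field of view as on the trolley, reduced to 3 beams
pts = scan_to_points(
    ranges=[2.0, 1.5, 2.5],
    angle_min=math.radians(-135),
    angle_step=math.radians(135),
    pose=(0.0, 0.0, 0.0),
)
```

Accumulating such points over every pose along the trolley's trajectory is what produces the dense point cloud used later for registration and modeling.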
#### 2.1.2 Scanning method
The scanning scope of the experiment is the library of Beijing University of Civil Engineering and Architecture, which has a total of 8 floors (including the underground floor). The experiment mainly scanned the first floor, the book storage floors (the second to fifth floors) and the university's history museum (the sixth floor).
Before starting the scanning, it is necessary to plan the scanning path and devise different scanning schemes based on the internal structure of each floor of the library. The planning of the scanning path requires consideration of the following factors:
1. In order to improve the accuracy of the data, the scanning route should form closed loops; parts of the path need to overlap so that registration can be achieved using the route intersections.
2. Because of the variable character of the indoor environment (for example, rearranging the library's tables and chairs changes the spatial layout), the scanning should be subdivided in order to reduce the workload of future re-scanning.
3. Before scanning, it is necessary to measure the width of the indoor passages to confirm whether the scanner can pass, which provides a reference for planning the scanning path.
4. Scanning needs to be done floor by floor: when a floor scan is completed, the scan should be ended and the scanning trolley moved to the next target floor.
5. The maximum time for a single scan is 45 minutes; otherwise the best results cannot be achieved.
After starting the scanner and setting the relevant parameters, it needs to be positioned: rotate the trolley slowly for two laps so that it can determine its own position. During the scanning process, the trolley needs to be moved slowly (no more than 6 m/s) and at an even speed. Every 1 meter of travel the trolley beeps; it should then be stopped to take images of the surroundings. During the scanning process, attention must be paid to the following issues:
1. The trolley cannot travel too fast, so as not to affect the quality of the images.
2. The trolley needs to be kept at a distance of about 1 meter from the surrounding walls, otherwise the images will not be effective; too great a distance from the wall reduces the fineness of the image ([PERSON], 2018).
3. While scanning in a broad space, the route should follow the planned rows and columns. In addition, while walking along the closed-loop route, the laser scanner should avoid losing its reference points.
4. Obtain high-quality three-dimensional images at route corners by shooting manually multiple times.
5. Take care to scan the elevators and to connect the upper and lower floors near the stairs, so that a navigation network can easily be built.
6. When obstacles prevent normal passage, end the current scan and start again after passing them.
7. During the scanning process, if pedestrians are moving nearby, pause the scan and wait for them to pass before continuing.
8. If the operator is tall, be careful not to capture yourself in the sensors.
### IndoorViewer based on the data processing
Based on trolley SLAM, this article obtains the spatial data of the library, including laser scanner data and real-scene images. The data then need to be processed with the NavVis software: first, the raw data, published data and network data are preprocessed; next, the raw point cloud data are registered in IndoorViewer (relative registration of data units and global registration); then the IndoorViewer navigation diagrams and cloud maps are created to support the subsequent indoor path navigation; finally, the processed data are uploaded to the network side and published.
#### 2.2.1 About IndoorViewer
IndoorViewer is NavVis' browser-based application for the visualization, digital enrichment and navigation of indoor spaces. The core features of the web-based IndoorViewer client are:
1. Browsing 360 deg panorama views of an indoor space mapped with NavVis technology.
2. Browsing, creating, and editing points of interest (POI): right-click anywhere to create a POI in 3D space.
3. Route planning in 3D between any locations in a NavVis-mapped indoor space.
4. Point cloud display and point cloud alignment tools for dataset organization.
IndoorViewer helps users get information about public places and business spaces and form a first impression; users can search for indoor destinations in advance and get detailed walking directions in the interior space.
Figure 1: NavVis M3 Indoor scanning vehicle
Figure 2: Scanning process
IndoorViewer is based on HTML5 and WebGL technology, is available in all major web browsers, and runs on a variety of desktop and mobile systems with good compatibility.
#### 2.2.2 Data preprocessing
Data preprocessing is mainly divided into raw data preprocessing, network data preprocessing and publishing data preprocessing. Raw data preprocessing requires processing all raw data in the target directory and preserving the results, then starting post-processing for dynamic object deletion on all recorded datasets (via the script called Navvisitast-postprocessing sh), using the filter-dynamic-objects option to delete objects that moved during a scan (e.g. pedestrians walking around). Publishing data preprocessing and network data preprocessing mainly consist of creating an instance of the project and loading the scanned data units into the instance.
After the instance is registered and logged in, the IndoorViewer interface displays a number of administrative icons; clicking on the management settings displays the actual area properties of the current instance. At this point, the data preprocessing work is complete.
#### 2.2.3 Point cloud registration
The aim of point cloud registration is to match and associate the point cloud datasets so that the indoor space is correctly connected between floors, between indoor and outdoor, and between its own geographical location and the correct location on the global map. It includes relative registration of data units (dataset alignment) and global registration (geo-registration).
Relative registration: in IndoorViewer, the registration interface is opened. It is divided into four views (building top view, building point cloud data, a side view from north to south and a side view from east to west). First, a data unit is specified as the reference data unit, so that all other data units rotate around the starting point of the selected unit as the origin; using the angle rotation and axial panning options in the Transform menu, combined with the four views, precise point cloud registration is performed. In this article, the elevator shafts and stairs of each floor are used as references for registration and splicing, the projection overlap of each floor in the top view is checked, and the relative registration of each data unit in the library is completed by careful verification and comparison.
Global registration: based on OSM (OpenStreetMap), the main purpose of this registration step is to set the position in the global coordinate system, paving the way for the subsequent generation of navigation diagrams and cloud maps. Because of OSM's limited level of detail, Google Maps and other third-party platforms are used for auxiliary positioning [11]. Using Google Maps as a comparison reference, the previously aligned set of data units is dragged and dropped onto the corresponding geographic location in OSM; the matched point cloud is shown in the figure:
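The rotate-and-pan alignment of one data unit against a reference can be sketched as a plan-view rigid transform (a simplified stand-in for the Transform menu operations, not IndoorViewer code; all names and values are illustrative):

```python
import math

def rigid_transform(points, angle_deg, tx, ty):
    """Rotate plan-view (x, y) points about the origin by angle_deg,
    then translate by (tx, ty): the basic operation behind aligning
    one scanned data unit against the reference unit."""
    a = math.radians(angle_deg)
    cos_a, sin_a = math.cos(a), math.sin(a)
    return [
        (cos_a * x - sin_a * y + tx, sin_a * x + cos_a * y + ty)
        for x, y in points
    ]

# Align a unit that needs a 90-degree rotation and a (2, 3) shift,
# e.g. to bring its elevator shaft onto the reference unit's shaft.
aligned = rigid_transform([(1.0, 0.0), (0.0, 1.0)], angle_deg=90, tx=2.0, ty=3.0)
```

Relative registration applies such a transform per data unit within the building frame; global registration applies one more transform taking the whole aligned set into map coordinates.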
#### 2.2.4 Create navigation diagrams and cloud maps
Two types of maps are displayed in the IndoorViewer interface: navigation diagrams and cloud maps. The navigation diagram shows the possible path network to a position, in the form of a road network over the three-dimensional real scene. The cloud map is a two-dimensional floor plan of the scanned area, used to understand the plane extent of the building.
This article uses the registration parameter file obtained by the previous point cloud registration to create a navigation chart and a cloud map.
#### 2.2.5 Data publishing
After completing the navigation diagram and cloud map creation, the data need to be uploaded to the network server and published. The steps are as follows:
1. Log on to the server and create a project directory on the network server.
2. Return to the native command line, or reopen a command window.
3. Use the rsync command to transfer all data files and XML files.
4. Create IndoorViewer instances on the network server and publish the data.
The results of the network release of the final indoor three-dimensional reality map are shown in the figure:
Figure 4: Point cloud after registration
Figure 5: Navigation map after registration
Figure 3: Relative registration of data unit
### The Indoor path planning based on IndoorViewer
#### 2.3.1 Add points of interest
Points of Interest (POI) is a term from geographic information systems that generally refers to all geographical objects that can be abstracted as points, especially those closely related to people's lives, such as stairways, library bookshelves and toilets. Each point of interest contains four parts: name, category, coordinates and description. The main purpose of a point of interest is to describe the address of a thing or an event; it can greatly enhance the ability to describe and query the location of a thing or an event, and improve the accuracy and speed of geographic positioning.
This paper adds points of interest to the main feature points in the library (elevators, stairways, bookshelves, toilets, and self-service book lending machines). Among them, adding bookshelf points of interest is particularly important: the database of the book retrieval system provides books, categories, corresponding bookshelves and other information. In order to find the spatial location of a book in IndoorViewer, a point of interest must be added to each level of each bookshelf, with the relevant information attached, to achieve the spatial positioning of the book.
In IndoorViewer, we can right-click the target position in the 3D real-scene map and select "create interest point". The interface for editing the interest point is shown in the figure below:
The steps for creating POI are as follows:
1. Input the English name of the POI;
2. Select categories for newly created POI;
3. Add a description to the text box;
4. Select permission group;
5. Click \"save\".
Through the POI-adding function of IndoorViewer, this paper adds POIs to the bookshelves, elevators, stairs, toilets and other features of the library of Beijing University of Civil Engineering and Architecture, which establishes the spatial correspondence and lays the foundation for the subsequent path planning work.
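The information attached to each POI here can be modeled as a small record keyed by shelf code, so the retrieval system can map a book record to a spatial location. The field names and shelf codes below are illustrative assumptions, not the IndoorViewer schema:

```python
from dataclasses import dataclass

@dataclass
class POI:
    """One point of interest: name, category, coordinates, description."""
    name: str
    category: str          # e.g. "bookshelf", "elevator", "toilet"
    position: tuple        # (x, y, floor) in the registered map frame
    description: str = ""

# Index bookshelf POIs by shelf code so the retrieval system can map
# a book record (book -> shelf code) to a spatial location.
shelf_index = {
    "A-3-12": POI("Shelf A-3-12", "bookshelf", (12.4, 33.1, 3), "GIS books"),
}
target = shelf_index["A-3-12"].position
```

Looking up a shelf code this way yields the coordinates that the path planning step then uses as the navigation end point.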
#### 2.3.2 Path planning method
After creating POI, this paper uses the path planning function of Indoor Viewer to navigate the path between two interest points in the library.
After activating the path planning function in the IndoorViewer add-on module, each interest point panel offers a "path planning" option. After selecting the corresponding starting point (such as the library gate) and end point (such as a bookshelf), the system automatically calculates the planned path and related navigation information from the navigation node network, and uses a chain of direction arrows to represent the navigation path in the three-dimensional real-scene map; the two-dimensional plan map in the lower right corner also shows the navigation path, giving users an intuitive 2D-3D linkage. In addition, the path planning panel in the upper left corner clearly shows the distance between the two POIs and the walking time required.
Through the path planning function, users can plan and navigate, in the three-dimensional scene and cloud map, a path from their own location to the bookshelf corresponding to the desired book, and obtain information such as the distance between the start and end points and the time required to reach the end point, thereby obtaining more effective indoor navigation assistance.
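Routing over a navigation node network of this kind can be sketched with Dijkstra's algorithm; the node names and edge lengths below are invented for illustration, since IndoorViewer's internal routing is not exposed:

```python
import heapq

def shortest_path(graph, start, goal):
    """Dijkstra over a navigation node network.
    graph: {node: [(neighbor, distance_m), ...]}"""
    dist, prev = {start: 0.0}, {}
    queue = [(0.0, start)]
    while queue:
        d, node = heapq.heappop(queue)
        if node == goal:
            break
        if d > dist.get(node, float("inf")):
            continue  # stale queue entry
        for nbr, w in graph.get(node, []):
            nd = d + w
            if nd < dist.get(nbr, float("inf")):
                dist[nbr], prev[nbr] = nd, node
                heapq.heappush(queue, (nd, nbr))
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1], dist[goal]

# Library gate -> stairs -> floor-3 corridor -> shelf A-3-12 (invented data)
graph = {
    "gate": [("stairs", 20.0)],
    "stairs": [("corridor3", 15.0), ("gate", 20.0)],
    "corridor3": [("shelf_A312", 8.0), ("stairs", 15.0)],
    "shelf_A312": [],
}
route, meters = shortest_path(graph, "gate", "shelf_A312")
```

The distance shown in the path planning panel corresponds to the accumulated edge length; a walking-time estimate then follows from dividing it by an assumed walking speed.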
Figure 8: Adds POI to the bookshelf
Figure 10: Created data sets and POI
Figure 7: Results of live map release
Figure 9: Interest Point Editing Interface
### Calling the IndoorViewer API in the book retrieval system
An API (Application Programming Interface) is a set of predefined functions designed to give applications and developers access to a set of routines of certain software or hardware without accessing the source code or understanding the details of the internal working mechanisms.
In this paper, two methods of embedding the IndoorViewer API into the book retrieval system are designed, using an IFrame and the JavaScript API respectively, and the two are compared.
1. Embedding with an IFrame: embedding IndoorViewer with an IFrame HTML element is the easiest way. The IFrame can be freely positioned on the embedding page, just like a separate website. When embedded in an IFrame, IndoorViewer is displayed as a window in the view. The effect is shown as follows:
2. Embedding with the JavaScript API: embedding IndoorViewer with the JavaScript API provides great flexibility. In this way, the library retrieval system does not need to be hosted in the same domain as IndoorViewer; it only needs to include and create an IndoorViewer object.
Finally, this paper realizes a book retrieval and positioning system based on three-dimensional real scene. The system can achieve the following functions:
1. After users enter the system, they can retrieve books through the input box provided by the system. The retrieval system identifies the required books through identical or similar labels, and calculates user similarity from shared labels to provide book recommendations: if the books the user needs are not found, or the books have been borrowed, relevant recommended books are displayed ([PERSON], 2018).
2. After finding the books the user needs, the system obtains the corresponding bookshelf through the database, invokes the IndoorViewer interest point query function to obtain the spatial location of the bookshelf, and then combines the path planning function with indoor positioning technology to provide navigation for the user.
3. The user can switch the real-scene image by clicking on the direction arrows of the three-dimensional real-scene map. In addition, the user can click on a position in the two-dimensional plan map to switch the three-dimensional real-scene image. By combining the three-dimensional real-scene image with the two-dimensional plan map, the user can find the position of the bookshelf where the book is located, thus realizing the integration of the three-dimensional real scene and the two-dimensional map.
4. Combining the three-dimensional image and the two-dimensional plan, the system provides a visiting guide for the library. Through the mobile terminal, users can learn the distribution of the library's books, the functional zoning of each floor, and the location of toilets and stairwells.
5. An administrator account is provided to manage users, books and navigation paths. If an elevator fails, the administrator can mark it as an obstacle in the navigation network; the path analysis will then bypass the obstacle and find alternative routes.
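The label-based matching and recommendation in function 1 can be sketched with a Jaccard similarity over tag sets (purely illustrative; the exact scoring of the cited system is not specified):

```python
def jaccard(tags_a: set, tags_b: set) -> float:
    """Similarity between two tag sets: |intersection| / |union|."""
    if not tags_a and not tags_b:
        return 0.0
    return len(tags_a & tags_b) / len(tags_a | tags_b)

def recommend(query_tags, catalog, top_n=3):
    """Rank catalog entries ({title: tag set}) by tag similarity."""
    scored = sorted(
        catalog.items(),
        key=lambda item: jaccard(query_tags, item[1]),
        reverse=True,
    )
    return [title for title, _ in scored[:top_n]]

# Invented mini-catalog for illustration
books = {
    "GIS Principles": {"gis", "maps", "spatial"},
    "Indoor SLAM": {"slam", "indoor", "spatial"},
    "Cooking 101": {"food"},
}
top = recommend({"spatial", "indoor"}, books, top_n=2)
```

When the requested title is unavailable, the top-ranked remaining entries would serve as the "relevant recommended books" the system displays.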
## 3 Conclusion
Aiming at the problem of "difficulty finding books" in libraries, and in combination with the hot topic of indoor navigation and positioning technology, this paper obtains the three-dimensional spatial data of the library of Beijing University of Civil Engineering and Architecture using trolley SLAM, preprocesses the data, performs point cloud and global registration, generates the navigation map and cloud map, and publishes them on the network. POIs are added to important library features, including the bookshelves, in IndoorViewer, and the path planning function is realized through the add-on module. By calling the IndoorViewer API, IndoorViewer's functionality is embedded in the book retrieval system, and a book retrieval and positioning system based on the three-dimensional real scene is realized.

Figure 11: Path planning in IndoorViewer

Figure 12: Implementing the effect by invoking the API with an IFrame

Figure 13: Real-scene three-dimensional book retrieval system
Compared with other book retrieval systems, this paper uses the three-dimensional real scene to locate books for the first time, expanding book location from a two-dimensional plan map to three-dimensional space and enabling users to find books quickly and effectively in the complex indoor environment of the library. The linkage and switching between the three-dimensional scene and the two-dimensional plan map, and the combination of the three-dimensional scene with the navigation semantic network, greatly improve the accuracy and intuitiveness of query results and create a friendlier user experience.
The design method of this real-scene three-dimensional retrieval and positioning system has broad application prospects: combined with indoor navigation technology, it achieves real-time navigation by acquiring the user's location; it applies to large public places such as shopping malls, airports and stations, providing target location query services in complex spaces; and it can support integrated indoor and outdoor navigation of intelligent robots and UAVs, promoting developments in logistics, transportation and emergency response.
## References
* [1] [PERSON]. Modeling method of indoor three-dimensional navigation in shopping malls [D]. Nanjing Normal University, 2012.
* [2] [PERSON], [PERSON], [PERSON], et al. Integrated three-dimensional navigation path planning for indoor and outdoor digital campus [J], _Surveying and Mapping Bulletin_, 2018 (10).
* [3] [PERSON], [PERSON], [PERSON]. Summary of simultaneous localization and map creation based on graph optimization [J], _Robot_,2013, 35 (4).
* [4] [PERSON], [PERSON]. Indoor mobile measurement system based on SLAM and its application [J]. _Surveying and mapping bulletin_, 2016 (06): 146-14
* [5] [PERSON], SCHROTH [PERSON], [PERSON] [PERSON], et al. TUMindoor: an extensive image and point cloud dataset for visual indoor localization and mapping [C]//IEEE International Conference on Image Processing. Orlando: IEEE, 2013.
* [6] [PERSON] [PERSON], [PERSON], [PERSON]. Improved techniques for grid mapping with rao-blackwelled particle filters [J]. _IEEE Transactions on Robotics_, 2007, 23 (1): 34-46.
* [7] [PERSON], [PERSON], [PERSON], et al. Summary of SLAM Indoor 3D Reconstruction Technology [J], _Surveying and Mapping Science_, 2018 (07).
* [8] [PERSON], [PERSON], [PERSON]. SLAM indoor real-scene three-dimensional mapping and its application [J]. _Surveying and mapping bulletin_, 2019 (01).
* [9] [PERSON]. http://www.wtechgnss.com/
* [10] [PERSON]. Indoor scene three-dimensional mapping based on SLAM trolley [D], Beijing University of Architecture, 2018.
* [11] [PERSON]. Design and Research of Label-based Personalized Book Recommendation System [D], Shandong Normal University, 2018.
|
isprs
|
A BOOK RETRIEVAL AND LOCATION SYSTEM BASED ON REAL-SCENE 3D
|
S. Wei, B. Li, Z. Guo, S. Guo, L. Cheng
|
https://doi.org/10.5194/isprs-archives-xlii-2-w13-903-2019
| 2,019
|
CC-BY
|
isprs/16c68e21_ddce_429e_a2cf_18bfe6c22aa8.md
|
# Architectural Heritage at Risk:
the case of the Magnano hamlet (Piedmont, Italy)
[PERSON] 1, *, [PERSON] 2, [PERSON] 1
1 CNR-Institute of Heritage Science, Sesto Fiorentino, Italy - (fabio.fratini, silvia.rescic)@cnr.it
2 Politecnico di Torino, Dipartimento Architettura e Design - [EMAIL_ADDRESS]
###### Abstract
Architecture is the result of human work, whose actions do not end with its construction but inevitably follow one another over time, responding to the various natural and/or anthropic solicitations to which the building is subjected. The progressive change in social and economic needs, together with the lack of recognition of the value of the inherited architectural heritage, causes increasing pressure both on individual historic buildings and on ancient sites. Considered obsolete and incapable of responding to current performance and functional requirements, the architectural heritage is often abandoned or subjected to radical transformations, causing the irremediable loss of valuable cultural resources. The examination of the current state of conservation of the Magnano hamlet is interesting for the purposes of 'Risk in architectural heritage'. It is a defensive settlement built at the beginning of the XIIIth century and characterized by the presence of building cells lying on the crest of a hill and surrounded by walls. Some of these cells are today totally abandoned and, therefore, affected by decay phenomena due to the aggressive action exerted by environmental agents; others have undergone interventions that, although aimed at allowing their possible re-use, have led to the partial or total loss of the identity features of this heritage. This paper intends to focus on the analysis of the interventions carried out, examining the methodologies adopted and some critical issues, in the belief that only by searching for compatible solutions in terms of materials, structures and functionality is it possible to become promoters of an effective conservation of the architectural heritage.
Architectural heritage, Risk, Preservation, Compatibility, Identity, Re-use
Footnote †: Corresponding author
## 1 Introduction
Article 30 of the Code of Cultural Heritage (2004) establishes that the State, the regions, the territorial public authorities, as well as private owners, have to guarantee the safety and conservation of their cultural heritage properties.
According to art. 29 of the aforementioned Code, conservation should be ensured \"through a coherent, coordinated and planned study, prevention, maintenance and restoration\". The latter is defined as \"direct intervention on the asset through a complex of operations aimed at both its material preservation and recovery, and at the protection and transmission of its cultural values\". However, the analysis of the current state of conservation of the buildings belonging to the Magnano hamlet (Piedmont, Italy) highlights how these purposes have not always been pursued. Although the buildings of the hamlet have been subject to regulations and restrictions since the early twentieth century (not. Min. 09/06/1908), the architectural assets have sometimes undergone interventions that have modified their identity and compromised their conservation over time. The progressive change of social and economic needs, together with the lack of recognition of the cultural value of the assets inherited from the past, determines increasing pressure both on single historical architectures and on ancient sites. Although the re-use of buildings, through the insertion of a new function or the continuation of the original one, is to be considered a means through which to ensure their conservation, the intervention on the built heritage often determines its radical transformation ([PERSON] et al., 2019; Mileto, Vegas, 2007). The desire to make the architectural artefacts responsive to changed functional and/or performance requirements determines the irreparable alteration of the testimonial value of valuable cultural resources. Existing buildings are made suitable for present-day use. Regrettably, their functional and environmental performances are often improved while forgetting their historical and cultural value. Certainly, as [PERSON] stated, architecture is a collective work, built over the long term thanks to the commitment of many generations.
It is to be considered a \"living work\" and, as such, destined to undergo continuous changes over its existence. The signs left by the modifications that may be needed from time to time testify to its evolution ([PERSON], 2007). Projects on existing architectures should be drawn up taking care to control the needed changes. Interventions should be carried out promoting the objective of quality in contemporary additions without endangering the cultural value of the assets (_Framework Convention on the Value of Cultural Heritage for Society_, 2005).
The interventions should provide for the adaptation of the architectural artefacts to new functional and performance requirements, excluding interventions that could determine the alteration of the peculiar characteristics of the buildings ([PERSON], 2019; Vegas, Mileto, 2015; AA.VV. 2014). It is in fact a matter of minimizing the modifications and/or destructions and of planning all the new interventions taking care to respect the signs of the past ([PERSON], 2020; [PERSON], 2018; [PERSON] et al., 2016).
This contribution aims at illustrating the results of an investigation conducted in the Magnano hamlet in order to highlight discrepancies between theoretical issues, regulatory requirements and operating practices. The latter testify to a lack of respect for the identity characteristics of the architectural artefacts, which are altered without identifying appropriate compromise solutions between functional requirements and conservation issues.
## 2 The Magnano Hamlet
### Construction features and current state of conservation
The Magnano hamlet, founded in 1204 ([PERSON], 1978; [PERSON], 1976), is a defensive settlement, created with the aim of guaranteeing the defence of the inhabitants and their assets from the sacking by armed gangs that crossed the countryside in medieval times. It is located in a dominant position on the external ridge of the Serra d'Ivrea, a relief belonging to the morainic amphitheatre formed by the debris transported to the Po valley by the large glacier that ran through the Dora Baltea valley during the Quaternary glaciations (Figures 1 and 2).
The hamlet is characterized by the presence of building cells which, lying on the crest of a hill, are surrounded by defensive walls.
A tower-door equipped with a single arched driveway guarantees access to the hamlet. The first part of the urban core is characterized by the presence of a single road axis on which few surviving buildings stand, mostly extensively transformed. Continuing east, the road branches off into three parallel streets, located at different altitudes (Figure 3).
These road axes delimit compact blocks, made up of small buildings. Where the building cell is located between two parallel axes, the particular shape of the land has been exploited to ensure direct access by road to both rooms that constitute the single construction (Figure 4).
The buildings have load-bearing walls made of stone and brick. The floors have a double wooden structure. The main beams are positioned parallel to the facade and act as a chain for the side walls.
The openings are mostly delimited by fired bricks, attributable to interventions carried out in the XVth century on the existing wall structure ([PERSON], 1978). Sometimes, bricks are also used for decorative purposes: elements such as cornices and notches adorn some buildings' facades (Figure 5).
Except for some cases, the state of conservation of the constructions is rather critical. Those no longer used are progressively turning into ruins. Others underwent interventions which, although aimed at allowing their possible re-use, led to the partial or total loss of their identity.
Abandoned for years, some buildings are without roof and their stability is guaranteed through temporary static aids, positioned by the municipal administration in order to avoid their collapse (Figure 6).
Generally, the buildings used for residential purposes (permanent or temporary) are better preserved. However, because of changing needs, lifestyles and aesthetic tastes, many interventions have been carried out, often significantly altering their identity characteristics. Subtraction, replacement and addition of parts were carried out without paying enough attention to the elements
Figure 4: Elevation and section of some building cells (after [PERSON], 1978).
Figure 3: Planimetric layout of the hamlet in 1780 (after [PERSON], 1978).
Figure 2: Position of the hamlet in the Magnano village (after Google Earth, modified).
connoting the architectural artefacts. The consequence is the loss of their historical value. In some cases, interventions of integration and/or reintegration of the masonry were carried out attempting, without success, to re-propose materials and construction typologies similar to the original ones. The window frames have often been replaced, changing the typologies and sometimes also the materials. In order to pursue higher energy performance, old roofs have been replaced by new and thicker ones, altering the original proportions of the buildings.
### Analysis of the interventions carried out in the Magnano hamlet
The interventions conducted on some buildings lead us to suppose a lack of shared guidelines aimed at fostering the design of appropriate actions. As a matter of fact, in many cases, actions insufficiently respectful of the historical and architectural values have been carried out, causing damage and compromising the conservation of the assets through the use of inappropriate methodologies and incompatible materials ([PERSON], [PERSON], 2015). Some of these interventions are illustrated below, highlighting some of the critical issues. The following table (Table 1) identifies the main causes of the lack of conservation of the buildings and the case studies examined.
#### 2.2.1 Adaptation of the historical built:
Over time, some of the buildings of the hamlet have undergone interventions aimed at ensuring their re-use for residential purposes. They provided for the restoration of missing parts and the reshaping of the buildings in relation to the aesthetic tastes and functional needs of the owners. Specific conservation requirements of the cultural heritage have not been sufficiently taken into account, and interventions compromising its integrity and inherent values have been carried out. They led to a significant transformation of the buildings, which often lost their specific features. The only recognised value was the use value, which led to the adaptation of the built heritage to changing conditions in order to make it able to suit the new needs. The insertion of balconies with reinforced concrete structure (Figure 7); the insertion of stone slab coverings along the baseboards and of ceramic tiles on the external floors (Figure 8); the addition of wooden battens in the overhang of the roof (Figure 9); the replacement of windows and the application of finishing plasters testify to the lack of attention towards the architectural heritage, which is altered without paying sufficient attention to its historical-cultural as well as formal characteristics. Integrations and replacements are carried out compromising the preservation of the architectural artefacts both at the typological and construction level and causing the irremediable loss of their identity.
\begin{table}
\begin{tabular}{|l|l|} \hline
**Cause of the lack of conservation** & **Case study** \\ \hline Adaptation of the historical built & Building D, E \\ \hline Incompatible materials and techniques & Building A, B, D, G \\ \hline Abandonment & Building C \\ \hline \end{tabular}
\end{table}
Table 1: Case study localization.
Figure 5: Brick decorative elements (credits [PERSON]).
Figure 6: Building C - Temporary consolidation intervention carried out by the municipality (credits [PERSON]).
The same applies to restoration interventions that involve the reintegration of destroyed parts of the buildings such as walls, roofs, windows. Although the materials used are similar to the original ones, the treatment reserved for them is totally different. The methods of laying the stone and the bedding mortar change; the thickness of the roof is increased, presumably in relation to the desire to guarantee higher energy performance (Figure 10).
#### 2.2.2 Incompatible materials and technique
Numerous inappropriate punctual interventions have also been carried out. Restoration of the walls has often been performed using inadequate laying materials and methods. Particularly noteworthy is the widespread use of cement mortar not only for the bedding of bricks, but also for the reintegration of cracks and heavily eroded bricks (Figures 11 to 15).
Figure 11: Building B - Reintegration of the masonry adopting inappropriate materials (credits [PERSON]).
Figure 8: Building E - Baseboard in stone slabs and an external flooring in ceramic tiles, replacement of the windows and application of a new finishing layer (credits [PERSON]).
Figure 12: Building G - Improper intervention with cement mortar (credits [PERSON]).
Figure 7: Building E - Balcony with reinforced concrete structure (credits [PERSON]).
Figure 9: Building E - Wooden battens in the overhang of the roof (credits [PERSON]).
This intervention is incompatible not only from the mechanical point of view (elastic modulus of the whole), but also as regards the chemical (development of saline efflorescence) and physical compatibility (different behaviour towards water both in liquid and vapour form).
Nevertheless, in some cases, the interventions are characterized by a greater attention to conservation issues.
While attempting to improve the performance of the buildings and make them consistent with the changing needs of users, the external finishes, the characteristics of the openings, some fixtures, are preserved along with the identity of the architectural artefacts.
Interesting in this sense are the examples of Figures 16 and 17. In the first case, the intervention aimed at ensuring the re-use of the building was carried out without changing the characteristics of the masonry surface and of the roof, whose thickness was not altered.
In order to solve the problems due to the lack of light in the interior, windows were replaced adopting thin window frames (Figure 16).
Preservation of the external wall surfaces and their stratifications, as well as of the finishing elements (such as the access portal) characterise also the intervention shown in Figure 17.
## 3 Conclusions
The interventions carried out in the Magnano hamlet highlight how operations on the buildings sometimes seem to take into account mainly functional rather than conservative requests.
Figure 16: Building F - Intervention preserving both the external masonry surfaces and the roof (credits [PERSON]).
Figure 17: Building H - Preservation of the surface of the external wall and of the access portal (credits [PERSON]).
Figure 14: Building A - Reintegration of eroded bricks with cement mortar (credits [PERSON]).
Figure 13: Building A - Grouting with cement mortar (credits [PERSON]).
Figure 15: Building G - Filling of cracks with cement mortar (credits [PERSON]).
The need to intervene on the architectural heritage, transforming the constraints imposed by the latter into a design opportunity that enriches the restoration, raising the quality and increasing the opportunities for dialogue between the ancient building and our contemporary languages, is widely shared among architectural restoration theorists ([PERSON], 2004). However, the examination of operating practice reveals the risk still affecting the architectural heritage.
Replacement, reintegration and integration are carried out without paying enough attention to the constructions and their specific characteristics. Functional adaptation or energy efficiency operations are carried out favouring the achievement of high performance standards rather than the design of compatible interventions. It is not a matter of denying the possibility to intervene through transformations or additions, nor of promoting the absolute intangibility of the existing, but of advocating a congruent design that provides for a careful study of the needs expressed by users and of the possible alternatives for their satisfaction in agreement with the characteristics of the architecture ([PERSON], 2017). As stated by [PERSON], maybe the time has come to stop behaving violently by distributing slaps and to change our attitude, learning to use the secret of the caress, which is the precious gift of making things awaken ([PERSON], 2019). This result can be pursued by promoting both the acquisition of greater awareness of the historical-cultural value of the inherited architectural heritage and the realization of interventions respectful of its identity characteristics. As far as re-use is concerned, [PERSON] suggests choosing the new utility of the built heritage on the basis of what it can provide, minimizing the change and taking into account the recognised values ([PERSON], 2019). A coevolution approach rather than an adaptive one is promoted. When the introduction of a new, different utility is necessary, the requirements should be compliant with what the building can provide, respecting its testimonial value.
The article is the result of the authors' joint work. In particular, [PERSON] is the author of paragraph 1 and 2.1, [PERSON] and [PERSON] are authors of paragraph 2.2. The conclusions were written jointly.
## References
* [1]
* [2] AA.VV. 2014: VERSUS: Heritage for Tomorrow-Vernacular knowledge for sustainable architecture, [PERSON], [PERSON], [PERSON] (Eds), Firenze University Press.
* [3] [PERSON], 2019: Rural Architectural Characteristics and Conservation Issues of Aladdingly Village in Bursa (Turkey), [PERSON] et al. (eds.), Conservation of Architectural Heritage, Advances in Science, Technology & Innovation, O Springer Nature Switzerland AG 2019
* [4] Codice dei Beni Culturali e del Paesaggio, D. Lgs. 22 gennaio 2004, n. 42, art. 30
* [5] [PERSON], [PERSON], [PERSON] [PERSON]. 2016: Ancient stone masonry constructions\", in Edited by [PERSON] and [PERSON] \"Nonconventional and Vernacular Construction Materials: characterisation, properties and applications\", Elsevier-Woodhead Publishing, 301-332
* [6] [PERSON], 2002: \"[PERSON] alla prima crociata contro i \"restauri\"\", Ananke, 33, pp. 6-15.
* [7] [PERSON] 2019: Il segreto della carezza ovvero ideario di restauro timido. Nardini Editore, Firenze.
* Council of Europe. 2005: Framework Convention on the Value of Cultural Heritage for Society (https://www.coe.int/en/web/culture-and-heritage/faro-convention)
* [8] [PERSON] 2019: \"A coevolutionary approach to the re-use of built cultural heritage\", atti del 35\({}^{\circ}\) Convegno di studi su Scienza e Beni Culturali: \"Il patrimonio culturale in mutamento: le sfide del riuso\", Bressanone 1-5 luglio 2019, pp. 25-34.
* [9] [PERSON] [PERSON], [PERSON] [PERSON], [PERSON] [PERSON]. 2019: \"Re-use of a medieval tower between conservation and transformation\", atti del 35\({}^{\circ}\) Convegno di studi su Scienza e Beni Culturali: \"Il patrimonio culturale in mutamento: le sfide del riuso\", Bressanone 1-5 luglio 2019, pp. 411-420.
* [10] [PERSON] and [PERSON]. 2015: Guidelines for sustainable rehabilitation of the rural architecture, in \"Vernacular architecture: toward a sustainable future\", Mileto, Vegas, [PERSON] (Eds), Taylor & Francis Group, London, pp. 531-536
* [11] [PERSON] 2017: Compatibilita, in [PERSON], [PERSON]; Abbeceddario minimo. Cento voci per il restauro. Altralinea, Firenze.
* [12] [PERSON]. 2018. Conservation of Vernacular Dwellings. Matters of Authenticity and Sustainability. In: [PERSON], [PERSON], [PERSON]. (eds) 10th International Symposium on the Conservation of Monuments in the Mediterranean Basin. MONUBASIN 2017. [PERSON]
* [13] [PERSON], 2004: Conservazione e accessibilita. Il superamento delle barriere architettoniche negli edifici e nei siti storici. Arte Tipografica editrice, Napoli.
* [14] [PERSON]. 2020. Transformation Versus Preservation of Vernacular Architecture in Bali: A Lesson from _Bali Ago_ Villages. In: [PERSON], [PERSON]. (eds) Reframing the Vernacular: Politics, Semiotics, and Representation. [PERSON]
* [15] [PERSON], 1976: \"Fortificazioni collettive nei villaggi medievali dell'alta Italia: ricetti, ville-forti, recinti\", \"Bollettino storico-bibliografico subalpino\", XXIV, pp. 527-617.
* [16] [PERSON] 2018: L'architettura come opera aperta. Il tema dell'uso nel progetto di conservazione, ArcHistoR EXTRA 2.
* [17] [PERSON], 2007: Architettura e architeture, in [PERSON], [PERSON], [PERSON] (a cura di): Antico e Nuovo. Architettura e Architetture, Atti del Convegno (Venezia 31 marzo-3 aprile 2004), Il Poligrafico, Venezia, p. 23.
* [18] [PERSON], [PERSON], 2015: 0 km conservation, \"Vernacular architecture: toward a sustainable future\", [PERSON], Garcia Soriano & Cristini (Eds), Taylor & Francis Group, London, pp.737-740
* [19] [PERSON], [PERSON], 2007: Renovar conservando. Manual para la restaurcion de la arquitectura rural del Rincon de Ademuz. Manocomunidad Rincon de Ademuz.
* [20] [PERSON], 1978: I ricetti, difese collettive per gli uomini del contado nel Piemonte medioevale. Edialbra, Torino, pp. 154-161.
isprs | ARCHITECTURAL HERITAGE AT RISK: THE CASE OF THE MAGNANO HAMLET (PIEDMONT, ITALY) | F. Fratini, M. Mattone, S. Rescic | https://doi.org/10.5194/isprs-archives-xliv-m-1-2020-841-2020 | 2020 | CC-BY
A Novel Procedure for Generation of SAR-Derived ZTD Maps for Weather Prediction: Application to South Africa Use Case
[PERSON]
Corresponding author
[PERSON]
1 Politecnico di Milano - Dipartimento di Elettronica, Informazione e Bioingegneria (DEIB), Milan, Italy (monialeisa.molinari, marco.manzoni, naomi.petrushevsky, andrea.montigtamari)@polimi.it
[PERSON]
1 Politecnico di Milano - Dipartimento di Ingegneria Civile e Ambientale (DICA), Milan, Italy (giowana.venuti, agostinoniyonkuru.mreeni, alessandra.mascitelli)@polimi.it
[PERSON]. Guarnieri
1 Politecnico di Milano - Dipartimento di Elettronica, Informazione e Bioingegneria (DEIB), Milan, Italy (monialeisa.molinari, marco.manzoni, naomi.petrushevsky, andrea.montigtamari)@polimi.it
[PERSON]
2 Politecnico di Milano - Dipartimento di Ingegneria Civile e Ambientale (DICA), Milan, Italy (giowana.venuti, agostinoniyonkuru.mreeni, alessandra.mascitelli)@polimi.it
[PERSON]
2 Politecnico di Milano - Dipartimento di Ingegneria Civile e Ambientale (DICA), Milan, Italy (giowana.venuti, agostinoniyonkuru.mreeni, alessandra.mascitelli)@polimi.it
[PERSON]
3 Centro Internazionale in Monitoraggio Ambientale Research Foundation, Savona, Italy - [EMAIL_ADDRESS]
###### Abstract
The knowledge of tropospheric water vapor distribution can significantly improve the accuracy of Numerical Weather Prediction (NWP) models. The present work proposes an automatic and fast procedure for generating reliable water vapor products from the synergic use of Sentinel-1 Synthetic Aperture Radar (SAR) imagery and Global Navigation Satellite System (GNSS) observations. Moreover, a compression method able to drastically reduce, without significant accuracy loss, the dimension of the water vapor dataset has been implemented to facilitate sharing through cloud services. The activities have been carried out in the framework of the EU H2020 TWIGA project, aimed at providing water vapor maps at Technology Readiness Level 7.
SAR, GNSS, ZTD, water vapor, weather prediction, TWIGA
Footnote †: Corresponding author
## 1 Introduction
Numerical Weather Prediction models (NWPMs) currently represent the most important tool for weather forecasting and provide crucial information for the protection of lives and property and for the conscious management of human activities. Enhancing the performance of these models, especially for the prediction of heavy convective precipitation events, is still a challenging task.
It has been demonstrated in the literature that the knowledge of tropospheric water vapor distribution can significantly improve the accuracy of NWPMs forecasts.
The assimilation in NWPMs of water vapor observations derived from meteorological and Global Navigation Satellite System (GNSS) stations represents a well-established and effective technique ([PERSON] et al., 2016), and several studies showed its positive impact on the prediction of precipitation events ([PERSON] et al., 1993; [PERSON] et al., 2004; [PERSON] et al., 2007). Although characterized by high temporal resolution, GNSS observations are usually provided as sparse datasets due to the low density of available measurements. This means that they cannot comprehensively describe the spatial variations of water vapor, which in storm phenomena can occur within tens of meters ([PERSON], 2007).
Satellite observations can undoubtedly give a valuable and complementary contribution in this sense, as they can provide additional water vapor observations over broad areas. Of particular interest is the Synthetic Aperture Radar (SAR), which can exploit the SAR Interferometry (InSAR) technique to obtain water vapor distribution products characterized by high spatial resolution and high accuracy, about 2 mm ([PERSON] et al., 2016).
Experiments on the ingestion in NWPMs of water vapor data derived from the ENVISAT-ASAR satellite are reported by [PERSON] et al. (2015) and [PERSON] et al. (2016). The authors observed an improvement in forecasting weak to moderate precipitation.
In recent years, research has been increasing thanks to the availability of imagery from the Sentinel-1 satellites of the European Space Agency (ESA). In orbit since 2014-2016, the satellites have the advantage of guaranteeing the free availability of time series of SAR data with an unprecedented time coverage, i.e. up to 1-3 days. The high temporal resolution of water vapor variability information is crucial for obtaining benefits from the assimilation process.
Several investigations have been performed taking advantage of Sentinel-1 data ([PERSON] et al., 2018; [PERSON] et al., 2019; [PERSON] et al., 2019; [PERSON] et al., 2020), confirming the InSAR water vapor products potential in enhancing NWP models accuracy.
The present work fits within this context and has been carried out in the framework of TWIGA, the EU Horizon2020 project aimed at providing currently unavailable geo-information on weather, water, and climate for sub-Saharan Africa.
Here, an automatic and fast procedure integrating Sentinel-1 SAR imagery data with in situ sensors observations has been designed and tuned to obtain highly accurate, dense, and wide water vapor products, also called Zenith Total Delay (ZTD). These products will be supplied to local meteorological services to be ingested by NWP models and improve the prediction of heavy rainfall.
The paper is structured as follows. In Section 2, the adopted methodology for water vapor product generation is illustrated. Section 3 provides information about the use case area, the processed SAR imagery, and the obtained results. In the last section, conclusions are reported.
## 2 Method
The main steps for generating absolute ZTD maps from a stack of Single Look Complex (SLC) Sentinel-1 SAR images are outlined in Figure 1. The procedure combines into a unique processing chain both standard techniques and some novel contributions to enhance the quality of the estimated water vapor maps.
The proposed methodology exploits the availability of a stack of complex SAR images acquired over the same area at different epochs to estimate the difference in the optical path between radar and ground targets ([PERSON] et al., 2007).
The computed optical path variation can be due to a few contributions: i) the possible displacement of the target; ii) the different satellite view angle between the acquisitions, which generates local topographic effects; and iii) the changes in atmospheric condition (mainly the troposphere and the ionosphere).
As we are interested in estimating the tropospheric effect only, the other contributions, if not negligible, must be removed. This step is performed in the pre-processing phase, which prepares the SAR image stack for the interferometric process.
### Pre-processing
The pre-processing is mainly performed by exploiting existing tools in the free and open-source software SNAP (Sentinel Application Platform). It is based on well-known techniques such as co-registration ([PERSON] et al., 2006) and debursting ([PERSON], [PERSON], 2006).
The topographic contribution to the interferogram phases is compensated by using precise orbit information and a DEM of the scene.
The ionospheric contribution is estimated and then compensated by a novel method based on the split spectrum technique ([PERSON] et al., 2016). The estimation is performed jointly over the whole stack of images, reaching in this way a better accuracy.
### ZTD map estimation
The processing phase includes both the interferometric technique and all the steps required to obtain absolute ZTD maps.
The estimation of differential ZTD maps is usually performed using techniques that exploit the Permanent Scatterers (PS). The drawback of these approaches is that PS density is usually low in rural areas. Thus, the final water vapor products cannot be as dense as required. Our procedure implements the phase linking method ([PERSON], [PERSON], 2008) to extract differential ZTD dense maps by exploiting both PS and Distributed Scatterers (DS).
Given a stack of \(N\) coregistered SAR images, the phase linking technique generates all the possible \(N(N-1)/2\) interferograms.
The procedure iterates by minimizing, in each small window centered on the k-th target, the following cost function:

\[\hat{\phi}_{n}(P_{k})=\arg\min\;Re\Big(\sum_{m}\sum_{n}\sum_{k}I\big(P_{k-m,k-n}\big)\cdot e^{\,j\left(\hat{\phi}_{n}-\hat{\phi}_{m}\right)}\Big), \tag{1}\]

where \(\hat{\phi}_{n}(P_{k})\) = estimated phase, \(I\) = interferogram, \(P_{k}\) = k-th target.

Notice that the cost function involves only phase differences. Therefore, a constant can be added to all the terms \(\hat{\phi}_{n}(P)\) without changing the solution. In other words, for each pixel \(P_{k}\), all the phases can be estimated only up to a constant phase offset.
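The phase linking step can be illustrated with a small numerical sketch. The snippet below uses the well-known eigenvector approximation of phase linking (the leading eigenvector of the window coherence matrix carries the common phases) rather than the exact iterative minimizer of the cost function above; the array shapes and toy data are illustrative assumptions, not the actual TWIGA implementation.

```python
import numpy as np

def phase_linking(window_pixels):
    """Estimate one phase per acquisition from a stack of complex SAR
    pixels (shape: n_images x n_pixels) inside an estimation window.

    Eigenvector approximation: the phases are recoverable only up to a
    constant offset, so they are referenced to the first image."""
    n_pix = window_pixels.shape[1]
    # Sample covariance matrix across the window, normalized to coherence
    C = window_pixels @ window_pixels.conj().T / n_pix
    d = np.sqrt(np.abs(np.diag(C)))
    Gamma = C / np.outer(d, d)
    # Leading eigenvector of the coherence matrix carries the common phases
    _, v = np.linalg.eigh(Gamma)
    phi = np.angle(v[:, -1])
    return np.angle(np.exp(1j * (phi - phi[0])))  # reference to image 0

# Toy example: 6 images sharing a common reflectivity, with known phases
rng = np.random.default_rng(0)
true_phi = np.array([0.0, 0.3, -0.5, 0.8, 0.1, -0.2])
reflectivity = rng.normal(size=(1, 400)) + 1j * rng.normal(size=(1, 400))
pixels = np.exp(1j * true_phi)[:, None] * reflectivity
est = phase_linking(pixels)  # recovers true_phi in this fully coherent case
```

In the fully coherent toy case the coherence matrix is exactly rank one, so the estimate matches the simulated phases; with decorrelation noise the estimate degrades gracefully, which is why the paper iterates over a small window of many pixels.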
After applying the unwrapping technique ([PERSON], [PERSON], 1998), the estimated differential ZTD products need to be further enhanced. Due to the lack of millimetric accuracy in the satellite orbit determination, the pre-processing phase cannot completely remove the topographic contribution.
The joint exploitation of SAR data and GNSS atmospheric products allows for the correction of this error. It is worth noting that the synergic use of satellites (SAR) and in-situ (GNSS) observations is one of the significant innovations in TWIGA paradigm.
The orbital parameters are estimated by matching the SAR phases evaluated in correspondence of the GNSS stations and the GNSS
Figure 1: Implemented procedure to generate ZTD maps from a stack of SAR data.
pseudo-range measured at the time of the SAR acquisitions ([PERSON] et al., 2020).
The matching is performed in a robust L1 norm to provide the needed orbital error parameters.
The subsequent step of the procedure still involves GNSS measures. Since InSAR is a differential technique both in space and in time, a constant phase term is missing in all the images. Thus, GNSS observations are used to estimate the missing constants required to adjust the interferometric phase.
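The robust L1 matching can be sketched as an iteratively reweighted least-squares (IRLS) fit of a planar orbital-error ramp to the GNSS-minus-SAR differences at the station locations. This is a minimal illustration under assumed inputs (hypothetical station coordinates `xy` and per-station ZTD residuals), not the actual TWIGA processing chain.

```python
import numpy as np

def l1_ramp_fit(xy, residual, n_iter=50, eps=1e-6):
    """Fit an orbital-error ramp a + b*x + c*y to GNSS-minus-SAR ZTD
    residuals in a robust L1 sense via iteratively reweighted least
    squares: large residuals get small weights, so outliers are ignored."""
    A = np.column_stack([np.ones(len(xy)), xy])
    p = np.linalg.lstsq(A, residual, rcond=None)[0]  # plain L2 start
    for _ in range(n_iter):
        w = 1.0 / np.maximum(np.abs(residual - A @ p), eps)
        p = np.linalg.lstsq(A * w[:, None], residual * w, rcond=None)[0]
    return p

# Toy example: a known ramp plus one gross outlier
rng = np.random.default_rng(1)
xy = rng.uniform(0, 100, size=(20, 2))
res = 0.5 + 0.01 * xy[:, 0] - 0.02 * xy[:, 1]
res[0] += 10.0  # outlier that a plain L2 fit would chase
a, b, c = l1_ramp_fit(xy, res)  # recovers ~(0.5, 0.01, -0.02)
```

The L1 norm is what makes the calibration robust: a single bad station pulls an ordinary least-squares plane far off, while the reweighting drives its influence toward zero.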
Once calibrated, the differential ZTD maps are then geocoded ([PERSON], [PERSON], 2019) and, if adjacent ZTD images exist, they are merged into a unique wide map using a mosaicking tool.
Finally, the differential ZTD maps are computed with respect to an initial map, the so-called _master_. To obtain absolute ZTD products the _master_ map has to be estimated. For this purpose, the map provided by the Generic Atmospheric Correction Online Service (GACOS) is exploited following [PERSON] et al. (2020).
### Compression/Decompression
The final aim of the procedure is to generate absolute ZTD products to be supplied to African stakeholders through cloud-based services, such as the File Transfer Protocol (FTP). The main drawback is related to the size of the GeoTIFF ZTD maps, which can reach hundreds of megabytes per map. This problem has been overcome by implementing a compression method that converts GeoTIFFs to georeferenced JPEGs.
Since JPEG is a lossy image compression technique, some issues arise in correspondence of image discontinuities, e.g. at the edges between _data_ and _no data_ or where a sharp difference in adjacent pixel values exists. Hence, before the conversion, some pre-processing is carried out to mitigate these artifacts. Firstly, a data interpolation is performed to fill regions with _no data_ values. Then, a multidimensional Gaussian filter is applied to smooth sharp differences in pixel values.
After that, the GeoTIFF pixel values are rescaled in the range 0-255, supported by JPEG format, and the image is converted to JPEG format. A _worldfile_ containing information for georeferencing is generated. The information for back-conversion and rescaling is provided in an XML auxiliary file. Moreover, a GIF mask file contains the _no data_ values original distribution.
The JPEG compressed image can be easily stored, transmitted to the cloud, and delivered to users.
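The quantization round-trip at the core of this scheme can be sketched as follows. The helper names, the metadata dictionary (standing in for the XML auxiliary file) and the boolean mask (standing in for the GIF mask file) are illustrative assumptions; the real chain also interpolates gaps and applies a Gaussian filter before JPEG encoding.

```python
import numpy as np

def compress_ztd(ztd):
    """Rescale a ZTD map (NaN = no data) to the 0-255 range supported by
    JPEG. Returns the uint8 image, the valid-data mask and the rescaling
    metadata needed for back-conversion."""
    mask = ~np.isnan(ztd)
    vmin, vmax = float(ztd[mask].min()), float(ztd[mask].max())
    filled = np.where(mask, ztd, vmin)  # crude stand-in for interpolation
    img = np.round((filled - vmin) / (vmax - vmin) * 255).astype(np.uint8)
    return img, mask, {"vmin": vmin, "vmax": vmax}

def decompress_ztd(img, mask, meta):
    """Invert the rescaling and restore the original no-data distribution."""
    ztd = img.astype(float) / 255 * (meta["vmax"] - meta["vmin"]) + meta["vmin"]
    return np.where(mask, ztd, np.nan)

ztd = np.array([[2.30, 2.35], [np.nan, 2.42]])  # toy 2x2 ZTD map in meters
img, mask, meta = compress_ztd(ztd)
back = decompress_ztd(img, mask, meta)  # matches ztd within one gray level
```

The worst-case quantization error is one 255th of the map's dynamic range, i.e. well below a millimeter for typical ZTD variability, which is why the lossy step is acceptable here.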
## 3 Results and Discussion
### Use case
The proposed procedure has been exploited to generate absolute ZTD maps for the whole of South Africa. Figure 2 shows the footprints of the Sentinel-1A frames (blue boxes) which ensure coverage of the use case area: five frames belonging to ascending relative orbit 43 were selected. Table 1 provides the main characteristics of the SAR dataset used for the study.
The temporal period of the images was chosen based on a severe storm event that hit South Africa, particularly the cities of Johannesburg and Pretoria, between 22 and 24 March 2018. For each Sentinel-1 frame, a stack of 6 images with a 12-day revisit time, spanning February-April 2018, has been considered. Of particular interest is the image of 21 March 2018, which can provide useful information for NWP models since it was acquired just a few hours before the storm event. It is worth noting that the short temporal extent of the stack is required to avoid the phase noise induced by temporal decorrelation of the distributed targets, as well as the impact of possible subsidence.
\begin{tabular}{|l|c|} \hline
**Dataset characteristics** & **South Africa use case** \\ \hline
Sensor & S1A \\
Relative orbit & 43 \\
Frames & 1082, 1087, 1092, 1097, 1102 \\
Pass & Ascending \\
Number of images & 6 \\
First image acquisition date & 2018-02-01 \\
Last image acquisition date & 2018-04-02 \\
Revisit time & 12 days \\ \hline
\end{tabular}
As highlighted in Figure 2, a sufficient number of South African GNSS network stations is available within the area of interest outlined by the SAR frames.
Table 1: Information related to the Synthetic Aperture Radar (SAR) imagery collected for the South Africa use case.
Figure 2: South Africa use case: SAR frames and GNSS stations.
### Differential and absolute ZTD maps
The SAR-derived differential ZTD maps have been computed by considering an estimation window of 400 m x 400 m, which is enough to cope with decorrelation noise.
The precise orbit correction and the phase offset compensation take advantage of the South African network GNSS observations, which have been processed to obtain ionosphere-free ZTD measurements.
Figure 3 shows a comparison between the SAR-derived and GNSS differential ZTD values computed for the time interval 25-02-2018 to 09-03-2018. The results prove the procedure's capability to obtain reliable ZTD measurements: a correlation of 0.984 is observed even before applying the precise calibration, and once this correction is performed, the correlation index increases to 0.993.
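The correlation index compared in Figure 3 is a plain Pearson correlation between station-wise differential ZTD values. A minimal sketch with made-up station samples (the numbers below are illustrative, not the paper's data; the paper uses 17 GNSS stations):

```python
import numpy as np

# Hypothetical station-wise differential ZTD samples, in metres.
sar_dztd = np.array([0.012, -0.004, 0.021, 0.008, -0.015])   # SAR-derived
gnss_dztd = np.array([0.013, -0.003, 0.019, 0.009, -0.014])  # GNSS-derived

# Pearson correlation index between the two sets of values
r = np.corrcoef(sar_dztd, gnss_dztd)[0, 1]
print(f"correlation index: {r:.3f}")
```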
Figure 4 shows an example of an absolute ZTD map obtained after the geocoding, mosaicking, and master reconstruction steps. The long strip covers an area of about 850 km x 250 km.
### Compression/Decompression method: first experiments
A single ZTD map of the South Africa use case in PackBits GeoTIFF format is 170 MB in size. After compression, the JPEG image is about 0.5 MB.
The decompression method recovers the original GeoTIFF, with some errors due to the Gaussian smoothing and the JPEG compression. Figure 5 shows the histogram of the differences between the original and the smoothed GeoTIFF, with pixel values converted to millimeters. The mean error is zero, while the standard deviation is about one millimeter, which is below the ZTD accuracy discussed in the introduction. Error peaks of about 20 mm can be attributed to the smoothing effect at discontinuities.
Figure 6 reports the histogram of the differences between the smoothed GeoTIFF and the recovered one. It is worth noting that the final map can be affected by additional minor errors introduced by the rescaling operation during compression: the rescaling implies rounding floating-point values to integers, an operation that cannot be reversed.
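The magnitude of this irreversible rounding error can be bounded analytically: with 256 grey levels, the worst case is half a quantization step. The helper below is ours, and the dynamic range used in the example is an assumed value, not one taken from the paper:

```python
def max_rounding_error_mm(vmin_m: float, vmax_m: float, levels: int = 256) -> float:
    """Worst-case error (in mm) introduced by rounding the rescaled pixel
    values to `levels` integer grey values (0-255 for 8-bit JPEG)."""
    step_m = (vmax_m - vmin_m) / (levels - 1)
    return step_m / 2.0 * 1000.0

# e.g. a map whose smoothed ZTD spans 2.10-2.45 m (assumed range):
print(round(max_rounding_error_mm(2.10, 2.45), 2))  # sub-millimetre
```

For typical ZTD dynamic ranges of a few decimetres, the bound stays well below the one-millimetre standard deviation quoted above.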
Figure 4: InSAR absolute ZTD map (09 March 2018 – 21 March 2018) obtained by merging five frames.
Figure 5: Histogram of the differences between the original GeoTIFF and the smoothed one obtained during compression.
Figure 3: Correlation between SAR and differential ZTD reconstructed in 17 GNSS stations before and after residual orbit compensation.
## 4 Conclusions
The presented work proposes a novel procedure for the generation of ZTD products from the synergic use of SAR and GNSS data. The procedure has been implemented in the framework of the TWIGA project, with the aim of providing African stakeholders with a fast, automatic, and easy-to-replicate method to retrieve useful geo-information, currently unavailable, for weather forecasting and better management of water resources.
ZTD products have been derived for the use case of South Africa by taking advantage of the ubiquitous and freely available time series of Sentinel-1 SAR data. A compression method to drastically reduce ZTD file size has been developed to allow data delivery and storage. Further experiments need to be performed to evaluate the effects of compression on product quality.
From the computational side, a ZTD product covering an area of 850 km x 250 km was generated in about half an hour, whereas the StaMPS software, which works only with PS, obtains water-vapor maps in several hours.
As a final remark, the procedure has been implemented by making maximum use of free and open-source tools, mainly the ESA SNAP toolbox and several Python libraries (numpy, scipy, rasterio, Python-gdal).
**Funding:** TWIGA has received funding from the European Union's Horizon 2020 research and innovation program under grant agreement No 776691
## References
* [PERSON] and [PERSON] (2006) [PERSON] [PERSON], [PERSON] [PERSON], 2006. TOPSAR: Terrain Observation by Progressive Scans. _IEEE Transactions on Geoscience and Remote Sensing_, 44 (9), 2352-2360.
* [PERSON] et al. (2007) [PERSON] [PERSON], [PERSON] [PERSON], [PERSON] [PERSON], [PERSON], [PERSON], [PERSON] [PERSON], 2007. InSAR Principles: Guidelines for SAR Interferometry Processing and Interpretation. TM-19. ESA Publications, The Netherlands.
* [PERSON] and [PERSON] (1998) [PERSON], [PERSON], [PERSON], 1998: _Two-dimensional phase unwrapping: Theory, algorithms, and software_. New York: Wiley.
* [PERSON] and [PERSON] (2008) [PERSON], [PERSON] [PERSON], 2008. On the Exploitation of Target Statistics for SAR Interferometry Applications. _IEEE Trans. Geosci. Remote Sens._, 46, 3436-3443.
* [PERSON] et al. (2016) [PERSON] [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON] [PERSON], [PERSON] [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON] [PERSON], [PERSON], [PERSON], [PERSON], [PERSON] [PERSON], 2016. Review of the state of the art and future prospects of the ground-based GNSS meteorology in Europe. _Atmos. Meas. Tech._, 9, 5385-5406.
* [PERSON] et al. (1993) [PERSON], [PERSON], [PERSON], [PERSON], 1993. Assimilation of precipitable water measurements into a mesoscale numerical model. _Mon. Wea. Rev._, 121, 1215-1238.
* [PERSON] et al. (2019) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], 2019. A Synergistic Use of a High-Resolution Numerical Weather Prediction Model and High-Resolution Earth Observation Products to Improve Precipitation Forecast. _Remote Sens._, 11 (20), 2387.
* [PERSON] et al. (2020) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], 2020. Joint Exploitation of SAR and GNSS for Atmospheric Phase Screens Retrieval Aimed at Numerical Weather Prediction Model Ingestion. _Remote Sens._, 12, 654.
* [PERSON] et al. (2007) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], 2007. Influence of GPS Precipitable Water Vapor Retrievals on Quantitative Precipitation Forecasting in Southern California. _Journal of Applied Meteorology and Climatology_, 46(11), 1828-1839.
* [PERSON] et al. (2016) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], 2016. Three-Dimensional Variational Assimilation of InSAR PWV Using the WRFDA Model. _IEEE Trans. Geosci. Remote Sens._, 54, 7323-7330.
* [PERSON] et al. (2018) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], 2018. Assimilating InSAR maps of water vapor to improve heavy rainfall forecasts: A case study with two successive storms. _Journal of Geophysical Research: Atmospheres_, 123(7), 3341-3355.
* [PERSON] et al. (2020) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], J., 2020. Continuous Multitrack Assimilation of Sentinel-1 Precipitable Water Vapor Maps for Numerical Weather Prediction: How Far Can We Go With Current InSAR Data? _Journal of Geophysical Research: Atmospheres_, 126(3), e2020JD034171.
* [PERSON] et al. (2020) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], et al., 2020. On the Definition of the Strategy to Obtain Absolute InSAR Zenith Total Delay Maps for Meteorological Applications. _Frontiers in Earth Science_, 8, 359.
* [PERSON] et al. (2019) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], 2019. InSAR meteorology: High-resolution geodetic data can increase atmospheric predictability. _Geophysical Research Letters_, 46(5), 2949-2955.
* [PERSON] et al. (2004) [PERSON], [PERSON], [PERSON], [PERSON], 2004. Data assimilation of GPS precipitable water vapor into the JMA mesoscale numerical weather prediction model and its impact on rainfall forecasts. _J Meteorol. Soc. Jpn. Ser._, 82(1B), 441-452.
* [PERSON] et al. (2015) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], et al. 2015. InSAR water vapor data assimilation into mesoscale model MM5: Technique and pilot study. _IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing_, 8(8), 3859- 3875.
Figure 6: Histogram of the differences between the smoothed GeoTIFF and the recovered one.
* [PERSON] et al. (2006) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], 2006. Geometrical SAR image registration. _IEEE Trans. Geosci. Remote Sens._, 44, 2861-2870.
* [PERSON] and [PERSON] (2019) [PERSON], [PERSON], 2019. Guide to Sentinel-1 Geocoding. University of Zurich, Report UZH-S1-GC-AD.
* [PERSON] (2007) [PERSON], 2007. _Parameterization schemes: Keys to understanding, numerical weather prediction models_. Cambridge, UK: Cambridge University Press.
* [PERSON] et al. (2016) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], 2016. High-spatial-resolution mapping of precipitable water vapour using SAR interferograms, GPS observations and ERA-Interim reanalysis. _Atmos. Meas. Tech._, 9, 4487-4501.
M. E. Molinari, M. Manzoni, N. Petrushevsky, A. M. Guarnieri, G. Venuti, A. N. Meroni, A. Mascitelli, A. Parodi, 2021. A Novel Procedure for Generation of SAR-Derived ZTD Maps for Weather Prediction: Application to South Africa Use Case. ISPRS Archives. https://doi.org/10.5194/isprs-archives-xliii-b3-2021-405-2021 (CC-BY).
|
**THE ROLE OF PHOTOGRAMMETRY AND REMOTE SENSING IN DETERMINING THE FOREST BOUNDARIES AND UNAUTHORIZED BUILDINGS IN TURKEY**
**(A SAMPLE AREA: BEYKOZ (ISTANBUL))**
_Dr. [PERSON]\({}^{1}\), Assoc. Prof. Dr. [PERSON]\({}^{2}\)_
\({}^{1}\)Beykoz Municipality, Istanbul-Turkey; [EMAIL_ADDRESS]
\({}^{2}\)Kocaeli University, Karamursel MYO, Izmit-Turkey; [EMAIL_ADDRESS]
**Session: PS ThS 19**
**KEY WORDS:** Forests, Urbanization, Squatter and Unauthorized Buildings, Cadastre and Registered Record, Forestry Cadastre, Photogrammetry, Remote Sensing
## Abstract
Forests cover 26% (201,992.96 km\({}^{2}\)) of Turkey's land area (769,604 km\({}^{2}\)). It has been possible to determine the boundaries of 4/5 of the forests of Turkey in 66 years through the forest cadastre, which was introduced in 1937, but only 1/4 of the demarcated area could be registered into the land registry. Forests are occupied by unauthorized buildings and squatters as a result of creeping operations. Between the years 1937 and 2003, an area of 4734.19 km\({}^{2}\) was reassigned as non-forest area because it lost its forest characteristics. The advantages of remote sensing and photogrammetry technologies have not been sufficiently exploited in forestry cadastral works.
In this paper, the size of forest plunder is analyzed by associating it with the evolution of urbanization over time. In this context, the matters concerning forestry demarcation, the problems in the forestry cadastre, and the activities causing unauthorized housing and illegal forestry usage are discussed. Furthermore, the role of remote sensing and photogrammetry in determining forest boundaries in Turkey, both on land and on forestry cadastral maps, is examined.
The forests located in the north of Istanbul, which is a very large city, serve as the lungs of the city. Especially after the recent earthquake experienced in the Marmara Region, the pressure on forests has increased because of the risk posed by unstable ground elsewhere in the city. In this study, Beykoz, one of the 32 townships of Istanbul Province, located in the north of the city with an area of 313 km\({}^{2}\), 80% of which is covered by forests, has been chosen as a sample area, and the condition of the forests in this region has been analyzed.
## 1 Introduction
In Turkey, the \"forestry cadastre\" works and \"ownership cadastre\" works are carried out by different institutions and by using different technical (map production) standards.
The forestry cadastre, defined as "demarcation of forests and their registration into the land registry in the name of the 'state' as public property", is carried out in Turkey by forestry cadastral committees formed by five members appointed by the Ministry of Environment and Forestry. These committees, functioning in subordination to the General Directorate of Forestry, perform their works in accordance with the Forestry Law No. 6831 dated 1956 and the Implementation Regulation dated 11 April 1990.
The reason why the forestry cadastre is carried out separately from the ownership cadastre, which is conducted pursuant to the Cadastre Law No. 3402 dated 1987 in line with the principles set forth in the Turkish Civil Code, is explained on the grounds that "determining whether or not an area is qualified as forest land requires special expertise" (Decision of the General Board of Law of the Supreme Court of Appeals, dated 30 September 1981). On the grounds of this opinion, the General Directorate of Land Registration and Cadastre, responsible for carrying out the ownership cadastre, as well as the map engineering services, have been excluded from the forestry cadastral works, and all works have been realized under the guidance of forest engineers. Consequently, the forestry cadastral works have not been successful.
In this paper, the reasons of not being successful in demarcation of forests and their invasion by unauthorized buildings have been studied by aerial photographs and photogrammetric data belonging to the township of Beykoz located within Istanbul Province.
## 2 Surveying techniques used in Turkey in forestry demarcation
In the forestry cadastral works in Turkey, priority has been given to using ground surveying methods (Regulation 2/B, Article 48). Although the Forestry Law No. 6831 stipulates that the aerial photographs required in forestry demarcation should be taken, or caused to be taken, by the Ministry of Environment and Forestry (Article 9), the relevant Implementation Regulation states that the photogrammetric method may be employed provided the forest boundary points or connection points are marked with "aerial signals" (Regulation 2/B, Article 48).
According to the Forestry Law and relevant Implementation Regulation, the techniques of forestry demarcation are as follows:
1. A forest boundary number shall be assigned to each point where the forest boundary line is interrupted. At the forest boundaries, a concrete or similar stationary marker is placed every 500 meters to establish the forest boundary points. In settlement areas or their surroundings, these points shall be established every 250 meters (Regulation 2/B, Article 48),
2. In areas provided with a map of 1/5000 or greater scale, if the forest boundaries have been shown on the map, such boundaries shall be exactly complied with, without making any further surveying (Regulation 2/B, Article 48),
3. Where the forest boundaries coincide with the boundaries of a sea, lake, road or the like, no boundary point shall be established. (Regulation 2/B, Article 50),
4. The forest boundaries are measured by the series polygon polar method using instruments sensitive to 1\({}^{\circ}\), and their coordinates are calculated with reference to the point of connection (Regulation 2/B, Article 51),
5. The forestry cadastral maps are prepared at a scale of 1/5000, and this scale is taken as a basis in the mapping standards used (Regulation 2/B, Article 52), and
6. The maps showing the forest boundaries are reduced to the scale of 1/25000 to obtain the township forestry maps, and all subsequent implementations shall be monitored on the basis of such maps (Regulation 2/B, Article 53).
If the forestry cadastral works were carried out according to the Regulation dated 31 August 1988 Concerning the Production of Large-scale Maps used in the ownership cadastral works, it would be required that;
1. The difference between the two measurements of edges made using electronic distance measuring instrument should not exceed \(\pm\)5 cm (Article 78),
2. The angle measurements should be made by instruments capable of direct measuring of at least 2\({}^{\circ\circ}\) (Article 52),
3. The root-mean-square error (RMSE) of angle measurements should not exceed \(\pm\) 5 cm (Article 55), and
4. In works carried out using the photogrammetric method, the measurements be made by analytical plotting instruments having an analytical measurement accuracy of less than \(\pm\) 3 micrometers (Article 186).
It is, therefore, seen that the methods of measurement and technical (mapping) standards employed in forestry cadastre require lower values as compared to ownership cadastre.
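For illustration, the edge-measurement tolerance in item 1 of the ownership-cadastre requirements above reduces to a simple check. The function name and the sample distances are ours, not taken from the regulation:

```python
def edge_within_tolerance(d1_m: float, d2_m: float, tol_m: float = 0.05) -> bool:
    """Check whether two repeated electronic distance measurements of the
    same edge agree within the +/- 5 cm tolerance (Article 78).

    d1_m, d2_m: the two measurements of the edge, in metres.
    """
    return abs(d1_m - d2_m) <= tol_m

print(edge_within_tolerance(412.734, 412.771))  # 3.7 cm difference
print(edge_within_tolerance(412.734, 412.801))  # 6.7 cm difference
```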
Although the forestry cadastral works in Turkey first started in 1937 with the enactment of Law No. 3116, 4/5 of the forests was demarcated during the past 66 years (1937-2003), but only 1/4 of the total demarcated forest land could be registered in the land registry (SINMAZ and KARTATS 1995; KOKTURK 1999; DPT (State Planning Organization) 1995; DPT 2001; Table 1).
The forests which cover 26% of Turkey's total land area (201,992.96 km\({}^{2}\)) have faced invasions and illegal usage despite the protection provided by the provisions of the Turkish Constitution and the Forestry Law. As a result, about 2% (4,374.19 km\({}^{2}\)) of the forests has been excluded from the forest land. The main reason why the forestry demarcation works have been unsuccessful is that the topic of forests has frequently been used as a matter of political concern, together with the changes experienced in the forestry laws. Another reason was the use of inadequate measurement techniques in the production of forestry maps. It has been observed "that the information belonging to the presumably demarcated forests was obtained using different surveying methods and different coordinate systems, that such information was not sufficient to effect a re-demarcation of forests in the field, and also that it was not possible to combine all that information under a single coordinate system". It has been further recognized that such information and documents were not qualified to enable the registration of forests into the land registry. It can also be said that the carrying out of the works related to the "demarcation" of forests and "**determination of their boundaries on the field (limitation)**", as well as "**the measurement of forest boundaries determined in the field**", "**computation**", "**plotting**" and "**staking out**", by forest engineers from 1937 up to now has played a role in these results. This process has been brought to an end by the amendment made in the Forestry Law by Law No. 4999 dated 05 November 2003 (Article 10), with which it was decided that "with regard to forests whose cadastral works have been completed, the surveying engineers will be authorized and responsible for carrying out the works related to the production of maps, measurement, computation, plotting and staking out".
Important developments are anticipated in determining the boundaries of forests and the unauthorized buildings as a result of taking such a decision after 66 years that the
\begin{table}
\begin{tabular}{|c|c|c|} \hline
**Periods** & **Total Area in Turkey Requiring Forestry Cadastre (km\({}^{2}\))** & **Forests with Completed Cadastre (km\({}^{2}\))** \\ \hline
1937-1984 & 201992.96 & 92385.09 \\ \hline
1985-1989 & \({}^{*}\) & 13890.40 \\ \hline
1990-1994 & \({}^{*}\) & 23000.00 \\ \hline
1995-2000 & \({}^{*}\) & 21094.80 \\ \hline
2001-2002 & \({}^{*}\) & 12314.01 \\ \hline
**1937-2002 (01.01.2003)** & \({}^{**}\) & **162 684.30 (80.5\%)** \\ \hline
\end{tabular}
\end{table}
Table 1: Forestry Cadastre in Turkey

forestry maps will be produced by surveying engineers. We may say that, by also making use of photogrammetric and remote sensing techniques, successful results will be achieved in ensuring the security of forests.
## 3 Documents taken as a basis in determining the boundaries of forests in Turkey
### Procedure of Demarcation and Documents used
In areas where forestry demarcation has not been realized, the forestry characteristics and legal status of a particular piece of land must be decided in accordance with the Laws No. 3116, No. 4785, and No. 5658. The Law No. 3116 dated 1937 defined only the forests belonging to the State. All forests were then nationalized by Law No. 4785 dated 1945, but a subsequent law, namely Law No. 5658 dated 1950, returned some of the nationalized forests to their former owners. The Law No. 3116 was the first law stipulating the realization of cadastral works for state forests, while a number of laws introduced later, namely the Laws No. 4785, No. 6831 dated 1956, No. 2896 dated 1983 and No. 3302 dated 1986, enabled the remaking of the forestry cadastre in a study area ([PERSON] and [PERSON], 1995: 50).
With every change of law, a redetermination of the areas qualified as forest land or not was carried out. The lack of maps and aerial photographs at the time of these works hindered the cadastral activities. The forestry cadastral studies were traced back to the year 1937, trying to determine whether a particular area was qualified as forest land or not starting from that date, and this practice had an unfavorable effect on the demarcation works.
Before starting to carry out the forestry cadastre of a particular area, the necessary information and documents are obtained from the following organizations and authorities:
1. State Forestry Enterprise Directorships,
2. Cadastre and Land Registry Directorships, and,
3. Highest Civil Authorities in Provinces and Townships.
After obtaining the necessary information and documents, the documents furnished by relevant persons for areas within the forests or in the neighborhood are examined. The examination of these documents furnished by individuals for determination of ownership is another factor causing a delay in realization of forestry cadastral works. The maps of forests whose cadastral works have been completed are then signed by the Chairman of the Committee, and the protocols prepared are announced to the public for a period of 6 months. An application may be made to the appointed Court within 6 months from the date of said announcement, to raise an objection to the protocols and decisions taken with regard to demarcation. Said period is defined as the \"period of forfeiture\". The cases of objection are brought against the Ministry of Environment and Forestry and the General Directorate of Forestry. Following the completion of cadastral works, the files relevant to these works are forwarded to the General Directorate of Forestry for correction of any \"**formal and legal shortcomings**\", and after such corrections are made, the decisions taken are put into implementation with the approval of relevant Governor. The forests belonging to the State, whose ownership has become final by completion of their cadastral works, are then registered into the land registry in the name of Treasury, by the Cadastre and Land Registry Directorships without collection of any fees, taxes or duties.
In the Forestry Law, the Implementation Regulation and judicial decisions, the "**base maps**", "**aerial photographs**" and "**amenagement plans**" are considered as final evidence in disputes related to forests.
#### 3.1.1 Base Maps
The term defined in the Forestry Law as base maps with a scale of 1/25 000 and larger means "the maps having a scale of 1/25 000 and 1/5 000", because when we examine the mapping services in our country with respect to scales and purposes, we observe that two series of maps are produced based on our national geodetic network. These are **1)** maps with a scale of 1/25 000 and **2)** maps with a scale of 1/5 000.
* **Maps with a scale of 1/25 000:** Upon declaration of Republic of Turkey (29 October 1923), these maps were started to be produced by the General Command of Mapping, with the purpose of first the defense of the country, and were completed with the production of 5547 maps covering the whole Turkey.
* **Maps with a Scale of 1/5 000:** These maps are produced by the General Command of Mapping and the General Directorate of Land Registry and Cadastre using the photogrammetric method, to cover a large area of 500 thousand square kilometers.
#### 3.1.2 Aerial Photographs
The term defined as \"aerial photograph\" in the Forestry Law and judicial decisions means \"the photographs used in the production of maps with a scale of 1/25 000 and 1/5 000\".
#### 3.1.3 'Amenagement' Maps
The term 'amenagement' is specific to the forestry sector. The process of planning the operations of forestry enterprises is called the "amenagement plan" in the language of forestry ([PERSON] and [PERSON], 1995: 83). The use of an 'amenagement' plan will ensure the proper organization of forestry enterprises so that they operate according to a plan.
### Evaluation of Documents used in Determining the Forest Boundaries
In disputes related to forests in Turkey, an 'amenagement' plan alone is not sufficient to indicate whether an immovable is qualified as a forest or not; in addition, an old dated base map or aerial photographs must also be used. The term "base map", which is considered final evidence in the Forestry Law and the relevant Regulation as well as in judicial decisions, is not used correctly. The judicial decisions stating that the forest boundaries are determined "by application of a base map with a scale of 1/25 000" are incorrect from a technical point of view. The marking of forest boundaries on a map of 1/25 000 scale is contrary to the cadastral technical standards. Such practices are not in agreement with the technical standards stipulated in the "Regulation Concerning the Production of Large-scale Maps, dated 30 January 1988", taken as the basis in ownership cadastral works. Similarly, evaluations such as "in the event of a doubt regarding the prior qualification of an immovable in forestry disputes, the aerial photographs of that region should be used, and aerial photographs should be considered among the indispensable evidences in forestry cases" are also wrong from a technical point of view. In fact, it would be better to adopt the term "orthophoto map of 1/5 000 scale" instead of the term aerial photograph, and the term "standard topographic map of 1/5 000 scale with cadastral overlay" instead of the term base map.
The use of inadequate surveying techniques and documents in forestry demarcation activities prevents the registration of forests into the land registry on the basis of the information so obtained. Such information and documents, insufficient for registration into the land registry, create important problems.
The forestry demarcation works carried out by the forestry cadastral committees which could not be registered into the land registry should be provided with an opportunity for implementation. With the amendment made in the Forestry Law No. 6831 by Law No. 4999 dated 05 November 2003, if any "**technical errors**", such as errors of area due to reasons other than changes in qualification and ownership, arising from staking out, measurement, plotting and computations, are observed in areas whose forestry "**limitation**" or "**cadastre**" has been completed and become final by announcement, such errors are allowed to be corrected by the **forestry cadastral committees**, under the knowledge and supervision of the General Directorate of Forestry. This change, realized in the Forestry Law after 66 years, offers an important opportunity with regard to imparting registration capability to those demarcation documents which could not be registered into the land registry until now.
## 4 A Sample Area: Beykoz (Istanbul)
With regard to preserving and ensuring the security of forests and other vegetative cover in our country, the province of Istanbul is considered the first area requiring the necessary measures in this respect. There are many reasons for this; the rapid population increase and unhealthy urbanization are the most important ones. Istanbul is Turkey's most populated province.
About 38% (2,164 km\({}^{2}\)) of Istanbul's total area is covered with forests; 46% (1,004 km\({}^{2}\)) of the forest areas is located on the Anatolian side and 54% (1,160 km\({}^{2}\)) on the European side. The forest areas on both sides are most dense in the north of the city. The population of Istanbul has experienced a big increase since the 1950s and, accompanied by internal migration, it has been threatening the forests. While this pressure on the forests has been continuing, the problems of housing and settlement arising in parallel with this development have remained largely unsolved. As a result, some 8% (183.3 km\({}^{2}\)) of Istanbul's forests have been occupied by squatter houses and unauthorized buildings, and forced to be excluded from the forest area. The failure of efforts to stop this undesirable process has caused the forests, natural vegetative cover and natural environment to face the risk of extinction. The failure to demarcate Istanbul's forests and register them into the land registry has further aggravated the existing problems.
About 1/5 (36 km\({}^{2}\)) of the areas excluded from the forest area in Istanbul is located in Beykoz township, known as one of the lungs of the city. The destruction of forests in Istanbul is concentrated in the northern part, where the township of Beykoz is located. The building of the second Bosphorus Bridge (Fatih Sultan Mehmet Bridge), and also the announcement of Beykoz as a safe area with regard to soil structure after the occurrence of the Marmara Earthquake, have acted as accelerating factors in this process.
The failures experienced in forestry demarcation works in Turkey are observed to have adverse effects on Istanbul, too. As reported by the Istanbul Chamber of Commerce (ITO 2001), there are about 1 million buildings in Istanbul. Only about 7% of these buildings were constructed by obtaining a construction permit and according to building projects approved by the relevant local authorities; it follows that 93% of the buildings in Istanbul are illegal. Special legal regulations have been introduced on both sides of the Istanbul Strait with regard to development and planning, yet despite the special provisions of the Bosphorus Law No. 2960 dated 1983, the construction of squatter houses and illegal buildings could not be prevented. Due to the failure experienced in urban planning, rapid population increase and the failure to solve housing problems have caused urbanization to get out of control. On account of the rapid population increase experienced in Istanbul and migration from rural areas to the city, the forests and agricultural production areas have been under heavy pressure.
Both the development and the implementation of urbanization plans in Istanbul have therefore failed. Legal, technical and administrative measures need to be taken in order to stop the process leading to the destruction of forests, natural vegetative cover and green fields.
The foremost technical measure is to complete Istanbul's ownership and forestry cadastral works without further loss of time, and to determine the land use boundaries. To achieve this, GPS, photogrammetry and remote sensing technologies must be put into service. Once these so-far unused technologies are employed, controlling the management of land would be greatly facilitated: squatter houses and illegal buildings could be detected and monitored through periodic observations. When photogrammetric and remote sensing data are used as a base in the analysis, development and planning stages, it will be possible to solve the existing problems within a relatively short time.
It must be kept in mind that the term "cadastre" refers to an "integrated recording system" for immovables. Splitting the cadastre into parts and distributing powers and authorities on the basis of those parts, without establishing the required relationships between them, will yield nothing but failure in realizing the integrated benefit expected from a cadastre.
The prerequisite for preserving and developing forest wealth at global, countrywide and city scale is to safeguard forest boundaries. Such safeguarding must be ensured by proper markings placed on the land and by the provision of the necessary documents. Safeguarding is sustainable only if these documents are produced according to the requirements of cadastral technique.
It is at this point that mapping, the profession of determining place-related data, and its methods enter the picture. Cadastral works must be carried out with contemporary technologies in order to demarcate the forests and to preserve and develop them in both quality and quantity. For this purpose, all available advanced technologies (total station, GPS, photogrammetry, remote sensing) should be employed, singly or in combination, depending on specific local conditions.
We cannot be content with merely marking forest boundaries and surveying them with advanced technologies. It is now considered imperative to direct these efforts towards information systems, and geographical information systems in particular, built on such data. The forestry information systems to be set up for forest wealth should be established through interdisciplinary studies. As with all our natural wealth, our forest wealth can only be preserved and developed through sound cooperation between the disciplines concerned and other relevant disciplines.
The development of this awareness must be achieved on a global scale.
## References
* DIE, 2003, **2000 Genel Nufus Sayimi, Nufusun Sosyal ve Ekonomik Nitelikleri** (General Census 2000, the Social and Economic Characteristics of the Population), Republic of Turkey, Prime Ministry, State Statistical Institute (DIE), Publ. No. 2759, Ankara, 305 pp.
* DPT, 1995, **VII. Bes Yillik Kalkinma Plani, Harita, Tapu ve Kadastro Sektoru Ozel Ihtisas Komisyonu (OIK) Raporu** (Report of the Special Expertise Committee (OIK) on the Mapping, Land Registry and Cadastre Sector, Seventh Five-Year Development Plan), State Planning Organization (DPT) Publ. No. 2417, OIK: 476, Ankara, 101 pp.
* DPT, 2001, **VIII. Bes Yillik Kalkinma Plani, Harita, Tapu ve Kadastro, Cografi Bilgi ve Uzaktan Algilama**
*Source: Olena Dubovyk, Richard Sliuzas, Johannes Flacke, 2011. Spatio-temporal modelling of informal settlement development in Sancaktepe district, Istanbul, Turkey. ISPRS. https://doi.org/10.1016/j.isprsjprs.2010.10.002 (CC-BY).*
# Aerial Surveying UAV Based on Open-Source Hardware and Software
[PERSON]
Dept. of Cartography and Geoinformatics, Eotvos Lorand University, Pazmany Peter setany 1/a, Budapest, H-1117 [EMAIL_ADDRESS]
###### Abstract
In recent years the functionality and variety of UAV systems have increased rapidly, but unfortunately these systems are in some cases hardly available to researchers. A simple and low-cost solution was developed to build an autonomous aerial surveying airplane, which can fulfil the needs (very-high-resolution aerial photographs) of other departments at the university and is very useful and practical for teaching photogrammetry.
The base was a commercial, remote-controlled model airplane, and an open-source GPS/IMU system (MatrixPilot) was adapted to achieve semi-automatic or automatic stabilization and navigation of the model airplane along a predefined trajectory. The firmware is completely open source and easily available on the website of the project.
The first camera used was a low-budget, low-quality video camera, which could provide only 1.2-megapixel photographs or low-resolution video, depending on the light conditions and the desired spatial resolution.
A field measurement test was carried out with the described system: the aerial surveying of an undiscovered archaeological site, indicated by a crop-mark, in the Pilis mountains (Hungary).
UAVs, Photogrammetry, Robotics, IMU, Mapping
## 1 Introduction
The use of UAVs (Unmanned Aerial Vehicles) in geomatics and photogrammetry has increased rapidly in the last few years, as has the development of mathematical algorithms and sensors to achieve ever more precise navigation and stabilization of UAVs. There are examples of high-end systems, e.g. weControl in Switzerland or MAVinci in Germany, and very successful research projects ([PERSON], 2004 and [PERSON] et al., 2010), but these systems are in some cases hardly available to researchers. Just as open-source software is slowly gaining ground in the field of GIS, a similar tendency can be seen in the development of UAV control systems. The goal of this paper is to give a brief overview of the available open-source control systems, to describe the building of a UAV based on one of these systems, and to present the first results of a field test carried out with this low-cost system.
### Open-source control systems
The common properties of these systems are freely distributed and modifiable software and hardware and a self-established community around them. The development process can be really fast this way, because in some cases (e.g. bad weather preventing flights) a new feature cannot be tested by its programmer, but other people can install the feature, test it in real conditions and give feedback to the programmer.
The first and oldest system is Paparazzi, originally developed at ENAC University in France. It is completely open source, because the firmware and all other programs, such as the ground control station, are distributed on Ubuntu Linux (and the current version on Mac OS X, too). The main advantages of Paparazzi are the small circuit board, which integrates the microcontroller, the GPS and the input/output ports, and the simple, easy-to-understand ground control station with built-in flight commands such as Take-off, Land, Eight Figure and Survey Path. The main disadvantage of the older electronic boards was the lack of an accurate Inertial Measurement Unit: stabilization of the airplane was performed by thermal sensors (thermopiles). These sensors perceive the position of the horizon based on the different thermal radiation of the terrain and the sky. Their accuracy depends strongly on the environment, because the relief (e.g. hills) and obstacles on the horizon (e.g. the skyline of populated areas) can introduce errors into the measurement. The newer versions of the main boards have connectors for independent IMU units and make stabilization possible without thermopiles, but it can be difficult to fit additional electrical components into a small or medium-sized model airplane without interference. The price of the hardware is another disadvantage: Paparazzi is the most expensive of the open-source systems discussed here. The project has a very detailed website with all the necessary installation and configuration information ([PERSON], 2011).
The other, younger systems are developed in the USA. The first is the ArduPilot family, consisting of the older ArduPilot and the newer ArduPilotMega. ArduPilot was based on the Arduino-compatible ATmega328 processor and supported fewer input/output ports and fewer features. The current version is ArduPilotMega, with the ATmega2560 processor, extended memory and more ports (I2C, more servo ports). Firmware was written for both boards, but the ArduPilotMega firmware has more features, such as in-flight tuning of settings and real-time waypoint modification via two-way telemetry ([PERSON], 2011). To compute the orientation of the aircraft it uses an independent IMU unit (ArduIMU), and different types of GPS (uBlox GS407 or MediaTek MT3329 units) for navigation. With telemetry units (XBee modules), it can send data almost in real time to a GCS (Ground Control Station) running on a notebook. The current firmware supports airplanes, traditional helicopters and multirotor platforms. Early versions of ArduPilot used the same sensors (thermopiles) for stabilization as Paparazzi, but later it became possible to connect the main board to an IMU unit, and the current version of the system uses an independent IMU for stabilization. The main board and this unit compose a side-by-side electronic assembly. The disadvantage of this construction is its larger size; in some cases it can be installed only on the outside of the airplane rather than inside, which increases the chance of destroying the electronic components during emergency landings. Detailed descriptions and manuals can be found on the official website of the project ([PERSON], 2011).
The other project is slightly different, because it uses a PIC microcontroller and a different orientation algorithm than the previous ones. The system is built around the UAV Development Board, a small circuit board with the controller, gyroscopes and accelerometers ([PERSON], 2011). A detailed description is given in a later chapter of this paper.
## 2 The UAV Project
At the beginning of the project, the main goal was a low-cost platform that can capture aerial images with onboard sensors and demonstrate the capabilities of an open-source firmware. Another goal was to fulfil the needs of other research at the university that relies on high-resolution aerial photographs. For these purposes, the development of the UAV system described here began half a year ago.
### The platform
As the author was not an experienced model pilot, the selected platform had to be an easily controllable and repairable fixed-wing airplane with a high payload capacity. A commercial model airplane (Easy Star), developed and manufactured by a German firm (Multiplex GmbH), fulfilled these requirements, and several successful examples can be found where this model airplane was the platform used ([PERSON] et al., 2010 and DIYDrones, 2011). Another advantage of the fixed-wing solution, compared to helicopters or multirotor platforms, is the longer endurance time above the target area ([PERSON], 2004 and [PERSON] et al., 2007).
The platform has a traditional configuration: a fixed-wing glider airplane with rear-mounted horizontal and vertical stabilizers, driven by an electric pusher motor placed on top of the fuselage to avoid breaking the propeller in a crash landing. Due to this configuration it is more resistant to weather conditions, such as wind and gusts, than other platforms. After some test flights without payload, the original wings were replaced by the bigger wings of another model airplane (Easy Glider, manufactured by the same company) to increase the wing area (Figure 1). The original wing area was 24 dm\({}^{2}\), and with this improvement the wing area, as well as the payload capacity and gliding capability, was nearly doubled (41.6 dm\({}^{2}\) according to the manuals printed by Multiplex GmbH).
### Navigation and stabilization system
The navigation and stabilization system used is the result of an open-source project that created both the hardware and the firmware. The airplane is equipped with the UAV Development Board (UDB) hardware (Figure 3), which consists of a dsPIC30F4011 controller, an MMA7260 three-axis accelerometer and two dual-axis IXZ500 gyroscopes, so it can act as a three-axis IMU (MatrixPilot, 2011). With its input/output ports it can act as a 'bridge' between the RC receiver and the servos, and it provides connections to other devices, such as GPS and XBee telemetry units or a small onboard data logger (OpenLog). Its compact design is its main advantage: this board is the smallest of the hardware discussed, it contains all the electronic components necessary to function as an inertial measurement and controller unit, and it is also the cheapest. The board was designed by [PERSON] (MatrixPilot, 2011). Its physical dimensions are 70\(\times\)38\(\times\)25 mm and its weight is only 16 g, so it is very easy to build into model airplanes.
\begin{table}
\begin{tabular}{|c|c|} \hline
**SPECIFICATIONS** & **Easy Star XXL** \\ \hline \hline Wing span & 1.8 m \\ \hline Wing surface & 0.416 m\({}^{2}\) \\ \hline Length & 0.917 m \\ \hline Weight (without cameras) & 0.9 kg \\ \hline Payload capacity & max. 0.25-0.30 kg \\ \hline Motor model & RAY B2835/16 \\ \hline Max. power & 220 W \\ \hline Propeller & APC pusher 6\(\times\)4 in \\ \hline Stall speed & \(\sim\)5 m/s (\(\sim\)18 km/h) \\ \hline Max. range & 1.5 km \\ \hline Max. endurance time & 45 min \\ \hline
\end{table}
Table 2: Technical specifications of the airplane
Figure 1: The ‘Easy Star XXL’ model airplane
The firmware (MatrixPilot) is programmed by [PERSON] and the other members of the MatrixPilot Team. It can be downloaded from the website of the project ([http://code.google.com/p/gentlenav/](http://code.google.com/p/gentlenav/)) under the terms of the GNU General Public License. The current version, which runs on most UDBs, is MatrixPilot v3.1. The basic function of the code is the stabilization and automatic navigation (controllable by the operator) of the airplane along a predefined trajectory (up to 400 waypoints). Additional features of the firmware: it supports different types of airframes, such as traditional and V-tail airplanes or delta wings (helicopter and multirotor code is still under development). It can stabilize a camera if servos are installed on the camera pad, and with an additional servo it is also possible to trigger the camera at a predefined coordinate or continuously between two waypoints along the flight route. The main disadvantage of the code is the handling of the waypoint coordinates: they have to be written as geographic coordinates on the WGS84 ellipsoid, and if the trajectory changes, the entire code has to be re-uploaded into the memory of the microcontroller, not only the waypoints. (MatrixPilot, 2011)
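Since the firmware expects waypoints as geographic coordinates on the WGS84 ellipsoid, a survey pattern planned in metres around a reference point must be converted to latitude/longitude before upload. The sketch below is a hypothetical pre-processing step, not part of MatrixPilot itself; it uses a flat-earth approximation (adequate for sub-kilometre trajectories), and the reference coordinates are illustrative, not the actual site coordinates.

```python
import math

EARTH_RADIUS_M = 6_378_137.0  # WGS84 semi-major axis

def offset_to_latlon(lat0, lon0, north_m, east_m):
    """Convert a local (north, east) offset in metres to lat/lon degrees."""
    dlat = math.degrees(north_m / EARTH_RADIUS_M)
    dlon = math.degrees(east_m / (EARTH_RADIUS_M * math.cos(math.radians(lat0))))
    return lat0 + dlat, lon0 + dlon

# Four waypoints on the approximate corners of a 200 m x 200 m area,
# centred on an illustrative (assumed) reference point.
lat0, lon0 = 47.68, 18.82
corners = [(-100, -100), (-100, 100), (100, 100), (100, -100)]
waypoints = [offset_to_latlon(lat0, lon0, n, e) for n, e in corners]
for lat, lon in waypoints:
    print(f"{lat:.6f}, {lon:.6f}")
```

A spherical approximation like this drifts at large offsets; for longer missions a proper geodetic conversion would be preferable.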
### Camera
The current camera is the modified version of the FlyCamOne-2 pocket camera (Figure 4.).
It was necessary to replace the built-in lithium-polymer battery with an external power source to avoid in-flight shutdowns. The FlyCamOne-2 was mounted on the left wing of the airplane to avoid obstacles (e.g. the fuselage) in the line of sight of the lens (Figure 5). It has several recording modes, e.g. continuous video and an endless photo-capturing mode with an interval of four seconds. The head of the camera with the lens can be steered manually or with a servo, which makes capturing oblique photographs very easy. A small accessory, a holder for the camera, can also be bought; with this holder the camera is easily mountable on different surfaces, such as the wings or fuselage of a model airplane. Four electrical pins were installed on the holder: two for charging the camera and two for triggering it with a spare channel on the transmitter. The head of the camera with the optics can be turned with the help of a servo installed at the end of the holder. During the field test the head was fixed in a vertical position to take near-vertical photographs or videos.
The technical properties are shown in Table 6.
## 3 Field Test
The first real test flights were carried out at an undiscovered archaeological site near Piliscsev (Hungary), in the Pilis mountains, during a field trip originally organised in June for tectonic measurements.
The presence of different peoples (e.g. Romans, Gepids and Avars) from different ages in this area is well known to Hungarian researchers, but the investigated place was undiscovered until now. During the field trip a sharp crop-mark was recognized on a meadow near the town, and its surveying became very important because of the agricultural use of the meadow and the possibly short lifetime of the crop-mark.
After a rough identification of the crop-mark's coordinates on topographic map sheets, the waypoints and the settings of the flight were prepared and uploaded into the memory of the microcontroller. The maximum flying height was set to 80 m above the ground because of the low resolution of the camera. The trajectory of the UAV consisted of four waypoints, distributed on the approximate corners of the crop-mark. The flight was performed in automatic mode, except for take-off and landing.
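The trade-off between flying height and ground resolution follows from simple pinhole-camera scaling. In the sketch below, the 6.47 mm focal length, the 80 m flying height and the 640-pixel video width come from this paper; the 3.6 mm sensor width is an assumed value typical of small CMOS video sensors, not a figure from the FlyCamOne-2 datasheet.

```python
# Rough ground-sample-distance (GSD) estimate for the 80 m flying height.
flying_height_m = 80.0
focal_length_m = 6.47e-3
sensor_width_m = 3.6e-3      # assumption, not from the camera datasheet
video_width_px = 640

# Pinhole-camera scaling: ground footprint = H * sensor_width / f
footprint_m = flying_height_m * sensor_width_m / focal_length_m
gsd_m = footprint_m / video_width_px

print(f"footprint: {footprint_m:.1f} m, GSD: {gsd_m * 100:.1f} cm")
```

Under this assumed sensor width the estimate comes out at roughly 7 cm per pixel, which is consistent with the 7 cm spatial resolution reported later for the mosaic.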
\begin{table}
\begin{tabular}{|c|c|} \hline
**SPECIFICATIONS** & FlyCamOne-2 \\ \hline Dimensions & 80\(\times\)40\(\times\)18 mm \\ \hline Weight & 37 g \\ \hline Mode & Endless video, Photo, Endless photo, Web camera \\ \hline Resolution & 640\(\times\)480 pixels (video) \\ & 1280\(\times\)1024 pixels (photo) \\ \hline Focal Length & 6.47 mm \\ \hline \end{tabular}
\end{table}
Table 6: Technical specifications of the pocket-camera
Figure 4: The modified FlyCamOne-2 pocket camera
Figure 5: The camera under the left wing
Figure 3: UAV Development Board v3
Photo: Sparkfun Electronics – www.sparkfun.com

Due to the poor light conditions, which caused blurry photos, the camera mode had to be changed to video recording to obtain sharp imagery. The recorded video was converted from the original 23 frames per second to only 0.5 frames per second with the open-source VirtualDub program. The frames of the converted video were then exported as an image sequence; each image was saved in JPEG format (a limitation of the program) with maximum resolution and minimal compression. The resulting almost 200 JPEG files were imported into Microsoft Image Composite Editor (ICE), where it is possible to choose among different stitching methods:
\"Rotating Motion: Use this option when you stand in a single position and rotate your camera about a fixed point. This is how you should shoot most panoramic scenes. The other motions are for less common stitching tasks.
Planar Motion 1: This option computes the best overlap between the images, without performing any skewing or perspective distortion (allowing only translation, rotation, and scaling of the images). Planar Motion 1 is useful for stitching together multiple overlapping flat-bed scans of a large document. It can also be useful if you want to achieve a panography effect (although ICE doesn't have all of the blend modes that you might want for panography).
Planar Motion 2: This option is like Planar Motion 1, but allows skew in addition to translation, rotation, and scaling of the images. This setting is probably the least useful, but can be used if Planar Motion 3 gives poor results.
Planar Motion 3: This option computes the best overlap between images, including perspective distortion in addition to translation, rotation, scaling, and skew. This option is particularly useful for stitching images of a large flat surface, for example a white-board or a gallery wall. As long as the object being photographed is flat you don't need to rotate about a fixed point like most panoramic shots and can move to capture different shots of the flat scene.\" (Microsoft Image Composite Editor manual, 2011)
As the description mentions, the best methods for stitching aerial photos are the Planar Motions, because these methods try to minimize perspective distortion during the mosaicking process.
The stitched image was georeferenced in ER Mapper 2010 with ground control points (GCPs) measured during the field test with a handheld Garmin GPSmap 60Cx. The error of the GPS measurement was 5 m, thanks to the clear visibility of the GPS satellites: the meadow where the crop-mark is located lies near plough-lands on flat terrain. This precision is sufficient for later archaeological surveys, as the methods used (e.g. ditches) do not require geodetic precision. The GCPs are located on the corner points of the crop-mark; six points were measured in total.
The Ortho and Geocoding Wizard tool of ER Mapper was used to georeference the mosaic. After importing the image, the projection system (UTM zone 34) and the mathematical method (linear polynomial) were set. After identifying and marking the control points, the georeferenced image, with 7 cm spatial resolution, was exported in GeoTIFF format. Due to the method used, the non-professional GPS and the way the photos were produced, the maximum absolute error of the georeferenced image reaches 1.5 m. The pattern of the error vectors is irregular: the larger values are seen in the central and south-western parts of the image, and the vectors become shorter towards the north-eastern corner.
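A first-order ("linear polynomial") transform models each map coordinate as an affine function of the pixel position: E = a0 + a1·col + a2·row and N = b0 + b1·col + b2·row. Three GCPs determine the six coefficients exactly; ER Mapper fits the six GCPs of this study by least squares instead. The sketch below uses made-up illustration coordinates, not the actual field measurements.

```python
def solve3(rows):
    """Solve a 3x3 linear system given as rows [a1, a2, a3, b] (Cramer's rule)."""
    def det3(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
                - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
                + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))
    d = det3([r[:3] for r in rows])
    coeffs = []
    for k in range(3):
        ak = [r[:3] for r in rows]      # fresh copy of the coefficient matrix
        for i in range(3):
            ak[i][k] = rows[i][3]       # replace column k with the right-hand side
        coeffs.append(det3(ak) / d)
    return coeffs

# GCPs: (col, row) pixel position -> (E, N) in metres (UTM zone 34, illustrative)
gcps = [((0, 0), (640000.0, 5280000.0)),
        ((1000, 0), (640070.0, 5280001.0)),
        ((0, 800), (640001.0, 5279944.0))]

a = solve3([[1.0, c, r, e] for (c, r), (e, n) in gcps])
b = solve3([[1.0, c, r, n] for (c, r), (e, n) in gcps])

def pix_to_map(col, row):
    return (a[0] + a[1] * col + a[2] * row,
            b[0] + b[1] * col + b[2] * row)
```

With these illustrative GCPs the recovered pixel size (a1 = 0.07 m per column) matches the 7 cm resolution of the exported image; residuals at additional GCPs would give the error vectors described above.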
The original georeferenced image shows the crop-mark and the meadow in true colour, but in this form it is hard to distinguish the plants of the crop-mark from the grass of the meadow. To enhance the visibility of the crop-mark, a false-colour version (Figure 7) was created with the 'Edit Transform Limits' menu of ER Mapper 2010. The lower (under 75) and higher (above 175) intensity values of the red layer were filtered out; the blue and green layers remained in their original state.
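The red-layer filtering can be illustrated with a simple per-pixel transform. The 75/175 thresholds come from the paper; mapping the retained range onto the full 0-255 scale is one plausible interpretation of ER Mapper's 'Edit Transform Limits' behaviour, not a reimplementation of it.

```python
# Suppress red intensities below 75 or above 175, stretch the rest to 0-255.
LOW, HIGH = 75, 175

def transform_red(value):
    """Clip a 0-255 red intensity to [LOW, HIGH] and stretch it to 0-255."""
    if value < LOW or value > HIGH:
        return 0
    return round((value - LOW) * 255 / (HIGH - LOW))

red_row = [30, 75, 125, 175, 200]  # example pixel intensities
print([transform_red(v) for v in red_row])
```

Applied to the red band only, this darkens grass and soil while stretching the contrast of the crop-mark vegetation, which is the effect visible in the false-colour mosaic.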
Complex archaeological research has not yet begun, but the author hopes that the georeferenced mosaic will provide a helpful base for it.
## 4 Conclusions and Future Works
This paper presented a brief overview of open-source UAV hardware and software, the building of a small fixed-wing UAV based on one of these systems, and the results of its first field test.
The community-based development of the firmware results in a fast and reliable development cycle and facilitates debugging for users who are not experts in programming or electronic engineering.
Despite the non-professional precision of the hardware, the discussed system can stabilize and navigate an airplane along a defined trajectory optimized for photogrammetric surveys (e.g. the flight route of an image block). Currently the precision of the navigation depends on the onboard GPS unit, not on the algorithm used, so further research on this part of the hardware is necessary.
The main research topics in the near future will focus on the sensors and the orthorectification process. The results of the camera used are promising from a methodological point of view, but its resolution and the limited access to its interior orientation parameters decrease its usability during data processing. The next planned camera will be a light (below the payload capacity of the airplane) compact digital camera with higher resolution and known lens parameters. Another main goal is its mechanical or electronic triggering using the built-in trigger feature of the MatrixPilot firmware.
Other developments are also planned in the data workflow, because the simplified method used is not precise enough and causes high relative and absolute errors compared to the spatial resolution of the photos.

Figure 7: The false-colour mosaic of the crop-mark. Green: plants of the crop-mark; Red: grass and soil
## Acknowledgements
The author thanks [PERSON] for the technical contribution during the development of the UAV, and [PERSON] and [PERSON] for their helpful information about the discussed crop-mark. Furthermore, the author also thanks the MatrixPilot Team and the DIYDrones.com community for their work and help.
The European Union and the European Social Fund have provided financial support to the project under the grant agreement no. TAMOP 4.2.1./B-09/1/KMR-2010-0003.
## References
* Principles, techniques and geoscience applications_. Elsevier, Amsterdam
* [2][PERSON], [PERSON], [PERSON], [PERSON], 2007. Mapping of archaeological areas using a low-cost UAV the Augusta Bagiennorum test site. In: _Proceedings of the XXI International CIPA Symposium_, XXI International CIPA Symposium, 01-06 October 2007, Athens, Greece.
* [3][PERSON], 2004. A mini Unmanned Aerial Vehicle (UAV): System overview and image acquisition. In: _The International Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences_, Vol. XXXVI-5/W1, International Workshop on Processing and Visualization using High Resolution Imagery, 18-20 November 2004, Pitsanulok, Thailand.
* [4][PERSON], [PERSON], [PERSON], [PERSON], [PERSON], 2010. UAV Photogrammetry Project [PERSON], Bhutan. SLSA-Jahresbericht 2009, SLSA, Zurich, Switzerland, pp. 61-70.
* [5]DIYDrones: www.diydrones.com (accessed 20 July, 2011)
* [6]MatrixPilot: code.google.com/p/gentlenav/ (accessed 20 July, 2011)
* [7]Microsoft Image Composite Editor manual: research.microsoft.com/en-us/um/redmond/groups/ivm/ice (accessed 25 September, 2011)
* [8]Paparazzi: paparazzi.enac.fr/wiki/Main_Page (accessed 20 July, 2011)
*Source: J. Mészáros, 2012. Aerial Surveying UAV Based on Open-Source Hardware and Software. ISPRS. https://doi.org/10.5194/isprsarchives-xxxviii-1-c22-155-2011 (CC-BY).*
RELIEF EFFECT CORRECTION ON LANDSAT IMAGERY FOR FOREST APPLICATIONS USING DIGITAL TERRAIN MODELS

[PERSON] - Researcher
[PERSON] - Researcher
[PERSON] - Researcher

Instituto de Pesquisas Espaciais - INPE, P.O. Box 515, 12201 - Sao Jose dos Campos - SP - Brazil

COMMISSION III
ABSTRACT
Many times forests are planted on mountainous terrain. In this case the remotely sensed radiation at the LANDSAT sensor is highly dependent on the terrain slope, making an accurate classification of the multispectral data difficult. With a slope map derived from a digital terrain model (DTM), and information about the positions of the sun and the satellite sensor, it is possible to correct the image for the relief effect by means of an illumination model. This work studies quantitatively the effect of this correction on the image classification, for a Eucalyptus ssp. forest near Jambeiro, SP, Brazil.
INTRODUCTION
The basic assumption underlying the use of TM imagery for forest inventory is terrain flatness. Many times, however, forests are located in mountainous regions. If topographical information about the region under study is available, a DTM can be prepared. From a DTM, acquired from geographical charts or aerial photographs, a slope map can be obtained, giving the direction of the surface normal and the slope. This information, combined with the location of the sun and the position of the sensor at the time of the satellite pass, together with an illumination model, allows one to correct the image for relief effects.
The need for topographic effect correction has been recognized at least since 1974 ([PERSON], 1974). [PERSON], [PERSON] and [PERSON] (1980) found the Lambertian illumination model valid for slopes of less than 25 degrees and effective illumination angles of less than 45 degrees for mid-latitude ponderosa pine forests. It was asserted by [PERSON] and [PERSON] (1980) that one cover type could be associated with a wide range of pixel values, due to variations in slope angle and aspect. [PERSON] and [PERSON] (1981) found the radiometric correction limited, due to the practical difficulty of exactly determining all reflection properties in a mountainous environment. [PERSON], [PERSON], and [PERSON] (1981) asserted that the Lambertian and modified Lambertian models actually increased the topographic effect, while a non-Lambertian model reduced it. Good results were reported by [PERSON], [PERSON], and [PERSON] (1986) by means of an atmospheric correction combined with a Lambertian illumination model, for a forest near Kanazawa city, Japan. [PERSON] (1987) found only a weak relationship between LANDSAT (MSS and TM) response variation and gently undulating terrain parameters within cultivated fields and forests in Sweden. As can be seen from this incomplete survey, there is no consensus on the role of radiometric correction of the topographic effect in the image classification of forests.
This work adds new data to the problem, since the correction was tested for a forest located in the tropics (23 degrees South), on undulating terrain, with a thick, uniform vegetal cover (Eucalyptus ssp.).
THE TEST AREA
Taking advantage of existing recent aerial photographs, a 1:25,000 geographical chart, and easy access roads (for field trip verification), a reforested region near Jambeiro, Sao Paulo State, Brazil, was selected as the test area. A LANDSAT TM image taken on December 9th, 1985 was used for analysis purposes. This image was taken in summer in the Southern Hemisphere, with a sun elevation of 56 degrees and an azimuth of 97 degrees. The forest size is roughly 30 square kilometers. The TM channels used were 3, 4 and 5, which are proper for forest studies. Figure 1 shows a TM image in near-real-color composition of the region under study; north is up, the forest being seen in green. A different composition, in false color, is presented in Figure 2.
CORRECTION PROCEDURE
The idea behind the correction procedure is to obtain a mathematical model of the terrain and, for each pixel of the original image, correct the illumination, obtaining a corrected pixel value.
Starting from the aerial photographs of the region, a contour map was obtained at a 1:20,000 scale. From this map, using the Digital Terrain Model program developed at INPE ([PERSON], [PERSON], [PERSON], 1987), a Digital Elevation Model (DEM) of the region was produced. Figure 3 presents a perspective view of this model, obtained from nearly 10,000 samples of the contour map, with the observer at an elevation of 30 degrees and an azimuth of 150 degrees. Another program, using the DEM as input, calculates the surface normal, which is used by the illumination model program to correct the pixel values. An interpolation was performed in such a manner that each image pixel matches the quadrangles of the DEM horizontal planar grid.
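Deriving the surface normal from a gridded DEM can be sketched with central finite differences: the slope equals the zenith angle of the surface normal and the aspect its azimuth, the two angles used later in the incidence-angle formula. This is a minimal sketch, not the INPE program; the 30 m grid spacing is an illustrative assumption, not a value stated in the paper.

```python
import math

CELL = 30.0  # grid spacing in metres (assumption)

def slope_aspect(dem, row, col):
    """Return (slope_deg, aspect_deg) at an interior DEM cell.

    Aspect is the azimuth of the downslope direction (0 = north, clockwise).
    """
    dz_dx = (dem[row][col + 1] - dem[row][col - 1]) / (2 * CELL)  # east gradient
    dz_dy = (dem[row - 1][col] - dem[row + 1][col]) / (2 * CELL)  # north gradient
    slope = math.degrees(math.atan(math.hypot(dz_dx, dz_dy)))
    aspect = math.degrees(math.atan2(-dz_dx, -dz_dy)) % 360.0
    return slope, aspect

# A plane dipping to the east: elevation falls 3 m per 30 m cell going east.
dem = [[-3.0 * c for c in range(5)] for _ in range(5)]
slope, aspect = slope_aspect(dem, 2, 2)
print(f"slope: {slope:.2f} deg, aspect: {aspect:.1f} deg")
```

For the east-dipping test plane the routine recovers a gradient of 0.1 (about 5.7 degrees of slope) and an aspect of 90 degrees (due east), as expected.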
The original LANDSAT TM channels (3, 4 and 5) were then classified by means of a maximum likelihood algorithm. In a mountainous region it is expected that even an area with a uniform vegetal cover will present different classes, making automatic classification difficult, due to the relief effect. For the present case, a field trip to the test area confirmed the uniformity of the site's vegetal cover.
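The maximum likelihood decision rule assigns each pixel to the class whose training statistics give it the highest likelihood. The toy sketch below uses a diagonal (per-band independent) Gaussian per class; a full implementation, like the one used in the study, would estimate complete covariance matrices from the training samples. The class names and statistics are invented for illustration.

```python
import math

def gaussian_log_likelihood(x, mean, std):
    """Log-likelihood of pixel vector x under independent per-band Gaussians."""
    return sum(-0.5 * ((xi - m) / s) ** 2 - math.log(s * math.sqrt(2 * math.pi))
               for xi, m, s in zip(x, mean, std))

# class -> (mean per band, std per band) for TM bands 3, 4, 5 (invented values)
classes = {
    "forest":  ((20.0, 80.0, 40.0), (4.0, 8.0, 6.0)),
    "pasture": ((35.0, 60.0, 70.0), (5.0, 7.0, 8.0)),
    "shadow":  ((10.0, 25.0, 15.0), (3.0, 5.0, 4.0)),
}

def classify(pixel):
    """Assign the pixel to the class with the highest log-likelihood."""
    return max(classes, key=lambda c: gaussian_log_likelihood(pixel, *classes[c]))

label = classify((22.0, 78.0, 43.0))
print(label)
```

The relief effect matters precisely because a shaded forest pixel drifts towards the "shadow" statistics, which is why the illumination correction is applied before classification.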
As far as the illumination model is concerned, the Lambertian one was chosen, in spite of its intrinsic limitations (for near-grazing angles) and the problems pointed out by [PERSON], [PERSON], and [PERSON] (1981), such as overcorrection. However, since the system used was microcomputer based, and one of the goals was to reduce computational complexity, it was decided first to check the potentialities of this illumination model. It is planned, in the near future, to add more sophisticated illumination schemes such as [PERSON] (1941), [PERSON] (1971), [PERSON] (1975), and [PERSON] (1987). In addition, [PERSON], [PERSON], and [PERSON] (1986), and [PERSON], [PERSON] and [PERSON] (1980), reported good results with a Lambertian model.
The LANDSAT scanner radiance, for a given wavelength \(\lambda\) and incidence and exitance angles i and e, is, according to [PERSON], [PERSON], and [PERSON] (1980):
L(\(\lambda\), e, i) = C(\(\lambda\)) E\({}_{0}(\lambda)\) T(\(\lambda\)) \(\rho(\lambda\), e, i) cos i / \(\pi\)

where \(\pi\) is the constant 3.1415926..., C(\(\lambda\)) is the LANDSAT scanner calibration factor, E\({}_{0}(\lambda)\) is the solar irradiance, T(\(\lambda\)) is the atmospheric transmittance, and \(\rho(\lambda\),e,i) is the surface (target) reflectance.
For a Lambertian surface \(\rho(\lambda\),e,i) = \(\rho(\lambda)\), i.e., the reflectance is independent of the incidence and exitance angles. Since C, E\({}_{0}\) and \(\pi\) are known constants, and T can be considered approximately constant for a limited bandwidth centered around a given \(\lambda\), then

L(\(\lambda\)) \(\propto\) \(\rho(\lambda)\) cos i

or

L(\(\lambda\)) = L\({}_{n}(\lambda)\) cos i

where L\({}_{n}(\lambda)\) is the radiance that would be measured for normal solar incidence (cos i = 1).
The incidence and exitance angles can be computed from the scene geometry:

cos i = cos \(\theta_{S}\) cos \(\theta_{n}\) + sin \(\theta_{S}\) sin \(\theta_{n}\) cos(\(\phi_{S}\) - \(\phi_{n}\))

and cos e = cos \(\theta_{n}\) (for a nadir-pointing sensor),

where \(\theta_{S}\) is the solar zenith angle, \(\theta_{n}\) is the surface normal zenith angle, \(\phi_{S}\) is the solar azimuth angle, and \(\phi_{n}\) is the surface azimuth, or aspect angle.
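The correction implied by these two formulas can be sketched in a few lines of plain Python (an illustrative reimplementation, not the original INPE program; the clamp near grazing angles is an added assumption reflecting the Lambertian model's stated limitation):

```python
import math

def cos_incidence(sun_zenith, sun_azimuth, slope, aspect):
    """cos i = cos(theta_s)cos(theta_n) + sin(theta_s)sin(theta_n)cos(phi_s - phi_n); angles in degrees."""
    ts, tn = math.radians(sun_zenith), math.radians(slope)
    ps, pn = math.radians(sun_azimuth), math.radians(aspect)
    return math.cos(ts) * math.cos(tn) + math.sin(ts) * math.sin(tn) * math.cos(ps - pn)

def lambertian_correct(radiance, sun_zenith, sun_azimuth, slope, aspect, min_cos=0.05):
    """Normalize radiance to normal incidence: L_n = L / cos i (cos i clamped near grazing angles)."""
    ci = max(cos_incidence(sun_zenith, sun_azimuth, slope, aspect), min_cos)
    return radiance / ci

# A horizontal pixel is corrected for the solar zenith angle only:
print(lambertian_correct(100.0, 30.0, 150.0, slope=0.0, aspect=0.0))     # ≈ 115.47
# A slope facing the sun (theta_n = theta_s, same azimuth) has cos i = 1, so no change:
print(lambertian_correct(100.0, 30.0, 150.0, slope=30.0, aspect=150.0))  # 100.0
```

In the paper the same computation runs per pixel, with slope and aspect derived from the interpolated DEM normals.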
For a Lambertian model to be used it is only necessary to know the terrain normal and the solar position. For more sophisticated models it is also necessary to compute the position of the sensor with respect to the terrain normal (exitance angle). In this study the satellite images were taken as acquired by nadir-looking sensors, although the exitance angle was not used for the illumination correction.

Fig. 1 - Thematic Mapper (TM) near real color composition image of the test area. North is up.

Fig. 3 - Digital Elevation Model of the test area. Observer position: elevation 30 degrees, azimuth 150 degrees. Vertical scale exaggerated for clarity.

Fig. 4 - False color TM image with a cursor indicating the region where the correction was performed and the sample points for the maximum likelihood classification (3 classes, 4 samples for each class).
Another distortion produced by the topography is the geometric distortion, described by [PERSON], [PERSON], and [PERSON] (1982), and expressed as:
\(\Delta\)g = h tan \(\theta\)

where \(\theta\) is the sensor looking angle, h is the elevation above the ground reference, and \(\Delta\)g is the ground range error.

For the present test area \(\theta\) is smaller than 4 degrees and h < 1000 m, keeping the ground range error at the subpixel level.
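As a quick numerical check of this formula (the values below are illustrative, not measurements from the paper):

```python
import math

def ground_range_error(height_m, look_angle_deg):
    """Delta_g = h * tan(theta): horizontal displacement caused by relief at an off-nadir look angle."""
    return height_m * math.tan(math.radians(look_angle_deg))

# Near nadir the displacement is small compared with the 30 m TM pixel:
print(ground_range_error(300.0, 1.0))  # ≈ 5.24 m
```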
RESULTS AND CONCLUSIONS
In order to assess the relief effect correction on the image classification, the maximum likelihood classifications were compared before and after the correction. Figure 4 shows the sample positions used in both classifications, as well as the region of interest adopted (inside the cursor area). The classification of the original scene identified three classes, mainly on the forest limits, as seen in Figure 5. The darkest class (class 1) was painted blue, the medium shaded class (class 2) pink, and the clearest class (class 3) white, on the reforested area, seen in red in Figure 2.
The corrected image is presented channel by channel in Figure 6, and as a color composition in Figure 7. Note the difference between this last figure and Figure 1. The image degradation is thought to be due to the linear interpolation on the DEM and the Lambertian illumination model.
TABLE 1 - AREAS FOR EACH CLASS

| CLASS | ORIGINAL | CORRECTED |
| --- | --- | --- |
| 1 (BLUE) | 3.953 sq. km | 4.568 sq. km |
| 2 (PINK) | 8.397 sq. km | 7.252 sq. km |
| 3 (WHITE) | 4.548 sq. km | 19.349 sq. km |

Fig. 5 - Original image classified with a maximum likelihood algorithm with the sample locations shown on Figure 4. Three classes, 4 samples each.
Fig. 6 - The 3 channels of the scene corrected for the relief effect.
Fig. 7 - Corrected image. Color composition.
A comparison of Figures 1 and 7 shows that the forest is more homogeneous in the original image. There is also a striking difference between these images. However, when a comparison is made between the classified images, the corrected image is not visually very different from the original one. The forest contours are nearly the same. If a better classification were achieved, an indication would be a predominance of one class (plane areas) over the others (sloped areas). According to Table 1 this has occurred, but these preliminary results have to be viewed with care. The corrected image classification has picked a larger area of Eucalyptus ssp., some of it outside the forest. Those were Eucalyptus ssp. planted on flat terrain, confirmed by a field trip. There is almost no flat terrain inside the forest. Class 3 has grown from 4.548 sq. km to 19.349 sq. km. Class 2 has shrunk a little, while Class 1 has grown slightly.
Further work remains to be done, such as the inclusion of better illumination models, the use of a yet more accurate DEM, the use of better interpolators for the horizontal grid (nearest neighbours were used) and for refining the DEM, and the determination of more accurate reflection parameters for the species under study. This work shows that, using a simple model on a microcomputer based system, the classification can be improved. For future work the use of summer and winter images and a better selection of samples for the classification algorithms is recommended.
ACKNOWLEDGEMENTS
The authors wish to express their thanks to the Institute for Space Research - INPE (Brazil), and the Brazilian Institute for Forestry Development - IBDF (Brazil), who financed the project FLOVAL. Thanks are also due to the many individuals who helped us in many ways, especially [PERSON], [PERSON] and [PERSON].
REFERENCES

[PERSON] "Illumination for Computer Generated Pictures". Communications of the ACM, 18 (6): pp. 311-317, June 1975.

[PERSON]; [PERSON]; [PERSON] "Geracao automatica de mapas de isolinhas utilizando microcomputador". XX Congresso Nacional de Informatica, SUCESU, São Paulo, SP, 31/8 a 06/9/87 (published in the proceedings, pp. 425-430).

[PERSON] "[PERSON] Shading of Curved Surfaces". IEEE Transactions on Computers, C-20 (6): pp. 623-628, June 1971.

[PERSON] "The Topographic Effect on [PERSON] Data in Gently Undulating Terrain in Southern Sweden". Int. J. Remote Sensing, 8 (2): pp. 157-168, 1987.

[PERSON]; [PERSON] "The Topographic Effect on Spectral Response from Nadir-Pointing Sensors". Photogrammetric Engineering and Remote Sensing, 46 (9): pp. 1191-1200, Sept. 1980.

[PERSON] "Natural Resources Mapping in Mountainous Terrain by Computer Analysis of ERTS-1 Satellite Data". LARS, Purdue University, West Lafayette, Indiana, p. 124, 1974.

[PERSON]; [PERSON] "Correcting for Anisotropic Reflectances in Remotely Sensed Images from Mountainous Terrains". 1981 Machine Processing of Remotely Sensed Data Symposium, West Lafayette, Indiana, 1981.

[PERSON]; [PERSON]; [PERSON] "Application of Digital Terrain Data to Quantify and Reduce the Topographic Effect on LANDSAT Data". Int. J. Remote Sensing, 2 (3): pp. 213-230, 1981.

[PERSON]; [PERSON]; [PERSON] "Radiometric Correction Method which Removes both Atmospheric and Topographic Effects from the [PERSON] MSS Data". Proceedings of the 1986 [PERSON], Zurich, Switzerland, 8-11 Sept. 1986.

[PERSON] "The Reciprocity Principle in Lunar Photometry". Astrophys. J., 93, pp. 403-410, 1941.

[PERSON]; [PERSON]; [PERSON] "The Lambertian Assumption and LANDSAT Data". Photogrammetric Engineering and Remote Sensing, 46 (9): pp. 1183-1189, Sept. 1980.

[PERSON]; [PERSON] "Theory for off-specular reflection from roughened surfaces". J. Opt. Soc. of America, 57, pp. 1105-1114, Sept. 1967.
Comparison of Characteristics of BIM Visualization and Interactive Application Based on WebGL and Game Engine
[PERSON]
1 Carleton University - (yuzheng4, ArkounMerchant, zixunxiang, [PERSON])@cmail.carleton.ca
[PERSON]
1 Carleton University - (yuzheng4, ArkounMerchant, zixunxiang, [PERSON])@cmail.carleton.ca
[PERSON]
2 Carleton Immersive Media Studio, Carleton University, 1125 Colonel By Drive Ottawa, Canada - (jlanings, narellano, hormaniuk, sfanj)@cims.carleton.ca
[PERSON]
1 Carleton University - (yuzheng4, ArkounMerchant, zixunxiang, [PERSON])@cmail.carleton.ca
[PERSON]
3 Beijing Huachuang Tonsing Tech Co., Ltd. - [EMAIL_ADDRESS]
[PERSON]
2 Carleton Immersive Media Studio, Carleton University, 1125 Colonel By Drive Ottawa, Canada - (jlanings, narellano, hormaniuk, sfanj)@cims.carleton.ca
[PERSON]
2 Carleton Immersive Media Studio, Carleton University, 1125 Colonel By Drive Ottawa, Canada - (jlanings, narellano, hormaniuk, sfanj)@cims.carleton.ca
[PERSON]
3 Beijing Huachuang Tonsing Tech Co., Ltd. - [EMAIL_ADDRESS]
###### Abstract
How can we make the building information model accessible to all stakeholders on a project? An efficient way to access the building information model is to use the software that created it. However, not all stakeholders are able to use this highly specialized software, due to lack of training and expensive licences; even though some vendors have developed simplified viewers to browse the model, these still fail to provide convenient access for participants from a wide range of backgrounds. The current development of BIM model visualization and interactive applications is mainly based on two technologies: WebGL and game engines. What is the general workflow of WebGL and game engines supporting application development? What are their respective characteristics? What conditions restrict them? No academic papers yet discuss and compare these two types of platforms; that is the subject of this essay. By comparing the workflow and characteristics of BIM visualization and interactive application development based on WebGL and game engines, it can provide a reference for heritage building managers when planning the development of relevant application tools and meet the participation needs of different stakeholders.
The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Volume XLVIII-M-2-2023
29 th CIPA Symposium \"Documenting, Understanding, Presenting Cultural Heritage:
Humanities and Digital Technologies for Shaping the Future", 25-30 June 2023, Florence, Italy
## 1 Interactive Application of Building Information Models Helps Stakeholders Participate in Heritage Conservation
[PERSON] proposes "The Participatory Turn That Never Was" in the field of architectural design in the fourth chapter of his The Second Digital Turn: "the customer, client, or any other stakeholder would be called on to intervene and participate in the design process, to 'customize', or co-design, the end product within the framework and limits set by the designers or administrators of the system." He considered this to be the turn brought about by the rise of the participatory spirit of Web 2.0 since the new millennium ([PERSON], 2017). It also hints at how heritage building conservation practice could embrace the digital turn. The conservation process is an informed decision-making process; this means involving many stakeholders (people, institutions, and government agencies) in the process and working with them to understand why a particular place is important and to identify what physical evidence needs to be preserved ([PERSON], [PERSON], and [PERSON], 2007). Semantically rich three-dimensional (3D) models such as building information modelling (BIM) are increasingly used in digital heritage. They provide the required information to varying stakeholders during the different stages of the historic building's lifecycle, which is crucial in the conservation process ([PERSON], [PERSON], and [PERSON], 2017).
How can we make the building information model accessible to all stakeholders on a project? An efficient way to access the building information model is to use the software that created it. However, not all stakeholders will be able to use this highly specialized software, due to lack of training and expensive licences, and even though some vendors have developed a simplified version of the viewer1 to browse the model, it still fails to provide convenient access to these models for participants from a wide range of backgrounds.
Footnote 1: Such as Autodesk Viewer. It is a free browser application that lets you upload, view, and share designs: [https://viewer.autodesk.com/designviews](https://viewer.autodesk.com/designviews)
Many studies have focused on this issue and made corresponding explorations. For example, [PERSON] et al. (2019) proposed that one of the modern methods adopted to achieve this goal is to use a structured web platform. With the help of the BIM3DSG platform, they established a set of reality-based 3D models of the chapels linked to a database (DB) of information for the world cultural heritage site "Sacri Monti of Piedmont and Lombardy". This was used to help update a section of the next UNESCO Periodic Reporting (2018-2024) and will contribute to shaping "standard" best practices to monitor and safeguard the Sacri Monti complex and all similar case studies. [PERSON] et al. (2019) used an interactive web presentation portal of high-resolution 3D models enriched by historical and archival content, from the digitization procedure applied to collection objects to the digitization process of related data and information. From 2014 to 2017, Beijing Guowenyan Information Technology Co., Ltd. entrusted Beijing Huachuang Tonsing Tech Co., Ltd. with completing the digital assets display system for the colour painted statues and murals of Shuanglin Temple, which realized the visualization of dense point clouds and photogrammetry mesh models through WebGL (**Figure 1**). The above three cases are all BIM model visualization tools and interactive applications developed directly in the browser based on WebGL.
However, the more common practical application for dealing with 3D models and making them available to a wider range of people is 3D games. Therefore, the use of game engines as a solution is one of the research options. For example, [PERSON] et al. (2019) identified six heritage spaces of the Centre Block of the Canadian Parliament which had been previously documented and modelled by the Carleton Immersive Media Studio2 (CIMS) and were prepared for Unity 3D3, enabling their later use in a storytelling experience in VR. [PERSON] et al. (2021) introduced a binational research team from Mexico and the U.S., made up of historians, architects and animation engineers, that created the historical settings, characters, and interactivity necessary to offer the public an immersive experience revisualizing a historical event that occurred on March 11, 1554, in the city today known as Antigua, Guatemala. They imported the 3D model created with Autodesk 3ds Max and AutoCAD and the photogrammetry model generated with Metashape into Unreal to reproduce the historical events of that time. [PERSON] et al. (2021) also described a realistic representation of Asinou Church in a 3D video game environment based on Unity. They merged the H-BIM categories such as doors, windows, roofs and floors in Rhinoceros 3D, creating a complete 3D model of the monument. Then the model was exported to Autodesk FBX format and imported into Unity.
Footnote 2: [https://cims.carleton.ca/#home](https://cims.carleton.ca/#home)
Footnote 3: Unity is a cross-platform game engine developed by Unity Technologies.
From the above cases, we can see that the current development of BIM model visualization and interactive applications can be based on two platforms: WebGL and Game Engines. These case studies amply illustrate how their respective 3D rendering tools can meet the needs of a wider range of stakeholders in heritage conservation to participate in decision-making. But what is the general workflow of WebGL and Game Engines supporting application development? What are their respective characteristics? What conditions restrict them? No relevant academic papers discuss and compare these two types of platforms, although in the global geomatics industry GIM International has covered emerging web and game engine technology for 3D digital twins4.
Footnote 4: [https://www.gim-international.com/content/article/emerging-web-and-game-engine-tech-for-3d-twin](https://www.gim-international.com/content/article/emerging-web-and-game-engine-tech-for-3d-twin)
[https://doi.org/10.5194/isprs-archives-XLVIII-M-2-2023-1671-2023](https://doi.org/10.5194/isprs-archives-XLVIII-M-2-2023-1671-2023) | Author(s) 2023. CC BY 4.0 License.
3D geometric data are preprocessed as glTF7, which minimizes the size of 3D assets and the runtime processing needed to unpack and use them. Non-geometric data, that is, attribute data, are exported as JavaScript Object Notation (JSON) files. JSON is a lightweight data-interchange format. Element IDs link the two types of data. Then the 3D geometric data is further developed with Three.js8 and rendered in real time according to customer needs through WebGL. The attribute data is likewise called on demand by matching the element ID. In the CDC Digital Twin project, geographic information data comes from map renderers such as Mapbox GL JS and OpenStreetMap, integrated into the application by JavaScript programming and finally output to users through the Web browser (Figure 3).
Footnote 7: glTF is a royalty-free specification for the efficient transmission and loading of 3D scenes and models by engines and applications.
In the CDC Digital Twin project, the IFC.js library has played a huge role. IFC.js is the first JavaScript library fully dedicated to parsing IFC files so they can be displayed and manipulated in any web browser. It was released in December 2020. Its mission is to provide AEC professionals with easy and free methods to build their own BIM tools. IFC.js provides a viewer with examples of how to create your own BIM application: scene navigation, material changes, element selection by clicking, section planes, etc.
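The element-ID join between glTF geometry and JSON attribute records can be sketched without any rendering library; the field names below are hypothetical illustrations, not the CDC project's actual schema:

```python
import json

# Attribute records as they might be exported from the authoring tool to JSON (hypothetical schema).
attributes_json = json.dumps([
    {"elementId": "wall-042", "material": "brick", "fireRating": "2h"},
    {"elementId": "door-007", "material": "oak", "fireRating": "45min"},
])

# Index attributes by element ID once, then resolve on demand (e.g. when the user clicks a mesh).
attr_index = {rec["elementId"]: rec for rec in json.loads(attributes_json)}

def attributes_for(picked_element_id):
    """Look up the non-geometric data matching the ID carried by the picked glTF node."""
    return attr_index.get(picked_element_id, {})

print(attributes_for("door-007")["material"])  # oak
```

Keeping geometry and attributes in separate files, joined only by ID, is what lets the attribute side be extended or corrected without re-exporting the 3D assets.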
### CDC INCA project workflow based on Unreal Engine
The CDC-INCA project, namely CDC: Interactive "Digital" Campus Application, is another attempt by the CIMS Lab to develop a visual and interactive application of the BIM model of campus buildings in Revit format on the basis of the DCI project. This project focuses on making the building information model accessible to all stakeholders on a project. The solution is a Streamable Assets Viewer (**Figure 4**). The viewer is built in Unreal Engine and accessible in a browser via Pixel Streaming. With Pixel Streaming, we run a packaged Unreal Engine application on a local server or a server in the cloud, along with a small stack of web services that are included with the Unreal Engine. The final result of the project is very similar to a video game. In a virtual 3D scene with rich content, we can interact with the model through the UI, load the campus buildings in levels, look at a building in the section box, annotate by location, view metadata, etc.
**Figure 5** is a diagram illustrating the workflow of the project. Starting from the Revit model, through model reorganization, each building is exported as Unreal Datasmith files based on geographic location information. The index address relationship of the exported data files is established through a PostgreSQL relational database. The UI design was also carried out, and the menu functions in the application were implemented using Unreal Widget Blueprints. The project leader established the various settings of the Unreal project, loaded the data on the CIMS server, integrated it into a BIM Viewer application, and finally streamed it to the Web client through Pixel Streaming, with server interaction controlled remotely through menu options or achieved equally through a mobile device.
Figure 3: WebGL based interactive application workflow - BIM / GIS
Figure 2: The CDC digital twin browser
### Comparison of two Approaches
#### 2.3.1 Rendering features, or visualization capabilities:
Many of the most advanced visualization techniques are driven by and applied in the video games industry. Hence, UE can provide AAA-level game visual effects, which are not easy to achieve through WebGL. This is because WebGL is a lower-level general technology and standard than game engines: frameworks like Three.js, Babylon.js and PlayCanvas are needed on top of it to help developers achieve complex rendering effects, while a game engine such as Unreal is a software product, that is, a mature commercial development built on top of the underlying rendering source code, so that users can more easily obtain better rendering effects. Therefore, when people with the same professional ability use these two approaches to develop applications, the game engine is significantly better than WebGL in terms of visual experience.
For the performance experience, WebGL and UE have their own advantages and disadvantages. WebGL has some delays when loading larger models, while UE is faster. For example, the overall campus bird's-eye view model of CDC Digital Twin was opened on a laptop whose GPU is an NVIDIA GeForce GTX 1050. The test shows that the average time to load the model is two minutes (**Figure 2**), while the same model takes only a few seconds when opened on the CIMS Lab server through Pixel Streaming. Since WebGL applications need to transfer 3D models to the user's local web browser and render them in real time through the GPU of the user terminal, the model loading time is affected by the scale of the model data, the GPU performance of the local device, and the network transmission speed. The game engine directly uses the server for loading and rendering, and the performance of the server is generally much higher than that of the client's local terminal. The professional rendering capability of the game engine also speeds up model visualization. Nevertheless, to make good use of the limited server resources, a low-LoD model mass is still established for the complex building model in the CDC INCA project, which saves computing resources when loading the bird's-eye view model of the entire campus. Building different levels of detail (LoD) models and loading the most economical model for visualization according to the needs of the scene is a commonly used method to improve the performance experience in both approaches.
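The idea of "loading the most economical model according to the needs of the scene" can be sketched as a simple distance-based LoD chooser (plain Python; the distance thresholds are illustrative assumptions, not values from either project):

```python
def select_lod(camera_distance_m, thresholds=(50.0, 200.0, 1000.0)):
    """Return an LoD index: 0 = full detail close up, len(thresholds) = coarsest mass model."""
    for lod, limit in enumerate(thresholds):
        if camera_distance_m <= limit:
            return lod
    return len(thresholds)

print(select_lod(10.0))    # 0: walking up to a building loads the finest geometry
print(select_lod(5000.0))  # 3: a campus bird's-eye view stays on the low-LoD mass model
```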
#### 2.3.2 Data integration and management capability:
In terms of data integration and management capability, WebGL separates the 3D model and attribute data: the data can be expanded and modified, and data integration is flexible. In the UE process, however, the attribute data is exported together with the model file, and the ability to integrate and manage attribute information is weak. Data integration is at the core of a meaningful application and can be seen as a key enabler for building data analysis and the creation of digital twins. By generating a semantic information model of a building complex, integrating attribute information (e.g., material, performance, system function data) or other additional information (e.g., age, value, historical maintenance, etc.), the BIM Viewer can be more than just a visualization of a built environment. Furthermore, an increasing amount of data is being collected using real-time sensors related to traffic, weather and building HVAC systems, for example. The BIM Viewer can act as a platform for integrating and visualizing these data sources.
#### 2.3.3 Interoperability and flexibility of use:
Regarding interoperability and flexibility of use, neither approach requires software installation, and the development software can basically be regarded as open source. WebGL is capable of serving large-scale global users; in contrast, UE online users are limited by server capacity, and the ability to support a large number of public services is weak. WebGL can be completely open source, and its "openness" plays an important role in flexibility of use; however, compared with commercial software, there may be problems such as slow update iteration, weak componentization, and low standardization. UE, on the other hand, is professionally maintained by commercial companies and open source to a certain extent, with a stable development roadmap.

Figure 4: Streamable assets viewer demo of CDC-INCA

Figure 5: Unreal based interactive application workflow of CDC-INCA
#### 2.3.4 Support for various 3D assets:
In BIM visualization and interactive applications, not only the parametric building information model but also many other types of 3D assets need to be visualized and interactively operated. These 3D assets include dense point clouds acquired by 3D laser scanners (e.g., LAS, E57), large textured mesh models obtained from photogrammetry and other professional 3D modelling software (e.g., OBJ, FBX, glTF), different formats of building information models (e.g., IFC), etc. Furthermore, support for flexible data formats that are suitable for several asset types, such as 3D Tiles, is becoming increasingly important.
WebGL fully supports different types of 3D assets. Beijing Guowenyan Information Technology Co., Ltd. once entrusted Beijing Huachuang Tonsing Tech Co., Ltd. with completing the digital assets display system for the colour painted statues and murals of Shuanglin Temple (**Figure 1**), which realized the visualization of dense point clouds and photogrammetry mesh models through WebGL. Rendering mesh models and point cloud models is different from rendering BIM models. Meshes and point clouds are discrete and their data volume is particularly large, so a network rendering method similar to that used for maps or satellite imagery is adopted, in which the data are graded, divided into blocks, and loaded in real time. A typical approach is to cut the data into blocks according to spatial relationships, such as octrees, graded according to the density of the data; high-level LoD blocks perform data thinning. In this way, when the client browser application loads and renders, the blocks and block levels to be loaded can be calculated in real time according to the position of the camera during interaction, and then downloaded and rendered. Unreal Engine has a rich plug-in ecosystem and an active application market; driven by commercial development, various plug-ins can easily support various types of 3D assets such as dense point clouds and photogrammetric mesh models too.
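The camera-dependent block selection described above can be sketched as a depth-first traversal of a spatial tree that refines a block only while its apparent size exceeds a threshold (a minimal Python sketch under assumed geometry; production viewers use a comparable screen-space-error metric):

```python
from dataclasses import dataclass, field
from math import dist

@dataclass
class Block:
    center: tuple                 # (x, y, z) block centre in metres
    size: float                   # edge length in metres
    children: list = field(default_factory=list)

def blocks_to_load(block, camera, sse_threshold=0.05):
    """Refine a block into its children only while its apparent size
    (edge length / camera distance) still exceeds the threshold."""
    apparent = block.size / max(dist(block.center, camera), 1e-6)
    if apparent <= sse_threshold or not block.children:
        return [block]            # coarse enough (or a leaf): render this block
    out = []
    for child in block.children:
        out.extend(blocks_to_load(child, camera, sse_threshold))
    return out

# Hypothetical two-level octree: a 100 m root with eight 50 m children.
root = Block((0, 0, 0), 100.0,
             [Block((sx * 25, sy * 25, sz * 25), 50.0)
              for sx in (-1, 1) for sy in (-1, 1) for sz in (-1, 1)])
far_view = blocks_to_load(root, camera=(5000, 0, 0))   # 1 coarse block
near_view = blocks_to_load(root, camera=(120, 0, 0))   # 8 refined leaf blocks
```

Only the returned blocks need to be downloaded, which is what keeps the transfer volume bounded for discrete point-cloud and mesh data.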
#### 2.3.5 Network, hardware, and software support requirements:
WebGL needs to load the model and application data to the client, while UE streams rendered pixels from the server; the table below summarizes the characteristics of the two approaches.

| Characteristics | WebGL | UE / UE Cloud |
| --- | --- | --- |
| Visual Experience | Developers are required to have certain aesthetic skills to design visual effects. | Professional game rendering engine capable of providing AAA-level game visual effects. |
| Performance Experience | Fast, lightweight, no need to compile; simple 3D rendering performs better. | Suitable for large and medium-sized scenarios, supports advanced interaction; performance is mainly related to network bandwidth. |
| | Opens up the browser and graphics card GPU channel through OpenGL. | UE5 supports breakthrough technologies such as Nanite and Lumen and can directly generate SPIR-V without GLSL. |
| | Supports separate rendering of Canvas and DOM, which helps keep the logical structure clear with a small performance improvement. | Based on pixel streaming, client operations are transmitted in real time; rendering is generally not layered on the client side. |
| | There may be a long delay in scene loading time depending on the model and data volume loaded locally. | Scene load times are stable, determined by the resolution of the loaded pixel stream and the response speed of the server; generally faster. |
| Data Integration and Management Capability | The 3D model and attribute data are separated and then matched; the data can be expanded and modified, and data integration is flexible. | The attribute data is exported together with the model file; the ability to integrate and manage attribute information is weak. |
| | There are few scripting language encapsulation libraries and tool chains, and there is a certain component ecology. | Overall solution, complete tool chain, huge component group. |
| | GLSL belongs to the embedded syntax of … | |
## 3 Likely Future Developments of the Two Approaches
The most effective planning and design approach is an integrated one that combines heritage conservation with other planning and project goals and engages all partners and stakeholders early in the process and throughout (Parks Canada, 2010). The conservation of heritage buildings is inseparable from the participation of stakeholders. As building information models are more and more widely used in heritage conservation, to help the widest possible range of stakeholders participate in the heritage decision-making process, it is necessary to allow users to access and use BIM models more easily. Whether an interactive application based on WebGL or on a Game Engine helps to achieve this goal depends on several factors: 1. the characteristics of the heritage building itself; 2. the professional and technical background of the stakeholders; 3. the progress of the heritage affairs demands; 4. the feasibility and cost of technical implementation. This essay mainly discusses the fourth factor. By comparing the workflow and technical characteristics of the CIMS Lab research projects, it summarizes the advantages and disadvantages of the WebGL and Game Engine development approaches that we have experienced in person during the development process.
Some of these advantages and disadvantages are caused by the limitations of the current development of computer technology, such as computing speed, network bandwidth, and data volume; some are determined by the basic characteristics of the different workflows, such as the flexibility of WebGL in non-geometric data processing when model and attribute data are separated, and the data security safeguard brought by Pixel Streaming; and some are related to the professional and technical requirements of the participants. For example, WebGL requires developers to be familiar with computer languages such as JavaScript, while game engines often only require basic programming concepts, implementing application functions through a visual development environment and node-based development tools. With the continuous development of emerging technologies, it can be expected that some basic bottlenecks will gradually become less of a problem. As [PERSON] (2017) pointed out, we no longer need data-compression technologies, no longer need algorithm simplification, and can search rather than sort. The delays caused today by computing speed and network bandwidth may seem unbelievable in the future9. However, characteristics rooted in structural reasons will persist unless the working structure is changed. Issues related to people are much more complex, and it is difficult to predict how people will work and choose their expertise in the future.
Footnote 9: [PERSON], [PERSON]. 2017. The Second Digital Turn: Design beyond Intelligence. Writing Architecture. Cambridge, Massachusetts: The MIT Press. P19
However, this comparison shows that WebGL and game engines are not directly comparable as 3D asset sharing solutions. WebGL is a lower-level, general-purpose technology and standard, while game engines such as Unreal are software products. A game engine could be built on top of WebGL, and an application developed with a game engine may also be released as a WebGL-based tool used directly in the browser. In the future, then, interactive applications will likely mix multiple technologies rather than follow a single technical workflow. More importantly, the feasibility and cost of technical realization is only one factor when considering 3D interactive applications for heritage buildings; actual work must also account for the characteristics of the heritage building itself, the professional and technical background of the stakeholders, and the demands arising from the progress of heritage affairs.
| Aspect | WebGL | Game Engine |
| --- | --- | --- |
| Interoperability and flexibility of use | No software installation (a modern browser suffices) and no plug-ins; directly supported by most mainstream browsers; high flexibility. | With Pixel Streaming, no software installation is required; browser support remains to be evaluated. |
| | Capable of serving large-scale global audiences. | Online users are limited by server capacity; the ability to support a large public service is weak. |
| | Completely open source; this openness plays an important role in flexibility of use. | Professionally maintained by commercial companies; open source to a certain extent, with a stable development roadmap. |
| Support for various 3D assets | Practical cases confirm full support for different 3D asset types, including dense coloured point clouds (e.g. LAS, E57), large textured mesh models (e.g. OBJ, FBX, glTF) and building information models (e.g. IFC). | A rich plugin ecosystem and an active application marketplace support various types of 3D assets. |
| Network, hardware and software support requirements | Model and application data must be loaded to the client, which demands high network bandwidth, although requirements drop once the data are properly subdivided. | Network bandwidth requirements are very stable, determined by the resolution of the output video stream. |
| | Server requirements are low; a mainstream distributed service architecture is adopted, and rendering relies on the client GPU. | Demand for cloud rendering services is high; each server serves 1-3 cloud-rendering clients. |
| | Free software. | Free for public-welfare projects and for game revenue below $1 million; a 5% royalty applies above that threshold. |
| Data security | Data security faces challenges on multiple fronts, including browsers, shaders and renderers. | With Pixel Streaming, there is no security risk for the data. |

Table 1: Comparison summary table
## Acknowledgements
This work was supported by CIMS, a Carleton University research centre, and by the CDC project "Interactive Digital Campus Application" (CDC-INCA) and New Paradigms/New Tools for Heritage Conservation in Canada. The project was funded in part by the Social Sciences and Humanities Research Council (SSHRC) of Canada.
## References
* [PERSON], [PERSON], [PERSON], [PERSON], and [PERSON], 2019: Towards an advanced conservation strategy: a structured database for sharing 3D documentation between expert users. _The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences_, XLII-2/W15, 9-16. https://doi.org/10.5194/isprs-archives-XLII-2-W15-9-2019.
* [PERSON], and [PERSON], 2019: Virtual reconstruction in BIM technology and digital inventories of heritage. _The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences_, XLII-2/W15, 25-31. https://doi.org/10.5194/isprs-archives-XLII-2-W15-25-2019.
* [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], and [PERSON], 2019: Rock art recording in Khatm al Melaha (United Arab Emirates): multirange data scanning and web mapping technologies. _The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences_, XLII-2/W15, 85-92. https://doi.org/10.5194/isprs-archives-XLII-2-W15-85-2019.
* [PERSON], 2017: _The Second Digital Turn: Design beyond Intelligence_. Writing Architecture. Cambridge, Massachusetts: The MIT Press.
* [PERSON], [PERSON], and [PERSON], 2019: Documenting historical research for a collection information modelling: a proposal for a digital asset management system. _The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences_, XLII-2/W15, 519-525. https://doi.org/10.5194/isprs-archives-XLII-2-W15-519-2019.
* [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], and [PERSON], 2019: Automated Multi-Sensor 3D Reconstruction for the Web. _ISPRS International Journal of Geo-Information_, 8(5), 221. https://doi.org/10.3390/ijgi8050221.
* [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], and [PERSON], 2018: Characterizing 3D City Modeling Projects: Towards a Harmonized Interoperable System. _ISPRS International Journal of Geo-Information_, 7(2), 55. https://doi.org/10.3390/ijgi7020055.
* [PERSON], [PERSON], [PERSON], [PERSON], and [PERSON], 2021: Possibilities of spatial correlation of 3D models in an archaeological augmented reality application. _The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences_, XLVI-M-1-2021, 355-359. https://doi.org/10.5194/isprs-archives-XLVI-M-1-2021-355-2021.
* [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], and [PERSON], 2021: Documentation of structural damage and material decay phenomena in H-BIM systems. _The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences_, XLVI-M-1-2021, 375-382. https://doi.org/10.5194/isprs-archives-XLVI-M-1-2021-375-2021.
* [PERSON], [PERSON], [PERSON], and [PERSON], 2007: _Recording, Documentation, and Information Management for the Conservation of Heritage Places: Guiding Principles_. Los Angeles: Getty Conservation Institute.
* [PERSON], [PERSON], [PERSON], [PERSON], and [PERSON], 2019: New realities for Canada's Parliament: a workflow for preparing heritage BIM for game engines and virtual reality. _The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences_, XLII-2/W15, 45-52. https://doi.org/10.5194/isprs-archives-XLII-2-W15-45-2019.
* [PERSON], [PERSON], [PERSON], and [PERSON], 2016: Sharing high-resolution models and information on the web: the web module of the BIM3DSG system. _The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences_, XLI-B5, 703-710. https://doi.org/10.5194/isprs-archives-XLI-B5-703-2016.
* [PERSON], [PERSON], [PERSON], and [PERSON], 2021: The great "auto de fe" at Santiago de los Caballeros, or how to achieve historical empathy with cultural heritage through virtual reality. _The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences_, XLVI-M-1-2021, 633-640. https://doi.org/10.5194/isprs-archives-XLVI-M-1-2021-633-2021.
* [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], and [PERSON], 2020: Interactive Dense Point Clouds in a Game Engine. _ISPRS Journal of Photogrammetry and Remote Sensing_, 163, 375-389. https://doi.org/10.1016/j.isprsjprs.2020.03.007.
---

ISPRS Archives: "Comparison of Characteristics of BIM Visualization and Interactive Application Based on WebGL and Game Engine". Y. Zheng, A. Merchant, J. Laninga, Z. X. Xiang, K. Alshaebi, N. Arellano, H. Romaniuk, S. Fai, D. H. Sun. 2023. https://doi.org/10.5194/isprs-archives-xlviii-m-2-2023-1671-2023. Licence: CC-BY.

---
# Synergistic use of Sentinel-1 and Sentinel-2 time series for poplar plantations monitoring at large scale
[PERSON]
[PERSON]
Centre National de la Propriete Forestiere, Institut pour le Developpement Forestier, Bordeaux, France
[PERSON]
Universite de Toulouse, INRAE, UMR DYNAFOR, Castanet-Tolosan, France
[PERSON]
Universite de Toulouse, INRAE, UMR DYNAFOR, Castanet-Tolosan, France
[PERSON]
Universite de Toulouse, INRAE, UMR DYNAFOR, Castanet-Tolosan, France
###### Abstract
The current context of availability of Earth Observation satellite data at high spatial and temporal resolutions makes it possible to map large areas. Although supervised classification is the most widely adopted approach, its performance is highly dependent on the availability and the quality of training data. However, gathering samples from field surveys or through photo interpretation is often expensive and time-consuming especially when the area to be classified is large. In this paper we propose the use of an active learning-based technique to address this issue by reducing the labelling effort required for supervised classification while increasing the generalisation capabilities of the classifier across space. Experiments were conducted to identify poplar plantations in three different sites in France using Sentinel-2 time series. In order to characterise the age of the identified poplar stands, temporal means of Sentinel-1 backscatter coefficients were computed. The results are promising and show the good capacities of the active learning-based approach to achieve similar performance (Poplar F-score \(\geq\) 90%) to traditional passive learning (i.e. with random selection of samples) with up to 50% fewer training samples. Sentinel-1 annual means have demonstrated their potential to differentiate two stand ages with an overall accuracy of 83% regardless of the cultivar considered.
Keywords: Poplar plantations, Active learning, Large scale, Mapping, SAR, Stand age
## 1 Introduction
Poplar (Populus spp.) is a fast-growing, wood-producing tree considered an important economic resource due to the increasing demand for its by-products, such as lightweight packaging and plywood. In France, poplar cultivation is a key local industry. However, over the last two decades the sector has faced several economic, social and environmental upheavals that have led to a continuous decrease in planted surfaces. The future of poplar depends mainly on areas replanted after harvesting. It is therefore crucial to have spatially explicit information on newly planted and lost areas, which provides essential baseline data for industrial and socio-economic dynamics. Accurate and updated maps of poplar plantations are not yet available at the national scale. The update rate of the French forest database (10 to 20 years) is unsuitable for this species because of its short rotation cycle (15 years on average). The availability of Sentinel data at high spatial and temporal resolutions has provided new opportunities for identifying and characterising poplar plantations over large areas.
Several works have already demonstrated the potential of remotely sensed data for mapping plantations and few have particularly focused on poplars like ([PERSON], [PERSON], 1981), ([PERSON] et al., 1993) and ([PERSON] et al., 2003). Nevertheless, in most cases, the studies were conducted at a local scale and the reported performances were highly dependent on the data used to train and validate the classification models. Consequently, these models generally exhibit limited generalisation capabilities and their application over large areas remains challenging.
In this paper, Sentinel-2 optical time series are used to differentiate poplars from other deciduous species in three study sites. In order to minimise the number of samples required for training and to build a generic model tailored to the different study sites, we propose the use of a transfer learning-based approach, namely Active Learning. Secondly, we were interested in characterising the age of poplar plantations. Due to the sensitivity of SAR to vegetation structure, we explored the potential of Sentinel-1 data to distinguish between two main stand ages.
## 2 Poplar identification with Sentinel-2 time series
In this section, we investigate the potential of Sentinel-2 data to identify poplar plantations. We first assess its ability to recognise the plantations at a local scale (Sentinel-2 tile) (Section 2.3). We secondly focus on the adaptation of the resulting local classifiers to distinct areas with active learning (Section 2.4).
### Study area
Three poplar sites with contrasting silvicultural practices and climatic conditions were chosen with the forest partners. They are located in north-eastern, central and south-western France and are covered by three Sentinel-2 tiles of 100 km × 100 km each (tile codes: 31UEQ, 30TYT and 31TCJ) (Figure 1).
### Reference data
Samples for the deciduous forest classes were retrieved from the national forest database (BD Forêt®, IGN). Photo interpretation was conducted to collect poplar references in order to ensure up-to-date samples. Note that the poplar samples correspond to plantations older than two years, because below this limit plantations cannot be differentiated. Regardless of the photo interpretation bias, this phase is considerably time-consuming because it must be repeated for each study site until a statistically representative data set is obtained.
### Tile-scale classification with Sentinel-2
Sentinel-2 optical time series from 2017 were downloaded from Theia platform over the three study tiles (Table 1). They are level 2A products (surface reflectance) provided with a cloud mask and after atmospheric correction.
A temporal gap-filling was applied to the Sentinel-2 images to replace cloudy pixels with values interpolated from the nearest cloud-free dates of the time series ([PERSON], 2016). The time series were then resampled to a 10 m spatial resolution with a 10-day time step common to all tiles. To assess the potential of Sentinel-2 for recognising poplar plantations at the local scale within a single tile, a random forest (RF) supervised classification was performed in each tile independently.
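The gap-filling step can be illustrated with a minimal per-pixel sketch. This is the general linear-interpolation idea, not the gap-filling library cited above; the function name and toy series are ours.

```python
import numpy as np

def gapfill_pixel(values, cloudy):
    """Replace cloud-flagged dates in a pixel's time series by linear
    interpolation between the nearest cloud-free acquisitions."""
    values = np.asarray(values, dtype=float)
    valid = ~np.asarray(cloudy, dtype=bool)
    t = np.arange(len(values))
    # np.interp holds the edge value flat for leading/trailing cloudy
    # dates, i.e. a nearest-value fill at the series boundaries
    return np.interp(t, t[valid], values[valid])

# one NDVI-like series with two cloudy dates (indices 2 and 3)
series = np.array([0.2, 0.3, -1.0, -1.0, 0.6, 0.7])
cloudy = np.array([0, 0, 1, 1, 0, 0], dtype=bool)
filled = gapfill_pixel(series, cloudy)
# cloudy dates interpolated between 0.3 (t=1) and 0.6 (t=4): 0.4, 0.5
```

In practice this would be applied band by band over the whole image stack; the 10-day resampling mentioned above is the same interpolation evaluated on a regular date grid.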
Reference polygons were randomly split into 50% for training and 50% for testing with a stand-based stratified random sampling. Sampling was repeated 30 times to account for the variability related to random selection. However, this approach assumes that samples are available everywhere, which is not practical at large scale. A more automatic process was therefore needed, able to detect poplars with good classification performance from a minimum of training samples. It is in this context that we propose the use of Active Learning (AL).
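A stand-based split of this kind can be sketched with scikit-learn's `GroupShuffleSplit`. This is a simplification: the paper's sampling is additionally stratified by class, which a plain group-level split does not enforce, and all data below are synthetic.

```python
import numpy as np
from sklearn.model_selection import GroupShuffleSplit

# toy set-up: 200 pixels grouped into 20 stands (10 pixels each),
# with one class label per stand -- all names here are illustrative
rng = np.random.default_rng(0)
stand_id = np.repeat(np.arange(20), 10)
y = np.repeat(rng.integers(0, 6, size=20), 10)  # 6 deciduous classes
X = rng.normal(size=(200, 5))                   # 5 dummy features

# 30 repetitions of a 50/50 split at the stand level: every pixel of
# a stand falls entirely in train or in test, avoiding leakage of
# spatially correlated pixels across the split
splitter = GroupShuffleSplit(n_splits=30, train_size=0.5, random_state=1)
for train_idx, test_idx in splitter.split(X, y, groups=stand_id):
    assert set(stand_id[train_idx]).isdisjoint(set(stand_id[test_idx]))
```

Averaging metrics over the 30 repetitions then gives the `(×30)` scores reported in the tables below.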
### Towards a large-scale generalisation with AL
Over the past decade, AL has received the attention of the remote sensing community ([PERSON] et al., 2011). It has mostly been applied to select a reduced set of training samples required for classification tasks ([PERSON] et al., 2009), ([PERSON] et al., 2018). Only a few works have addressed transfer learning between two distant regions ([PERSON] et al., 2012), ([PERSON] et al., 2014).
AL is based on the assumption that an algorithm can reach better classification results not by focusing on the number of (randomly selected) samples, but on their quality, choosing the most relevant ones ([PERSON], 2012). The AL process uses a ranking criterion to intelligently select the most informative samples from a pool of candidates. The selection is guided by the needs of the algorithm, which is iteratively enriched with new, carefully chosen training samples until predefined stopping criteria are met (e.g. a maximum score or a maximum number of iterations).
In this study, AL was performed between the three Sentinel-2 tiles (in pairs) along the six possible directions of learning (north-east to south-west, south-west to centre, etc.). In each case, the process started with a classifier trained on a first tile (source), which was used to predict a second one (target). At each iteration, 10 samples were queried, until reaching a maximum of 1000 samples (100 iterations). Uncertainty was used as the informativeness criterion (i.e. the most uncertain instances are considered the most informative) and two measures were tested: entropy (H) and margin sampling (MS). While the entropy measure takes the probabilities of belonging to all the model classes into account (Equation 1), the MS metric considers only the two most probable labels (Equation 2).
\[x_{H}^{*}=\operatorname*{argmax}_{x}\,-\sum_{y}P_{\theta}(y\mid x)\log P_{\theta}(y\mid x) \tag{1}\]

\[x_{MS}^{*}=\operatorname*{argmin}_{x}\big[P_{\theta}(\hat{y}_{1}\mid x)-P_{\theta}(\hat{y}_{2}\mid x)\big] \tag{2}\]

where \(x^{*}\) is the selected best instance, \(y\) ranges over all possible labels of \(x\), \(P_{\theta}\) is the probability under the model \(\theta\), and \(\hat{y}_{1},\hat{y}_{2}\) are the first and second most probable labels.
In this paper we present only the results obtained with the MS uncertainty metric. For comparison purposes, the AL process was run against a classifier using the same number of samples, but randomly selected.
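The AL loop with margin sampling can be sketched as follows, assuming an oracle able to label the queried target samples. The toy tiles, the feature dimensions and the `margin_sampling_query` helper are illustrative, not the authors' code.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def margin_sampling_query(clf, X_pool, n_query=10):
    """Indices of the n_query pool samples with the smallest margin
    between the two most probable classes (Equation 2)."""
    proba = np.sort(clf.predict_proba(X_pool), axis=1)
    margin = proba[:, -1] - proba[:, -2]   # P(y1|x) - P(y2|x)
    return np.argsort(margin)[:n_query]

# toy source/target tiles: class-dependent Gaussian blobs, with a
# shift between tiles to mimic non-stationary class distributions
rng = np.random.default_rng(0)
def make_tile(shift, n=300):
    y = rng.integers(0, 3, size=n)
    X = rng.normal(size=(n, 4)) + y[:, None] + shift
    return X, y

X_src, y_src = make_tile(0.0)
X_tgt, y_tgt = make_tile(0.8)

clf = RandomForestClassifier(n_estimators=50, random_state=0)
clf.fit(X_src, y_src)

X_train, y_train = X_src.copy(), y_src.copy()
pool = np.arange(len(X_tgt))
for _ in range(5):                         # 5 iterations of 10 queries
    q = margin_sampling_query(clf, X_tgt[pool], n_query=10)
    picked = pool[q]
    X_train = np.vstack([X_train, X_tgt[picked]])
    y_train = np.concatenate([y_train, y_tgt[picked]])  # oracle labels
    pool = np.delete(pool, q)
    clf.fit(X_train, y_train)              # enrich and retrain
```

The random baseline discussed above would simply replace `margin_sampling_query` with a random draw of 10 pool indices per iteration.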
## 3 Stand age retrieval with Sentinel-1
Several studies have demonstrated the sensitivity of SAR information and in particular C-band data to monitor vegetation dynamics due to their sensitivity to structure ([PERSON], [PERSON], 1993), ([PERSON] et al., 2014), ([PERSON] et al., 2017), ([PERSON] et al., 2018). Specifically, the VH/VV ratio has proved to be of great interest for monitoring the vegetation growth cycle and showed a strong correlation with NDVI ([PERSON] et al., 2018), ([PERSON] et al., 2018).
In this section, we investigate the potential of Sentinel-1 for stand age assessment.
| Tile code | Relative orbit number | No. dates in 2017 |
| --- | --- | --- |
| 31UEQ | 51 | 26 |
| 30TYT | 94 | 34 |
| 31TCJ | 51 | 36 |

Table 1: Properties of the Sentinel-2 tiles used in the study
Figure 1: Overview of the three study sites represented by three Sentinel-2 tiles: 31 UEQ, 30 TYT and 31 TCJ.
Field data for 94 poplar plots located in the south of France were provided by the forest partners (Table 2). For each plot, the plantation year as well as the cultivar type were reported. The poplar stands are between two and seven years old, but not all ages are available for each cultivar.
Sentinel-1 Ground Range Detected (GRD) data from 2017 were downloaded from the French distribution and processing platform (PEPS) over the study zone. They were acquired in Interferometric Wide swath (IW) mode in an ascending orbit (relative orbit 30). The images were calibrated, orthorectified and filtered for speckle noise with a spatio-temporal filter using the S1Tiling tool ([PERSON], 2017). Annual means of the radar backscatter coefficients were then calculated for the VV and VH polarisations as well as for their ratio VH/VV. A colour composite of the three derived means is shown in Figure 2 with examples of poplar plots at different ages.
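The annual means can be sketched as below. One assumption on our part: the averaging is done in the linear power domain and converted back to dB (the paper does not state the averaging domain), in which case the VH/VV ratio becomes a simple difference in dB. The toy stacks are illustrative.

```python
import numpy as np

def annual_mean_db(stack_db):
    """Average a (dates, H, W) backscatter stack given in dB: convert
    to linear power, average over time, convert back to dB."""
    linear = 10.0 ** (np.asarray(stack_db) / 10.0)
    return 10.0 * np.log10(linear.mean(axis=0))

# toy 4-date, 2x2-pixel stacks for VV and VH (values in dB)
rng = np.random.default_rng(0)
vv_db = rng.normal(-10.0, 1.0, size=(4, 2, 2))
vh_db = rng.normal(-16.0, 1.0, size=(4, 2, 2))

vv_mean = annual_mean_db(vv_db)
vh_mean = annual_mean_db(vh_db)
ratio_db = vh_mean - vv_mean   # the VH/VV ratio is a difference in dB
```

Stacking `vv_mean`, `vh_mean` and `ratio_db` as the red, green and blue channels gives a composite of the kind shown in Figure 2.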
Considering the lack of representativeness of age classes and the scarcity of plots per cultivar, a supervised random forest classification was carried out on the entire data set in order to identify two main age groups: young plantations (two to four years old) and mature plantations (four to seven years old). The classification was performed at two scales: plot scale (the plot is defined by the average value of all its pixels) and pixel scale (all pixels are considered regardless of the plot).
As in the previous section, samples were randomly split into 50% for training and 50% for testing with a stand-based stratified random sampling. Sampling was repeated 30 times in order to account for variability related to random selection.
## 4 Results and Discussion
### Sentinel-2-based classification: high capacity to identify poplar plantations at local and global scales
The random forest classification results are reported in Table 3 for each of the three study tiles. As mentioned earlier, the aim is to assess the potential of Sentinel-2 data to discriminate poplar plantations from the other deciduous species at a local scale, here the tile extent.

The local classification results of Table 3 show the high capacity of Sentinel-2 data to identify poplars, with F-score values of 90%, 99% and 98% for the north-eastern, central and south-western tiles respectively. However, when we tested the predictive capabilities of these local models on the other tiles, low accuracies were obtained, due to the non-stationarity of class distributions between the study tiles. When the initial models were adapted with active learning based on the margin sampling uncertainty metric (AL\({}_{MS}\)), performance increased rapidly as samples were added, and considerably faster than with a random selection of samples. The OA varied according to the direction of the transfer (i.e. according to the tile on which the initial model is trained), but in all cases its values were up to 5.5% higher with AL\({}_{MS}\). An example is given in Figure 3 for a transfer from the north-eastern to the south-western tile. As can also be observed, the OA values computed on the initial (source) tile remain fairly constant while new samples are added from the target, for both AL\({}_{MS}\)
| Cultivar | No. plots | No. pixels |
| --- | --- | --- |
| Koster | 34 | 3083 |
| I45/50 | 28 | 3708 |
| I214 | 22 | 934 |
| Soligo | 7 | 589 |
| Raspalje | 3 | 760 |

Table 2: Field data summary
| Tile code | No. samples per class\({}^{1}\) | No. classes\({}^{2}\) | OA (30 runs) | Poplar F-score (30 runs) |
| --- | --- | --- | --- | --- |
| 31UEQ | 1250 | 6 | 73.7% | 89.5% |
| 30TYT | 2000 | 6 | 74.9% | 99.3% |
| 31TCJ | 3850 | 6 | 80.0% | 97.9% |

\({}^{1}\) Training samples represent 50% of the available reference data; the number is given in 10 m pixels.
\({}^{2}\) The classes are the same for the central (30TYT) and south-western (31TCJ) tiles: _poplar, locust, chestnut, oak, open mixed forest_ and _closed mixed forest_. _Chestnut_ is absent in the north-eastern (31UEQ) tile, which has a _beech_ class instead.

Table 3: Local classification results for each Sentinel-2 tile, averaged over 30 independent repetitions.
Figure 3: Changes in the average OA scores on the south-western target tile according to the number of added samples with the active learning (green) and random (red) models. The initial classifier is trained on the north-eastern source tile.
Figure 2: Multi-polarisation colour composite of annual means of Sentinel-1 backscatter coefficients (Red: \(\sigma^{0}_{VV}\), Green: \(\sigma^{0}_{VH}\), Blue: \(\sigma^{0}_{VH}/\sigma^{0}_{VV}\)).
and random selection. After querying new samples, the initial model improved its generalisation capabilities to better classify the target tile while still performing as well on the source tile.
When we focused on the poplar class and for the same F-score, the number of samples randomly selected was about eight times higher than those queried by AL\({}_{MS}\). Furthermore, the number of AL\({}_{MS}\) queries was related to the starting F-score. As shown in the example of Figure 4, when this value was high enough (i.e. the initial model was able to accurately recognise the poplar plantations of the second tile), very few poplar samples were selected by AL\({}_{MS}\) (green bars) unlike the random strategy which selected many samples (red bars) regardless of the initial performance (poplar F-score = 95%).
AL\({}_{MS}\) minimised the need to label target poplar samples without sacrificing the classification performance. As reported in [21], when the class accuracy is high, the active learner avoids querying irrelevant samples.
Similarly to the poplar class, we have assessed the contribution of AL for the remaining deciduous classes. We particularly noted its potential regarding the hardest classes to discriminate (i.e. missing classes from the initial model or highly overlapping classes) as already demonstrated in [11]. The AL results based on entropy uncertainty metric were worse than with margin sampling. Indeed, as entropy takes the probabilities of belonging to all the classes into account, the AL selection is influenced by low probabilities of unimportant classes and is consequently less robust to noise.
### SAR sensitivity to stand age classes
The annual means of the backscatter coefficients (VV, VH and VH/VV) were analysed and their relation to the field data (stand ages) was evaluated at both the plot and pixel scales. Determination coefficients (r\({}^{2}\)) describing this relationship are reported in Table 4.
Whether the analysis was performed on plots or pixels, a weak correlation was observed with the VV polarisation. The VH/VV ratio was however capable of reproducing 67% of the observed variability at the plot level, and 42% when considering all pixels. These results are in line with the classification results reported in Table 5.
The observed misclassifications could be related to differences in development stage between cultivars. For a given cultivar, growth can be fast from the first year of plantation, whereas for others it can take three to five years to develop a detectable canopy. A classification was therefore carried out for each cultivar. Overall accuracies (pixel-based classification) ranged from 65% to 99%, but as noted before, the number of samples available per cultivar was limited and not sufficiently representative of the two stand-age classes.
## 5 Conclusion
In this letter, we proposed a combined use of optical and SAR imagery to monitor poplar plantations. The active learning approach showed promising results on Sentinel-2 data for identifying poplar plantations in two contrasting study sites by taking
| r\({}^{2}\) | VV | VH | VH/VV |
| --- | --- | --- | --- |
| Plots | 0.05 | 0.59 | **0.67** |
| Pixels | 0.06 | 0.43 | **0.42** |

Table 4: Determination coefficients (r\({}^{2}\)) between annual means of Sentinel-1 backscatter coefficients (dB) and the field-derived stand ages.
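A determination coefficient of this kind can be reproduced on toy data with a short helper; for a simple linear fit it equals the squared Pearson correlation. The numbers below are illustrative, not the study's plots.

```python
import numpy as np

def r_squared(x, y):
    """Coefficient of determination of the least-squares line y ~ x."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    slope, intercept = np.polyfit(x, y, 1)
    resid = y - (slope * x + intercept)
    ss_res = np.sum(resid ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

# toy plot-level data: stand age (years) vs VH/VV annual mean (dB),
# a rising trend with a little noise
age = np.array([2, 2, 3, 3, 4, 5, 6, 7], float)
vhvv = -8.0 + 0.5 * age + np.array([0.3, -0.2, 0.1, -0.4,
                                    0.2, -0.1, 0.4, -0.3])
r2 = r_squared(age, vhvv)
```

Applied per plot (mean of the plot's pixels) or per pixel, the same helper yields the two rows of Table 4.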
Figure 4: Changes in the average poplar F-score on the south-western target tile according to the number of added samples. The initial classifier is trained on the north-eastern source tile.
Figure 5: Plots-based correlation between field stand ages and VH/VV annual backscatter coefficient (r\({}^{2}\)=0.67).
| Features | Classification | OA (30 runs) | F-score (30 runs) |
| --- | --- | --- | --- |
| VV+VH+VH/VV | Plot-based | 82.5% | 81.7% |
| VV+VH+VH/VV | Pixel-based | **83.1%** | **83.0%** |
| VH/VV | Plot-based | 79.7% | 79.6% |
| VH/VV | Pixel-based | 78.1% | 78.0% |

Table 5: Pixel- and plot-based classification results with annual SAR backscatter coefficients.
advantage of the initial knowledge gained during local classification tasks and querying, only when necessary, a reduced set of relevant training samples. It is important to note that AL may be penalised by noisy references, which are the most likely to be selected by the algorithm. Compared to entropy, the margin sampling uncertainty measure is more robust to noise. These results are very promising and open up interesting leads for a national-scale transfer. Temporal means of Sentinel-1 backscatter coefficients demonstrated their sensitivity to plantation structure and their potential to differentiate two main stand ages, particularly with the VH/VV ratio. Sentinel-1 temporal information could be further leveraged by filling gaps in the optical series during cloudy periods, creating a forest mask based on seasonal backscatter means, or computing phenological features to discriminate cultivar groups.
## References
* [PERSON] et al. (2014) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], 2014. Large-Scale Image Classification Using Active Learning. _IEEE Geoscience and Remote Sensing Letters_, 11(1), 259-263.
* [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], 1993. Evaluation of SPOT and TM Data for Forest Stratification: A Case Study for Small-Size Poplar Stands. 31(2), 483-490.
* [PERSON] and [PERSON] (1993) [PERSON], [PERSON], 1993. Multi-Temporal, Multi-Frequency Radar Measurements of Agricultural Crops during the Agriscatt-88 Campaign in The Netherlands. _International Journal of Remote Sensing_, 14(8), 1595-1614.
* [PERSON] and [PERSON] (1981) [PERSON] [PERSON], [PERSON] [PERSON], 1981. The application of remote sensing to poplar growing: identification and inventory of poplar grows, prediction of timber production; France, Italy. _Revue Feroise Francaise_, 33(6), 478-493.
* [PERSON] and [PERSON] (2011) [PERSON], [PERSON], 2011. Critical class oriented active learning for hyperspectral image classification. _2011 IEEE International Geoscience and Remote Sensing Symposium_, IEEE, Vancouver, BC, Canada, 3899-3902.
* [PERSON] et al. (2018) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], 2018. Potential of Sentinel-1 Data for Monitoring Temperate Mixed Forest Phenology. _Remote Sensing_, 10(12), 2049.
* [PERSON] et al. (2003) [PERSON], [PERSON], [PERSON], [PERSON], 2003. A Per-Segment Approach to Improving Aspen Mapping from High-Resolution Remote Sensing Imagery. _Journal of Forestry_, 101(4), 29-33.
* [PERSON] (2016) [PERSON], 2016. Obb Gapfilling, A Temporal Gapfilling For Image Time Series Library.
* [PERSON] et al. (2009) [PERSON], [PERSON], [PERSON], 2009. Multi-class active learning for image classification. _2009 IEEE Conference on Computer Vision and Pattern Recognition_, 2372-2379.
* [PERSON] (2017) [PERSON], 2017. S1 TemporalSeries. [[http://tully.ups-tse.fr/koleck/ts1](http://tully.ups-tse.fr/koleck/ts1) tiling]([http://tully.ups-tse.fr/koleck/ts1](http://tully.ups-tse.fr/koleck/ts1) tiling).
* [PERSON] et al. (2018) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], 2018. Active Learning for Object-Based Image Classification Using Predefined Training Objects. _International Journal of Remote Sensing_, 39(9), 2746-2765.
* [PERSON] et al. (2012) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], 2012. SVM-Based Boosting of Active Learning Strategies for Efficient Domain Adaptation. 5(5), 1335-1343.
* [PERSON] et al. (2014) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], 2014. C-Band SAR Data for Mapping Crops Dominated by Surface or Volume Scattering. 11(2), 384-388.
* [PERSON] (2012) [PERSON], 2012. Active Learning. _Synthesis Lectures on Artificial Intelligence and Machine Learning_, 6(1), 1-114.
* [PERSON] et al. (2009) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON] [PERSON], [PERSON], 2009. Active Learning Methods for Remote Sensing Image Classification. _IEEE Transactions on Geoscience and Remote Sensing_, 47(7), 2218-2232.
* [PERSON] et al. (2011) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], 2011. A Survey of Active Learning Algorithms for Supervised Remote Sensing Image Classification. 5(3), 606-617.
* [PERSON] et al. (2017) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON] [PERSON], [PERSON] [PERSON], [PERSON] [PERSON], [PERSON], [PERSON], [PERSON], 2017. Understanding the Temporal Behavior of Crops Using Sentinel-1 and Sentinel-2-like Data for Agricultural Applications. _Remote Sensing of Environment_, 199, 415-426.
* [PERSON] et al. (2018) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON] [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], 2018. Sensitivity of Sentinel-1 Backscatter to Vegetation Dynamics: An Austrian Case Study. _Remote Sensing_, 10(9), 1396.
---
ISPRS Archives: "SYNERGISTIC USE OF SENTINEL-1 AND SENTINEL-2 TIME SERIES FOR POPLAR PLANTATIONS MONITORING AT LARGE SCALE". Y. Hamrouni, É. Paillassa, V. Chéret, C. Monteil, D. Sheeren, 2020. https://doi.org/10.5194/isprs-archives-xliii-b3-2020-1457-2020 (CC-BY).
---
###### Abstract
Accurate localization of multi-scattering features of cable-stayed bridges in multi-band Synthetic Aperture Radar (SAR) imagery is crucial for intelligent recognition of bridge targets within images, as well as for precise water level extraction. This study focuses on the Badong Yangtze River Bridge, utilizing Unmanned Aerial Vehicle (UAV) LiDAR data of the bridge, and analyzes the multi-scattering characteristics of different bridge structural targets based on Geometric Optics (GO) methods and the Range-Doppler principle. Furthermore, the study integrates LiDAR data of the bridge's cable-stays to examine their multi-scattering phenomena, finding that the undulations of the Yangtze River's surface waves significantly contribute to the pronounced double scattering features of the bridge's cable-stays. Additionally, statistical analysis of multi-source SAR data indicates that this phenomenon is not directly correlated with radar wavelength, implying no direct connection to surface roughness. Utilizing LiDAR point cloud data from the bridge's street lamps, this paper proposes a novel method for estimating water level elevation by identifying the center position of the spots formed by double scattering from lamp posts. The results show that, using TerraSAR ascending and descending orbit images, this method achieves a water level elevation accuracy of approximately 0.2 meters.
SAR, Cable-Stayed Bridge, Multiple Scattering, Multi-band, Range-Doppler, Water Level
## 1 Introduction
Bridges are indispensable man-made structures in transportation, and their structural status and safety are critical indicators in bridge monitoring ([PERSON] et al. 2006). Synthetic Aperture Radar (SAR), with its all-weather capability, has become a vital tool in bridge research. Bridge structures are complex, and the scattering of SAR signals at various structural points creates multiple bright lines in images ([PERSON] et al. 1995). [PERSON] et al. ([PERSON] et al. 2008) compared differences between aquatic and terrestrial bridges, noting that double and third scattering caused by interactions between the bridge and land are usually undetectable in SAR images. [PERSON] et al. simulated the multi-scattering phenomena of large cable-stayed bridges using polarized SAR data and point scatterer models ([PERSON] et al. 2006). Additionally, bridge water-surface heights can be inverted using airborne polarized SAR data. [PERSON] et al. simulated aquatic bridges in SAR images using a mapping projection algorithm and successfully located them in the images through the Hough transform. Bridge multi-scattering features are related to multipath scattering mechanisms ([PERSON] et al. 2009), and high-resolution imagery enables more detailed observations, facilitating the inversion of bridge parameters ([PERSON] et al. 2008; [PERSON] et al. 2015). [PERSON] et al. studied the imaging characteristics of non-linear bridges, focusing on the effects of azimuth and surface roughness ([PERSON] et al. 2017). With the increasing launch of high-resolution SAR satellites, extracting multi-scattering lines from SAR images using the Hough transform to estimate water level changes has become routine ([PERSON] et al. 2021; [PERSON] et al. 2022).
Although previous studies primarily focused on the images rather than the actual structure of the bridge, knowing the exact geographical coordinates of a bridge allows one to accurately determine its position in SAR images through the Range-Doppler (R-D) model([PERSON] 2017; [PERSON] 2020;[PERSON] et al. 2021). This paper, utilizing LiDAR point cloud data from the Badong Yangtze River Bridge, proposes a simulation method for bridge multi-scattering features based on bridge LiDAR data and the R-D model. It also introduces a novel method to estimate river water levels using double scattering from lamp posts on the bridge in high-resolution SAR images, which has been experimentally validated in the Badong Yangtze River Bridge area.
## 2 Study area and data sources
The research area, Badong Yangtze River Bridge, is located in the Three Gorges Reservoir area of the middle Yangtze River in Badong County, Hubei Province, China. The bridge consists of the main bridge, two bridge towers, cable-stays, piers, and bridge deck infrastructure. The main bridge section is oriented from northwest to southeast, with a total length of 900.6 meters and a main span of 388 meters, forming an angle of 20.6 degrees with true north.
Figure 1: Photos of Badong Yangtze River Bridge
Figure 2 displays the satellite imagery of the Badong Yangtze River Bridge. (a) shows the optical image of the bridge, while (b) and (c) are the SAR satellite images from ascending and descending orbits, respectively. Comparing the optical and SAR images reveals noticeable differences in the bridge’s representation. In the SAR images, the bridge imaging can be roughly divided into three parts, each caused by scattering from different bridge structures.
## 3 Methodology
The SAR satellite orbits have different ascending and descending flight directions, none of which align with the orientation of the Badong Yangtze River Bridge. There is an angular deviation between the radar line of sight and the direction of the bridge. As shown in Figure 3, the conversion relationship between the distance in pixels (N) of the bridge deck scattering lines in the SAR images and the actual spacing (L) of the bridge railings is as follows:
\[\theta_{a}=\psi_{2}-\psi \tag{1}\]
\[\theta_{d}=\psi_{2}+\psi-180^{\circ} \tag{2}\]
\[Ls_{a}=L/\cos\theta_{a} \tag{3}\]
\[Ls_{d}=L/\cos\theta_{d} \tag{4}\]
\[N=Ls\cdot\sin\eta/R \tag{5}\]
where \(\theta_{a},\ \theta_{d}\) = the angles between the radar line of sight and the bridge direction for the ascending and descending orbits
\(\psi\) = the angle between the SAR satellite's flight direction and north
\(\psi_{2}\) = the angle between the bridge direction and north
\(L\) = the actual spacing between the bridge railings
\(Ls_{a},\ Ls_{d}\) = the radar line-of-sight distances between the bridge railings under ascending and descending orbits
\(N\) = the difference in range-direction pixels of the bridge deck scattering lines in the SAR images
\(\eta\) = the incidence angle of the SAR satellite
\(R\) = the slant-range pixel spacing of the SAR image (implied by Eq. 5)
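Equations (1)-(5) can be evaluated directly. The sketch below is illustrative only; it assumes R denotes the slant-range pixel spacing, and the parameter names are ours, not the paper's:

```python
import math

def railing_pixel_offset(L, psi, psi2, eta, range_spacing, orbit="ascending"):
    """Range-pixel offset N between two deck scattering lines (Eqs. 1-5).

    L             railing spacing on the deck [m]
    psi           satellite heading from north [deg]
    psi2          bridge orientation from north [deg]
    eta           radar incidence angle [deg]
    range_spacing slant-range pixel spacing [m] (assumed meaning of R)
    """
    if orbit == "ascending":
        theta = psi2 - psi               # Eq. (1)
    else:
        theta = psi2 + psi - 180.0       # Eq. (2)
    Ls = L / math.cos(math.radians(theta))                    # Eq. (3)/(4)
    return Ls * math.sin(math.radians(eta)) / range_spacing   # Eq. (5)
```

With the bridge oriented 20.6 degrees from north, the same railing spacing thus maps to different pixel offsets on ascending and descending passes, which is why Table 2 lists both orbits separately.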
In this section, we will introduce the combination of bridge LiDAR point cloud data and Range-Doppler frequency shift to simulate the multi-scattering phenomena of bridges and the use of SAR images to measure river water levels through geometric models and mathematical expressions. If the direction of the bridge is not perpendicular to the satellite orbit, three bright parallel stripes will appear in high-resolution SAR images, representing: bridge deck scattering, double scattering between the bridge and the water surface, and third scattering involving the water surface, bridge, and water surface again. This phenomenon is due to the relative positions of the bridge, satellite, and water level, the reflection and refraction of microwaves, and the geometric configuration captured by synthetic aperture radar.
The Range-Doppler model is based on the imaging mechanism of SAR imagery, utilizing the sensor-target distance, echo signal Doppler, and the Earth's ellipsoidal equation to establish a precise geometric relationship between the image pixel coordinates and the ground target points.
As shown in Figure 4(a), S represents the SAR satellite, and T is the target point on the Earth's surface. The distance \(\mathbf{R}_{\text{st}}\) determined by the instantaneous position of the SAR sensor and the target can be represented as:
\[R_{st}=|\mathbf{R_{s}}-\mathbf{R_{t}}|=\sqrt{(X_{S}-X_{T})^{2}+(Y_{S}-Y_{T})^{2}+(Z_{S}-Z_{T})^{2}} \tag{6}\]
where \(\mathbf{R_{s}}=(X_{S},Y_{S},Z_{S})^{T}\) is the position vector of the SAR satellite and \(\mathbf{R_{t}}=(X_{T},Y_{T},Z_{T})^{T}\) is the position vector of target point T.
As the radar beam passes the target, there is relative motion between the SAR sensor position S and the ground target T. Due
Table 1: Multi-source SAR data used in this study (the table contents were garbled in extraction and are not recoverable).
to the Doppler effect, the frequency of the radar echo signal shifts, and the Doppler shift is given by:
\[f_{0}=-\frac{2}{\lambda}\frac{dR}{dt}=-\frac{2}{\lambda}\frac{(\mathbf{R_{s}}-\mathbf{R_{t}})\cdot(\mathbf{V_{s}}-\mathbf{V_{t}})}{|\mathbf{R_{s}}-\mathbf{R_{t}}|} \tag{7}\]
where \(\lambda\) = the radar wavelength
\(f_{0}\) = the Doppler center frequency for that point
\(\mathbf{R_{t}},\mathbf{V_{t}}\) = the position vector and velocity vector of point T
\(\mathbf{R_{s}},\mathbf{V_{s}}\) = the position vector and velocity vector of the satellite at the imaging moment of point T
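Equation (7) is straightforward to evaluate from state vectors. A minimal sketch, not tied to any particular SAR toolbox:

```python
import numpy as np

def doppler_frequency(R_s, V_s, R_t, V_t, wavelength):
    """Doppler shift of the echo from target T (Eq. 7).
    All vectors are 3-component, in a common Earth-fixed frame; the
    wavelength is in the same length unit as the positions."""
    dR = np.asarray(R_s, float) - np.asarray(R_t, float)
    dV = np.asarray(V_s, float) - np.asarray(V_t, float)
    return -2.0 / wavelength * float(dR @ dV) / float(np.linalg.norm(dR))
```

At broadside (line of sight perpendicular to the relative velocity) the dot product vanishes and the Doppler shift is zero, which is the zero-Doppler condition exploited in the geolocation steps below.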
Using LiDAR data, geographical coordinates (\(\lambda,\ \ \phi,\ \ \mathrm{h}\)) of the bridge target point T can be obtained. Starting from the geographical coordinates of the target point, using known satellite orbit parameters combined with the R-D model, the corresponding image plane coordinates (i, j) can be calculated, where i and j refer to the azimuth and range indices of point target T in the SAR image, respectively. The specific calculation process is as follows:
(1) With the known WGS84 geographical coordinates (\(\lambda,\ \phi,\ h\)) of ground target point T, its position vector \(\mathbf{R_{t}}=(X_{T},Y_{T},Z_{T})^{T}\) can be obtained through coordinate transformation;
(2) Set the azimuth coordinate \(i_{c}\) of the image center point in the radar coordinate system as the initial value of the image plane coordinate of ground target T, and calculate its corresponding azimuth time \(t_{i}\):
\[t_{i}=t_{az}+\frac{i_{c}-1}{PRF} \tag{8}\]
where \(t_{az}\) = the start time of the first line of the SAR image
\(i_{c}\) = the azimuth coordinate of the image center point
PRF = the pulse repetition frequency
(3) Based on the azimuth time \(t_{i}\), interpolate to calculate the sensor's position vector \(\mathbf{R}_{\mathbf{s}}\) and velocity vector \(\mathbf{V}_{\mathbf{s}}\);
(4) Substitute the sensor's position \(\mathbf{R_{s}}\) and velocity \(\mathbf{V_{s}}\), together with the ground target coordinates \(\mathbf{R_{t}}\), into Equation (7) to calculate the Doppler frequency value \(f_{0}\); simultaneously calculate the Doppler frequency reference value \(f_{d}\) from the azimuth time, and solve for the change in azimuth time \(d_{t}\):
\[d_{t}=\frac{f_{0}-f_{d}}{f_{d}^{\prime}} \tag{9}\]
where \(f_{d}^{\prime}\) = the rate of change of the Doppler frequency, i.e. the derivative of \(f_{0}\) with respect to time \(t_{i}\).
(5) Recalculate the azimuth time \(t_{i}=t_{i-1}+d_{t}\) and check whether \(d_{t}\) satisfies the set error threshold; if it does, output the image plane row number \(i\); otherwise, repeat steps (3)-(5) until the threshold is met.
(6) Using Equation (6), calculate the slant range from the SAR sensor position to the ground target point, thereby obtaining its image plane column number \(j\).
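The iteration of steps (3)-(5) can be sketched with a toy straight-line orbit. Here `pos` and `vel` are hypothetical callables standing in for the interpolated orbit state of step (3), the target is assumed fixed, the Doppler rate is taken numerically, and the update uses the standard Newton sign convention (equivalent to Eq. (9) up to the sign convention chosen for f'_d):

```python
import numpy as np

def solve_azimuth_time(pos, vel, R_t, t_init, wavelength, f_ref=0.0,
                       tol=1e-9, max_iter=50):
    """Find the azimuth time at which the Doppler frequency of the fixed
    target R_t equals the reference value f_ref (zero-Doppler by default)."""
    def f_d(t):  # Eq. (7) with V_t = 0
        dR = pos(t) - R_t
        return -2.0 / wavelength * float(dR @ vel(t)) / float(np.linalg.norm(dR))

    t = t_init
    for _ in range(max_iter):
        eps = 1e-4
        f0 = f_d(t)
        f_rate = (f_d(t + eps) - f_d(t - eps)) / (2.0 * eps)  # f'_d, numeric
        d_t = (f0 - f_ref) / f_rate
        t = t - d_t          # Newton update toward f_d(t) = f_ref
        if abs(d_t) < tol:
            break
    return t
```

Converting the converged azimuth time back to a row number with Eq. (8), and the slant range of Eq. (6) to a column number, completes the projection of a LiDAR point into the SAR image.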
Assuming the presence of multi-scattering phenomena at a certain point on the bridge, the geometric principles of such multi-scattering are illustrated in Figure 5. LiDAR can determine the geographical coordinates (latitude \(\lambda\), longitude \(\phi\), height h) of the target point T. The multi-scattering points T1, T2, T3 in the SAR image are the image pixel coordinates of points T, O, and T', respectively. Points T, O, and T' share the same latitude \(\lambda\) and longitude \(\phi\), which can be acquired through LiDAR data, and differ only in elevation: O is the vertical projection of point T onto the water surface, and points T and T' are symmetrical about point O. Using the previously described R-D model, the row and column numbers in the SAR image can be obtained from the latitude, longitude, and altitude of a target point. Thus, the SAR image row and column numbers (i, j) for point T1 can be obtained from the geographical coordinates (latitude \(\lambda\), longitude \(\phi\), height h) of point T, allowing for the simulation and validation of the bridge's multi-scattering features combined with LiDAR data.
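Under this mirror geometry, the elevations at which the three returns of a point T are placed follow directly from the target height and the water level. A minimal sketch (function name is ours):

```python
def apparent_scatterer_heights(h_target, h_water):
    """Elevations of the three apparent scatterers of point T (Fig. 5):
    direct return at T, double bounce at its water-surface projection O,
    triple bounce at the mirror point T' below the surface."""
    h_T = h_target                    # primary scattering
    h_O = h_water                     # double scattering
    h_Tp = 2.0 * h_water - h_target   # triple scattering (T mirrored about O)
    return h_T, h_O, h_Tp
```

Feeding each of the three (latitude, longitude, elevation) triples through the R-D projection yields the image positions T1, T2, T3 of the three scattering lines.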
## 4 Result
### Analysis of Scattering Characteristics of Badong Bridge Deck
Selecting the highest resolution TerraSAR data from multi-source SAR satellite imagery, an average intensity image was produced, as shown in Figure 6(a). The scattering characteristics of the bridge in the image are exceedingly complex, with distinct differences in bridge features under ascending and descending orbit conditions. The consistent scattering texture between the orbits presents three bright lines on the bridge deck, which are likely related to the scattering features formed by five rows of symmetrically distributed railings on the bridge surface.
Based on 3D point cloud data from LiDAR scanning, it is known that there are five rows of railings distributed on the deck of the Badong Yangtze River Bridge, numbered P1-P5 from west to east. The central railing, approximately 0.6 m in height, separates the vehicular lanes; the railing on the riverside of the pedestrian pathway is about 1.1 m high, ensuring the safety of pedestrians; the railing closest to the vehicular lane on the pedestrian pathway is approximately 0.4 m high. As the pedestrian walkways are symmetrically arranged on the bridge deck, the railings on both sides are also symmetrically distributed. LiDAR point cloud data collected on-site indicate that the bridge deck is 22 meters wide, with the spacing of the railings on either side of the pedestrian pathways being 1.5 m (L12, L45), and the spacing between the pedestrian and vehicular-lane separating railings being 8 m (L23, L34).
Figure 5: Multiple scattering model of bridge structure based on R-D
Figure 6: Average intensity map and LiDAR bridge deck structure
Considering the dihedral corner reflector theory, the bridge railings along with the concrete and asphalt surfaces on the bridge deck create a dihedral corner reflector, which enhances the backscatter signal([PERSON] et al., 2007). By precisely locating the dihedral scattering centers of the 5 railings in both ascending and descending track modes, the accuracy of this theoretical method can be verified. The LiDAR point cloud data for railings P1-P5 are extracted, and based on the R-D positioning model, their latitude and longitude coordinates are converted to SAR coordinates and superimposed onto the SAR image layer. The results, as shown in Figure 6, indicate that the bridge deck scattering is caused by the dihedral corners formed by the deck and railings. Due to the short distance L12 (L45) of only 1.5m, they form a single scattering line in the SAR image, hence only three bridge deck scattering lines are observed.
The Hough Transform is used to extract the scattering lines on the bridge deck and to estimate the range-oriented pixel difference Ne between railings P3 and railings P2/P4 in the SAR image. Using the bridge deck LiDAR and equations (5)-(9), the multi-source SAR data in Table 1 is processed to calculate the theoretical range-oriented pixel difference N between railing P3 and railings P2/P4 in the SAR image, as shown in Table 2.
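As an illustration of the line-extraction step, the sketch below implements a brute-force Hough accumulator over a binary mask. This is not the authors' code, and a production pipeline would normally use an optimized library routine; it only shows the rho-theta voting that recovers the deck scattering lines:

```python
import numpy as np

def hough_strongest_line(binary, n_theta=180):
    """Return (rho, theta) of the strongest line
    x*cos(theta) + y*sin(theta) = rho in a binary image."""
    ys, xs = np.nonzero(binary)
    h, w = binary.shape
    diag = int(np.ceil(np.hypot(h, w)))
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    acc = np.zeros((2 * diag + 1, n_theta), dtype=np.int64)
    for x, y in zip(xs, ys):
        # every foreground pixel votes for one rho per theta
        rhos = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int)
        acc[rhos + diag, np.arange(n_theta)] += 1
    r, t = np.unravel_index(int(np.argmax(acc)), acc.shape)
    return r - diag, thetas[t]
```

Applying this twice (or keeping the top peaks of the accumulator) yields the parallel scattering lines whose rho difference gives the measured pixel spacing compared against the theoretical N.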
### Analysis of multiple scattering characteristics of stay cables
Compared to the Great Belt Bridge studied by [PERSON], the diameter of the stay cables of the Badong Yangtze River Bridge is only 0.2m, which is significantly smaller than the main cable diameter of the Great Belt Bridge. By combining on-site LiDAR point cloud data, the point clouds of the stay cables of the Badong Yangtze River Bridge were filtered to extract their geographic coordinates (latitude and longitude) and elevation (h) under the WGS84 coordinate system. Using the R-D positioning model, the pixel positions in the SAR images were obtained. Water level data from the dates when the SAR images were captured were collected, and combined with formulas (6) to (9) and the latitude, longitude, and elevation data of the stay cables, as well as the water level elevations, to simulate the multiple scattering characteristics of the Badong Yangtze River Bridge stay cables. The results are shown in Figure 8, which indicates that the simulated results are consistent with the actual multiple scattering imaging of the stay cables in the SAR images.
Figure 8 shows the imaging of the Badong Yangtze River Bridge in multi-source SAR images. Panels (a), (b), and (c) display TerraSAR-X images, where variations in the multiple scattering imaging of the stay cables can be observed, including weak double scattering signals, strong signals, and signal expansion. In panels (d) and (e) with C-band Radarsat-2 ascending and descending track images, and (f) with L-band
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline & DES & 5.47 & 5.71 \\ \cline{2-4}
**ALOS-2** & **ASC** & 4.40 & 4.52 \\ \cline{2-4} & DES & 5.58 & 5.70 \\ \hline \end{tabular}
\end{table}
Table 2:
Figure 7: Badong Yangtze River Bridge deck scattering simulation diagram
Figure 8: Simulation of multiple scattering of the stay cables: (a) and (b) show the multiple-scattering simulations of the stay cables under the ascending and descending tracks, respectively; (c), (d), and (e) show the simulated primary, double, and third scattering of the stay cables.
ALOS-2 ascending track images, it is difficult to discern the primary and third scattering features of the stay cables. However, the double scattering features of the stay cables are still observable in the majority of the images.
According to the aforementioned classification standards for TerraSAR-X images, the results are as follows:
In high-resolution TerraSAR-X images, the double scattering of the stay cables on both sides of the Badong Yangtze River Bridge is clearly observable, indicating that the stay cables on both sides participate in primary, double, and third scattering characteristics, all capable of producing reflective signals, with internal diffraction phenomena within the stay cable structure not being prominent. Theoretically, the double scattering of the stay cables does not overlap and is relatively independently distributed, with the imaging location concentrated and the signal stronger compared to primary and third scatterings. The primary scattering features of the stay cables overlap with the bridge surface scattering line signals, presenting an overlay effect; similarly, the third scattering lines overlap with the bridge's third scattering line signals.
There are variations in the multiple scattering features of the stay cables in multi-source SAR images, which are due to differences in image resolution. Only the TerraSAR-X images with better than 1m resolution can clearly distinguish the multiple scattering phenomena of the stay cables. The Radarsat-2 and ALOS-2 images are affected by resolution, and due to the dispersion of primary and third scattering signals of the stay cables, they cannot be recognized in low-resolution images. In contrast, the double scattering signals are independently distributed and relatively concentrated. In complex hydrological scenes like the Yangtze River with significant water surface fluctuations, the fluctuations actually make the double scattering features easier to identify, and resolution differences do not significantly impact the double scattering of the stay cables. Thus, in images with lower resolutions, only the double scattering phenomena of the stay cables can be seen. Therefore, the prominent double scattering features of the stay cables significantly aid in the intelligent recognition and automatic interpretation of the corresponding stay cable bridges in SAR images.
### Target water level estimation by double scattering points
As shown in Figure 11, by selecting parts of the TerraSAR data where the double scattering phenomenon of the stay cables is weak to create an average intensity image, multiple bright spots can be observed along the bridge's double scattering line. These are estimated to be the imaging of double scattering from the bridge's lampposts and the water surface. The bridge lampposts are cylindrical and thus have strong scattering characteristics for radar waves coming from different incident directions. By combining the R-D model with on-site LiDAR geographical coordinates, and simulating the multiple scattering characteristics of the stay cables, the double scattering of the lampposts is simulated to confirm that the bright spots are indeed the double scattering from the lampposts. In the Badong region, the Yangtze River water level fluctuates annually between 145m and 175m, and corresponding positional changes can also be observed in the double scattering from the lamppost points in Figure 11.
Using the R-D model, the latitude and longitude coordinates of each lamppost can be obtained through LiDAR, and by combining equations (6) to (9), the fluctuations of the streetlights with the water surface can be individually simulated. Conversely, by identifying the center position of the double scattering of the lampposts in each SAR image, the water level height can be estimated. The imaging position of each lamppost in the SAR images at different water levels is simulated, with the water level in the Three Gorges Reservoir area fluctuating from 145m to 175m, simulating the double scattering imaging position of the lampposts at every 0.1m interval between 145m and 175m. By checking the closest distance between the simulated points and the actual center position of the lamppost's double scattering in the SAR images, the water level height under the bridge at the time of image capture can be estimated. The clarity of the double scattering imaging of the lampposts is affected by water surface fluctuations; when the water fluctuation is significant, the double scattering imaging of the stay cables can cover the lampposts' double scattering, making it impossible to estimate the water level in such cases.
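The grid search described above (simulate the lamppost double-scattering position for every 0.1 m candidate level and keep the closest match) can be sketched as follows. Here `simulate_ij` stands for a hypothetical R-D forward model returning the (row, column) of the double-scattering spot at a given water level:

```python
import numpy as np

def estimate_water_level(observed_ij, simulate_ij,
                         h_min=145.0, h_max=175.0, step=0.1):
    """Grid-search the water level: for each candidate level, compare the
    simulated lamppost double-scattering image position against the
    observed spot center, and return the closest-matching level."""
    best_h, best_d = None, np.inf
    for h in np.arange(h_min, h_max + 1e-9, step):
        i, j = simulate_ij(h)
        d = np.hypot(i - observed_ij[0], j - observed_ij[1])
        if d < best_d:
            best_d, best_h = d, h
    return best_h
```

The achievable accuracy is bounded by how precisely the spot center can be located, which is why images where stay-cable double scattering covers the lamppost returns cannot be used.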
Figure 11: double scattering simulation of street lamps
Figure 9: Multiple scattering characteristics of stay cables in multi-source SAR
According to Figure 10, the double scattering phenomenon of the lampposts is clearly identifiable in only 19 of the 102 TerraSAR images. The water level values for these 19 images are calculated, with results shown in Table 3.
## 5 Discussion and Conclusion
By integrating LiDAR data with the R-D model, it is possible to accurately calculate the imaging positions of various bridge structures on radar. Simulations of dihedral angle scattering from the bridge's railing surface confirm the involvement of five railings in the formation of three bridge surface scattering lines. Considering differences in satellite orbit direction and incidence angle, calculations of the spacing between bridge surface scattering lines under multiple-source SAR ascending and descending orbits yield theoretical values. The Hough transform is used to extract bridge surface scattering lines and estimate the spacing between them. TerraSAR data achieves the highest precision, with a discrepancy of about 0.1 pixels; discrepancies in other data are about 0.2 pixels.
In the dataset of 102 TerraSAR images, less than 20% show weak double scattering from stay cables. Most of the images indicate that water surface waves significantly affect both double scatterings of the stay cables. In over 40% of the images, water surface waves cause an expansion of the double scattering features of the stay cables, making them very prominent and easy to identify. Therefore, in the Yangtze River Three Gorges Reservoir area, the double scattering phenomenon of stay cables can effectively help in identifying cable-stayed bridges on the water surface.
In the Badong Yangtze River Bridge area, if the geographic coordinates of the bridge's lamp posts are known, the R-D model can be used to estimate the water level. Taking TerraSAR data as an example, the estimated results are fairly consistent with the measured absolute water levels, with an estimation accuracy of about 0.2 meters in ascending and descending orbit images. The accuracy of the water level estimation is better than 20% of the image's spatial resolution, which fits within the acceptable error range. However, only less than 20% of the total number of images are suitable for using this method to estimate water levels, indicating significant limitations of this estimation method in hydrologically complex and dynamically fluctuating environments like the Yangtze River, and not all SAR images are capable of accurate calculations.
By combining LiDAR data, the methods described in this paper can be applied to simulate bridges over water surfaces. Additionally, the water level can be estimated through the double scattering phenomena of specific target points on the bridge surface.
Figure 12: Estimated water level from SAR acquisitions.
## References
* [1] [PERSON], [PERSON], [PERSON], 2006. Study on the detection algorithm of bridge over water in SAR image based on fuzzy theory. _First International Conference on Innovative Computing, Information and Control (ICICIC)_, vol. 3, 641-644.
* [2] [PERSON], [PERSON], 1995. Recognition of roads and bridges in SAR images. _Proceedings of the International Radar Conference_.
* [3] [PERSON], [PERSON], 2008. Bridge height estimation from combined high-resolution optical and SAR imagery. International Society for Photogrammetry and Remote Sensing, Beijing, China.
* [4] [PERSON] et al., 2006. Polarimetric analysis of radar signature of a man-made structure. _Asia-Pacific Microwave Conference Proceedings (APMC)_.
* [5] [PERSON], [PERSON], 2009. Estimation of the bridge height over water using SAR image data. _Journal of Remote Sensing_, 13(3), 385-390.
* [6] [PERSON] et al., 2015. The characteristics of the multipath scattering and the application for geometry extraction in high-resolution SAR images. _IEEE Transactions on Geoscience and Remote Sensing_, 53(8), 4687-4699.
* [7] [PERSON], [PERSON], [PERSON], et al., 2008. Feature extraction and change detection for bridges over water in airborne and spaceborne SAR image data.
* [8] [PERSON] et al., 2017. Influence of azimuth angle and water surface roughness on SAR imagery of a bridge. _IEEE International Geoscience and Remote Sensing Symposium (IGARSS)_, Fort Worth, TX, USA.
* [9] [PERSON] et al., 2021. Monitoring river water level using multiple bounces of bridges in SAR images. _Advances in Space Research_.
* [10] [PERSON], [PERSON], 2022. Accurate water level measurement in the bridge using X-band SAR. _IEEE Geoscience and Remote Sensing Letters_.
* [11] [PERSON], 2017. Research on key techniques of stereo matching for stereo-radargrammetry. PhD thesis, China University of Mining and Technology.
* [12] [PERSON], 2020. Optimization of corner reflector and its application in spaceborne SAR image. PhD thesis, Wuhan University.
* [13] [PERSON] et al., 2021. Absolute geolocation of corner reflectors using high-resolution synthetic aperture radar images. _Journal of Tongji University (Natural Science)_, 49(8), 1202-1210.
* [14] [PERSON] et al., 2007. Submillimeter accuracy of InSAR time series: experimental validation. _IEEE Transactions on Geoscience and Remote Sensing_, 45(5), 1142-1153.
|
# Object Detection Using Neural Self-Organization
[PERSON]
Department of Photogrammetry and Geoinformatics
Budapest University of Technology and Economics
H-1111 Budapest, Muegyetem rkp. 3, Hungary
[EMAIL_ADDRESS]
###### Abstract
The paper presents a novel artificial neural network type, which is based on the learning rule of the Kohonen-type SOM model. The developed Self-Organizing Neuron Graph (SONG) has a flexible graph structure compared to the fixed SOM neuron grid, together with an appropriate training algorithm. The number and structure of the neurons express the preliminary human knowledge about the object to be detected, which can be checked during the computations. The inputs of the neuron graph are the coordinates of image pixels derived by different image processing operators, from segmentation to classification. The newly developed tool has been applied to several types of image analysis tasks: from detecting building structure in high-resolution satellite imagery via template matching, to the extraction of road network segments in aerial imagery. The presented results have proved that the developed neural network algorithm is highly capable of analyzing photogrammetric and remotely sensed data.
Neural networks, Object detection, Modeling, Data structure
## 1 Introduction
Artificial neural networks have a quite long history. The story started with the work of [PERSON] and [PERSON] in 1943 ([PERSON] 1993). Their paper presented the first artificial computing model after the discovery of the biological neuron cell in the early years of the twentieth century. The [PERSON] paper was followed by the publication of [PERSON] in 1958, in which he focused on the mathematics of the new discipline ([PERSON] 1958). His perceptron model was extended by two famous scientists in 1969: [PERSON] and [PERSON].
The year 1961 brought the description of competitive learning and the learning matrix by [PERSON] ([PERSON] 1989). He published the "winner-takes-all" rule, which is widely used also in modern systems. [PERSON] wrote a paper about biological self-organization with strong mathematical connections ([PERSON] 1973). The best-known scientist is [PERSON], who published several books on the _instar_ and _outstar_ learning methods, associative and correlation matrix memories, and - of course - self-organizing (feature) maps (SOFM or SOM) ([PERSON] 1972; [PERSON] 1984; [PERSON] 2001). This neuron model has had a great impact on the whole spectrum of informatics: from linguistic applications to data mining.
[PERSON]'s neuron model is commonly used in different classification applications, such as the unsupervised clustering of remotely sensed images. The paper of [PERSON] and [PERSON] demonstrates how the SOM model suits object matching purposes with images of tools under translation, rotation and scale invariant circumstances ([PERSON] 1997).
The goal of automatic road detection is very clear in the paper of [PERSON] et al., who apply a two-level processing technique combining road segment extraction and a production net ([PERSON] 1997). [PERSON] et al. describe a context-based automatic technique for road extraction ([PERSON] 1997), while [PERSON] and his colleagues developed a road extractor for urban areas ([PERSON] 2001). The research of [PERSON] et al. focuses on simulated linear features and the detection of paved roads in classified HYDICE hyperspectral images with the use of [PERSON]'s SOM method ([PERSON] 1999; [PERSON] 2001).
## 2 Self Organizing Neural Networks
The self-organizing feature map (SOFM) or self-organizing map (SOM) model is based on the unsupervised learning of the neurons organized in a regular lattice structure. The topology of the lattice is triangular, rectangular or hexagonal. The competitive neurons have a position (weight) vector of dimension \(n\):
\[\mathbf{m}=\left[\mu_{1},\mu_{2},\ldots,\mu_{n}\right]^{T}\in\mathbb{R}^{n} \tag{1}\]
Furthermore, the input data points have a similar coordinate vector:
\[\mathbf{x}=\left[\xi_{1},\xi_{2},\ldots,\xi_{n}\right]^{T}\in\mathbb{R}^{n} \tag{2}\]
The learning algorithm consists of two blocks: the first is the rough weight modification, called _ordering_; the second is the fine setting, called _tuning_. The iterative algorithm starts with the neuron competition: a winner neuron must be found by the evaluation of the following formula:
\[c=\arg\min_{i}\left\{\left\|\mathbf{x}-\mathbf{m}_{i}\right\|\right\} \tag{3}\]
where \(i=1\ldots q\in\mathbb{N}\), having \(q\) neurons, and \(c\in\mathbb{N}\).
The second step of the learning algorithm is a weight update for epoch \(t\)+1 in a simple way:
\[\mathbf{m}_{i}\left(t+1\right)=\mathbf{m}_{i}\left(t\right)+h_{ci}\left(t \right)\left[\mathbf{x}-\mathbf{m}_{i}\left(t\right)\right] \tag{4}\]
where \(t\) means the epoch, and the coefficient function is defined as follows:
\[h_{ci}\left(t\right)=\begin{cases}\alpha\left(t\right)&\text{if }\ i\in N_{c} \left(t\right)\\ 0&\text{otherwise}\end{cases} \tag{5}\]
This formula contains the speciality of the Kohonen model, namely the time-dependent consideration of the neurons' neighborhood in the form of \(N_{c}\left(t\right)\). The neighborhood can be interpreted as concentric squares around the winner in case of a rectangular neuron lattice. If the neighboring neurons are in the acceptable region, their weights will be updated by a factor of \(\alpha(t)\). The limits of this function are known: it must be between 0 and 1, and it usually decreases monotonically.
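A single training step of this learning rule can be sketched in NumPy as follows (a minimal illustration; the function name, the array layout and the Chebyshev test for the "concentric squares" neighborhood are assumptions, not taken from the paper):

```python
import numpy as np

def som_step(weights, grid, x, alpha, radius):
    """One Kohonen update: find the winner (Eq. 3), then move every
    neuron inside the neighborhood N_c toward the input x (Eqs. 4-5).

    weights : (q, n) neuron position vectors m_i
    grid    : (q, 2) lattice coordinates of the neurons (rectangular lattice)
    x       : (n,)   input data point
    alpha   : learning rate alpha(t), between 0 and 1
    radius  : neighborhood radius defining N_c(t)
    """
    # Winner: neuron whose weight vector is closest to x (Eq. 3)
    c = np.argmin(np.linalg.norm(weights - x, axis=1))
    # Chebyshev distance on the lattice gives concentric squares around c
    in_hood = np.max(np.abs(grid - grid[c]), axis=1) <= radius
    # h_ci = alpha inside N_c, 0 outside (Eq. 5); update per Eq. 4
    weights[in_hood] += alpha * (x - weights[in_hood])
    return c, weights
```

Shrinking `alpha` and `radius` over the epochs reproduces the ordering and tuning phases described above.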
## 3 The SONG Model
When the author of this paper applied the Kohonen-type neuron model to road junctions, great anomalies were recognized. They were caused by the regularity of the grid structure, which is quite far from the graph-like shape of road crossings.
The basic idea of the extension of the SOM model was to integrate the graph structure with its flexibility in form. The newly named _Self Organizing Neuron Graph_ (shortly SONG) model has an undirected acyclic graph, where the computational elements are located in the graph nodes. The edges of the graph ensure the connections between the participating neurons. The learning algorithm of the SONG technique has kept the Kohonen rule for winner selection and weight update, but the neighborhood had to be formulated in a different way.
The connections (edges) of the graph are mostly given by the adjacency, commonly in the form of a matrix. The elements of the matrix are defined as in Equation 6:
\[\mathbf{A}_{ij}=\begin{cases}1&\text{if node i and j are connected}\\ 0&\text{otherwise}\end{cases} \tag{6}\]
This matrix is theoretically a symmetric square matrix of size \(q\times q\). The nonzero matrix elements represent the direct relations between two neurons. The term adjacency can be extended in order to express more information than a binary "direct neighbor or not". Therefore graph algorithms are implemented which derive the generalized adjacency matrix \(\mathbf{A}^{k}\), where \(k\leq n\). This generalized adjacency matrix has only zero elements in the main diagonal (there are no loops); all the other values give the number of the intermediate edges between two nodes. The simplest generalization algorithms are the _matrix power technique_ ([PERSON] 1973; [PERSON] 1981) and the _Floyd-Warshall method_ ([PERSON] 1962; [PERSON] 2002).
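The matrix power technique can be sketched as follows (a minimal NumPy illustration under the definition above; the function name and the `k_max` cut-off are assumptions introduced here, not from the paper):

```python
import numpy as np

def generalized_adjacency(A, k_max):
    """Generalized adjacency by the matrix-power technique: entry (i, j)
    holds the smallest number of edges on a path from i to j (up to k_max);
    the main diagonal stays zero. A is a symmetric 0/1 matrix (Eq. 6)."""
    q = A.shape[0]
    G = np.zeros((q, q), dtype=int)
    reach = np.eye(q, dtype=bool)      # nodes reachable with a walk of length 0
    power = np.eye(q, dtype=int)
    for k in range(1, k_max + 1):
        power = power @ A              # (A^k)_ij > 0  <=>  a walk of length k exists
        newly = (power > 0) & ~reach   # first time i reaches j: shortest hop count is k
        G[newly] = k
        reach |= newly
    return G
```

Because the graph connections are fixed, this matrix can indeed be computed once, before the iterations start.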
The generalized adjacency matrix makes the modification of the learning coefficient formula for graphs possible:
\[h_{ci}\left(t\right)=\begin{cases}\alpha\left(t\right)&\text{if }\ A_{ci}^{k}<d \left(t\right)\\ 0&\text{otherwise}\end{cases} \tag{7}\]
In the above expression the condition is evaluated along the winner neuron's row of the matrix.
As mentioned regarding the Kohonen model, there is an ordering and a tuning phase. These two computational blocks are inherited in the SONG technique, too. Because the graph adjacency is invariant during the run (the connections of the neurons are fixed), the adjacency matrix can be created prior to the iterations ([PERSON] 2003a).
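Combining the Kohonen winner selection with the graph neighborhood of Eq. 7 gives the following minimal sketch of one SONG update (the function name and argument layout are illustrative assumptions; `Agen` is the precomputed generalized adjacency matrix):

```python
import numpy as np

def song_step(weights, Agen, x, alpha, d):
    """One SONG update: the winner is selected as in the SOM, but the
    neighborhood is read from the winner's row of the generalized
    adjacency matrix Agen instead of a lattice (Eq. 7)."""
    # Winner selection is unchanged from the Kohonen rule (Eq. 3)
    c = np.argmin(np.linalg.norm(weights - x, axis=1))
    # Neurons within graph distance d of the winner are updated;
    # Agen[c, c] == 0 < d, so the winner itself is always included
    in_hood = Agen[c] < d
    weights[in_hood] += alpha * (x - weights[in_hood])
    return c, weights
```

With `d` decreased down to 1 (direct neighborhood) over the epochs, this reproduces the ordering-then-tuning behavior described for the SONG runs below.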
The calculations have been accelerated applying buffering around the neurons. Such buffers are to be interpreted as boundaries for the potential data points, so the distance computations and therefore the whole processor load can be limited. Figure 1 illustrates the generalized adjacency in gray shades and the buffers in case of a letter figured graph.
The described SONG model is based on the adjacency information. During the developments another type has been created, which is based on the distance between the neurons. In the distance SONG model the generalized distance matrix \(\mathbf{D}^{k}\) is used instead of \(\mathbf{A}^{k}\), but this second algorithm is significantly slower because the distance matrix is not invariant: the neuron weights (positions) change in every iteration step, so the distances must be recomputed ([PERSON] 2004). As a result, the distance SONG method can only be applied to refine the positions obtained by the adjacency SONG algorithm.
## 4 Results
The application of the SONG technique to image analysis tasks is discussed with three types of examples. The first example is a template matching use case, where a previously given fiducial structure has to be matched. The complexity of the graphs is shown in the second group of experiments, where a given building structure has to be found and the structure of a complex building has to be described in the form of a neuron graph. The last tests implement a running window-type application of the SONG algorithm: segments of a road network are detected by a kernel graph.
Figure 1: Neuron graph with buffers, where the neurons are colored by the generalized adjacency values
### Fiducial detection
The interior orientation of an aerial image can be automated if the fiducial marks can be detected without human interaction. The camera manufacturers apply specific figures as fiducials, which have given geometry; they can be described even in graphs. The rough skeleton of the fiducial mark was drawn as a graph (Figure 2a).
In the first application a color Wild RC20 aerial camera image was split into its RGB components, then the red channel was segmented by histogram thresholding. The pixel coordinates of the binary image were the data points for the test run.
Because the fiducials of this camera type are in the corners of the images, only the small image corners were cut out and preprocessed. The result of the algorithm can be seen in Figure 2b.
The ordering algorithm had 100 epochs, the starting and end learning rates were 0.9 and 0.0. The adjacency distance has been decreased from 4 to 1 (direct neighborhood). The tuning had 1000 epochs, 0.1 starting learning rate and zero at the end, while the neighborhood was set back from 2 to 1.
### Detecting building structure
The detection of man-made objects focuses very often on buildings. The SONG method was therefore tested in such tasks. There were two experiments executed: (1) the right position of a given structured building had to be found and (2) the structure of a given (positioned) building had to be detected.
The first test applied one of the first images taken of the Pentagon building in Washington DC after the attack of 11 September 2001. The image was captured by the QuickBird sensor with a ground resolution of 0.6 m. The initial neuron graph was given: the structure of the famous building is known (Figure 3a). The input data points were produced by a maximum likelihood classification of the color image pixels, for which training areas of two roof types were marked and used. The image classification resulted in a binary image, where the true pixels were the elements of the roof (Figure 3b). The coordinates of these pixels were read out and fed into the SONG algorithm. The ordering phase of the algorithm had 300 epochs; the starting learning rate and neighborhood were 0.01 and 6, while the finishing state had values of 0 and 1 (direct neighborhood), respectively. In the starting step (Figure 3c) the 20 neurons of the graph were placed somewhere within the building; after the 10000-step tuning (with a learning rate interval of 0.003 - 0 and strict direct neighborhood) the graph found the building (Figure 3d). During the tuning phase the direct neighborhood ensured that the neurons merely refined their geometric positions instead of making rough changes.
The iterative evaluation of the 3302 roof pixels took only a couple of minutes on an Intel PIII machine ([PERSON] 2003b).
The other building analyzing test used another satellite image: a 1 m IKONOS image, taken over Singapore in August 2000. The input data set was established by a rule-based RGB pixel classification, focusing similarly on the roof. In order to get a smaller training data set, the identified points were resampled. The test applied four given graph structures having 11, 13, 19 and 21 neurons in the nodes (Figure 4).
The four variations were controlled quite similarly: the ordering phase had about 200-600 epochs, the tuning 500-3000. The learning rate was between 0.01 (ordering) and 0.00001 (tuning). The starting neighborhood was increased from 4 to 10 as the complexity of the neuron graph grew in the ordering; the opposite tendency was applied during the tuning, with the neighborhood decreased from 2 to 1 ([PERSON] 2003c).
Figure 4: Detecting building structure (Singapore) in IKONOS image – an intermediate state with 13 neuron graph
Figure 3: The Pentagon test
Figure 2: Search of fiducials by SONG.
### Detecting roads
The presented self-organization technique can also be applied in a running-window environment. In this case the necessary structuring (moving) element was a simple cross having 5 processing elements (neurons) and 4 connections. The test image was a 0.4 m ground resolution black-and-white orthophoto of the Frankfurt am Main region. The image cutoff was captured over a sparse village area near the Frankfurt international airport.
The image preprocessing was solved by thresholding the intensities, so the further steps used binary data. Adequately parameterized, the technique identified the junctions, as shown in Figure 5.
The road detection approach is a new type of SONG application: the input image is partitioned, then the previously defined structuring element, fixed throughout the whole detection procedure, is evaluated by the SONG technique. The developed SONG algorithm was limited to the ordering phase; the tuning was not so important, and the processing speed could be accelerated in this way. This means that the structuring element must be created and described before the run. The partitioning is carried out by creating a regular square raster on the image. The applied structuring element was the same as in Figure 5, a 5-neuron cross with 4 connections. The adjacency matrix and the rule to build the initial neuron weights were constructed in the starting step.
The main question in the practical implementation was (1) to find the right size of the image partitioning and (2) to get the right parameter set for this special case. Both questions were answered after executing an experiment series with various size partitions and different SONG control parameters. The best result after visual evaluation was a partition size of 20 pixel by 20 pixel (\(8\times 8\) m).
The control parameters do not vary widely, because the given neuron graph is small; the largest possible neighborhood is 2. The free parameters were the starting learning rate and the number of epochs. From earlier experience, these two controls interact strongly, so both were varied during the tests: learning rates between 1.0 and 0.5 and the number of epochs between 20 and 500.
The SOM, and therefore (by inheritance) the SONG, shows a known behavior: after a "critical" number of epochs the system reaches a stable state, i.e. the neuron weights no longer change. This critical epoch number was searched for and found to be 50 for the \(20\times 20\) partitions. The effective starting learning rate was then found to be 1.0.
The essence of the algorithm for this running-window style computing model is the following:

```
for all partitions
    selecting the data points
    if the set is not empty
        reading pixel coordinates
        for all ordering epochs
            calculating learning rate and d(t)
            for all data points
                for all neurons
                    calculating distance
                endfor
                for all neurons
                    if in calculated neighborhood
                        modifying weights
                    endif
                endfor
            endfor
        endfor
    endif
endfor
```

Figure 5: Recognizing a four-arm junction by the cross kernel

Figure 6: Road detection by the running window SONG algorithm
The above algorithm has a strongly nested loop structure. The method is therefore computation intensive and sensitive to the loop control parameters. With the number of epochs, neurons and the partition size fixed, the number of data points also has a great influence on the performance. It was observed that smaller partitions resulted in faster runs than bigger ones, which is not unexpected considering the above algorithm.
The function \(d(t)\) is the permanently decreasing neighborhood.
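The partitioning and data-point selection in the outer loop can be sketched as follows (a minimal NumPy illustration; the function name, the dictionary return type and the 20-pixel default tile size are assumptions made here for clarity):

```python
import numpy as np

def partition_data_points(binary_img, tile=20):
    """Partition a thresholded image into tile x tile windows and collect,
    per non-empty window, the pixel coordinates that would feed a SONG
    ordering run (the "selecting the data points" step of the outer loop)."""
    h, w = binary_img.shape
    tiles = {}
    for r0 in range(0, h, tile):
        for c0 in range(0, w, tile):
            ys, xs = np.nonzero(binary_img[r0:r0 + tile, c0:c0 + tile])
            if ys.size:                 # "if the set is not empty"
                # store global (row, col) coordinates of the true pixels
                tiles[(r0, c0)] = np.column_stack((ys + r0, xs + c0))
    return tiles
```

Since every partition is processed by the same independent steps, this outer loop is also the natural unit for the parallelization mentioned in the conclusions.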
The result of the image analysis can be seen in Figure 6, which also shows the regular partitions. The structuring elements have visually detected most of the road segments and their junctions; only some small errors occurred. These errors must be eliminated by implementing constraints on the final neuron graph structure during the processing, which is the topic of current research.
## 5 Conclusion
The newly developed self-organizing neuron graph is a flexible tool for analyzing different types of remotely sensed images. The template matching problem was solved by a given fiducial mark graph. Because camera manufacturers have their own figures for fiducials, the SONG algorithm is capable of fitting not only a single neuron graph to the image part assumed to contain the fiducial, but a whole series. Measuring the fitness of all matches, the highest fitness furthermore identifies the type of camera. In this way, the SONG technique recognizes the camera itself. The SONG matching is fast; it can provide an alternative solution in automatic interior orientation.
Building and road detection belong to the object detection approaches, which are among the most interesting topics in modern digital photogrammetry. Building detection, if we have any preliminary hypothesis about the building structure, is a relational matching task. In this sense, the SONG method can help to realize similar solutions, which are mostly related to relational matching. The shown example with the Pentagon building can be generalized: if one can describe the structure of a building in the form of a graph, it can be found in an image using the developed self-organizing technique; furthermore this building can be "traced" in an aerial image strip or block. Only the same preprocessing operators (e.g. classifiers) must be executed prior to the SONG run.
The other presented building detection experiment proved that this method can be used to detect the structure of an unknown building by creating a hypothesis neuron graph and testing its suitability. The given example has shown that this hypothesis testing can be a way to improve the current version of the algorithm. The test was an alternative solution for getting the skeleton of an object, solved by the application of artificial intelligence instead of the known classical skeleton operator.
The most interesting test was the road detection. In this situation a black-and-white orthoimage ensured the necessary data points by simple thresholding, which is a very fast image processing technique. After the segmentation, the SONG algorithm has found the rough road structure. This can be interpreted in two ways.
First, these road segments can be subjected to further grouping methods to obtain the real road network. In this way the creation of a classical GIS-type topology, followed by a topology analysis and restructuring, can lead to the network.
The other possible application of the obtained results is their interpretation as a first approximation of the road network. Using this philosophy, the subsequent processing steps can be buffering the graph edges. With the fusion of the independent buffers, we get a subsample of the image where the probability of roads is relatively high. Thereafter different checking algorithms can be executed, which test whether these image subsamples contain roads. If the test is positive, the exact road segment and its geometric position can be detected using more accurate methods. In this checking the SONG technique can be involved in the complex algorithm, i.e. including the tuning phase as well.
The result image of the road detection contains some noisy parts, especially in urban areas. The method should be used with a preliminarily masked input image that excludes urban land cover. An alternative would be to establish an urban parameter set, which could perform better under built-up circumstances.
The last test points to a new possibility for using highly parallelized algorithms in photogrammetric image analysis, because after partitioning the input image, the same steps of the algorithm must be evaluated for each image parts. If a multi processor computing environment (e.g. dual Pentium PC, or even computer cluster or grid) is available, the method can be implemented and used.
The general assumption of the SONG technique is to own an adequate initial neuron graph. A nice initialization could be the use of an obsolete topographic map, where the novel method would be responsible to check the old map (database) content to the new image information. In this meaning the method suits also to the map update procedures.
The road detection test took about two minutes in a Matlab (interpreted) environment on an Intel P4 1.7 GHz machine; all the other tests are significantly faster thanks to the compiled (MS Visual C++) realization. The image size was \(555\times 827\) pixels. This fact also underlines the performance power of the method.
As the paper presented, the original Kohonen-type SOM can be generalized to graphs. The newly developed SONG method has proved its capability in different photogrammetric and remote sensing tasks. The technique has shown how to cope with different types of tasks using the same algorithmic background. The method is important from the point of view of artificial intelligence and neural networks, because its general suitability and applicability have also been proven.
## Acknowledgements
The author expresses his thanks to the Alexander von Humboldt Foundation, with whose support the work was started; to the Institute of Photogrammetry and GeoInformation, Hannover; and to the Hungarian Higher Education Research Program (FKFP) for partly financing the research.
## References
* [PERSON] (2003a) [PERSON], 2003a. Neural Self-Organization Using Graphs, _in: Machine Learning and Data Mining in Pattern Recognition, Lecture Notes in Computer Science,_ Vol. 2734, Springer Verlag, Heidelberg, pp. 343-352
* [PERSON] (2003b) [PERSON], 2003b. Graph Based Neural Self-Organization in Analyzing Remotely Sensed Images. _IEEE International Geoscience and Remote Sensing Symposium (IGARSS) 2003_, Toulouse, Vol. VI, pp. 3937-3939
* [PERSON] (2003c) [PERSON], 2003c. Neural Self-Organization in Processing High-Resolution Image Data, _ISPRS-Earsel Joint Workshop High Resolution Mapping from Space 2003_, Hannover, p. 6
* [PERSON] (2004) [PERSON], 2004. Generalization of topology preserving maps: A graph approach. _International Joint Conference on Neural Networks_, Budapest, _accepted for publication_
* [PERSON] et al. (1997) [PERSON] [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], 1997. Context-supported road extraction. _Automatic Extraction of Man-Made Objects from Aerial and Space Images (II) Monte Verita_, Birkhauser Verlag, Basel, pp. 299-308
* [PERSON] (1989) [PERSON], 1989. Neural network models for pattern recognition and associative memory. _Neural Network_, No. 2, pp. 243-257
* [PERSON] (eds): _Integrated Spatial Databases - Digital Images and GIS_, Portland, Lecture Notes in Computer Sciences 1737, Springer, Berlin, pp. 20-33
* [PERSON] et al. (2001) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], 2001. Self-organised clustering for road extraction in classified imagery. _ISPRS Journal of Photogrammetry & Remote Sensing_, Vol. 55, No. 5-6, pp. 347-358
* [PERSON] and [PERSON] (1973) [PERSON], [PERSON], 1973. _Graph theory in modern engineering_, Academic Press, New York
* [PERSON] et al. (2001) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], 2001. Road extraction focussing on urban areas. in: [PERSON] et al. (eds): _Automatic Extraction of Man-Made Objects from Aerial and Space Images (III)_, Swets & Zeitlinger, Lisse, pp. 255-265
* [PERSON] (1972) [PERSON], 1972. Correlation matrix memories. _IEEE Transactions on Computers_, Vol. 21, pp. 353-359
* [PERSON] (1984) [PERSON], 1984. _Self-organization and associative memory_. [PERSON], Berlin
* [PERSON] (2001) [PERSON], 2001. _Self-organizing maps_. [PERSON], Berlin
* [PERSON] (1973) [PERSON], 1973. Self-organization of orientation sensitive cells in the striate cortex. _Kybernetik_, No. 14, pp. 85-100
* [PERSON] (2002) [PERSON], 2002. http://www.cs.ucf.edu/~reinhard/classes/cop3503/floyd.pdf
* [PERSON] (1958) [PERSON], 1958. The perceptron. A probabilistic model for information storage and organization in the brain. _Psychological Review_, Vol. 65, pp. 386-408
* [PERSON] (1993) [PERSON], 1993. _Theorie der neuronalen Netze. Eine systematische Einfuhrung_. [PERSON], Berlin
* [PERSON] (eds): _Photogrammetric Week_, Wichmann, Heidelberg
* [PERSON] and [PERSON] (1997) [PERSON], [PERSON], 1997. Two-dimensional object matching using Kohonen maps. _IEEE International Conference on Systems, Man and Cybernetics_, Orlando, Vol. 1, Part 1/5, pp. 620-625
* [PERSON] and [PERSON] (1981) [PERSON], [PERSON], 1981. _Graphs, networks and algorithms_, Wiley, New York
* [PERSON] (1962) [PERSON], 1962. A theorem on Boolean matrices, _Journal of the ACM_, Vol. 9, No. 1, pp. 11-12
|
# Cost Effective Spherical Photogrammetry: A Novel Framework for the Smart Management of Complex Urban Environments
[PERSON] 1
[PERSON] 1
[PERSON] 1
[PERSON] 2
[PERSON] 3
[PERSON] 1
1 Dipartimento di Ingegneria Civile, Edile e Architettura (DICEEA), Facolta di Ingegneria, Universita Politecnica delle Marche, 60100 Anconna (AN), Italy - (s.chiappini)@pm.unipvm.it, [EMAIL_ADDRESS], (e.s.malinverni, r.pierdicca)@staff.unipvm.it
works focus on its potential for Smart Cities. This technique provides a detailed visual representation of the surrounding environment, besides an objective and metric survey.
The project was driven by a joint venture between the academic world and a private company. Specifically, the desired result was the creation of a management service to facilitate the detection of illegal advertising panels, verified through the integration between the 3D model and the cadastral data. This integration of geo-referenced data into a cadastral register will lead to a whole series of innovative services aimed at ensuring the transparency and traceability of tax collection and the fight against tax evasion.
The workflow can thus be summarized as follows: after the collection of images through 360\({}^{\circ}\) cameras, the dataset is processed to obtain a complete 3D reconstruction of the city. The 3D model has been validated with laser scanner data and georeferenced with GNSS positioning. The computation yields good accuracy, in line with the tolerance allowed. Finally, a WebGIS has been developed, which exploits a WebGL interface to interact with the 3D model and a GeOB to manage cadastral and fee information in an all-in-one solution.
The synthesized workflow is shown below (Figure 1).
## 2 Related Work
The world population has increased significantly in recent decades, as has its longevity. It is expected that about 70% of citizens will live in urban areas by 2050. This quick increase will cause an unavoidable displacement of people from rural areas to more urbanized areas1. The aggregation of new citizens in a disorderly way, accompanied by the new needs of residents and different standards of living, represents new challenges for governments. They will have to deal with issues related to waste disposal, scarcity of financial resources, increasing air pollution and the associated risks to human health, increased traffic, and the maintenance of infrastructure to avoid degradation and wear ([PERSON] et al., 2016). Local administrators will be expected to make the right decisions in support of the welfare of citizens. To support these choices, it is appropriate to think of a new city model, the so-called Smart City: an infrastructure that allows the use of Information and Communication Technologies (ICT) such as Internet of Things (IoT), Big Data, Cloud Computing, "Mobile broadband" and "Short range wireless" ([PERSON] et al., 2017) applied to urban management. A Smart City is a city that employs an intelligent strategy of systems oriented to the management of complex information, mainly collected in real time ([PERSON] et al., 2018).
Footnote 1: https://www.unfpa.org
In addition, its development requires a common platform that provides the visualization of heterogeneous information and sensor network sources. A key role is played by Geographic Information Systems (GIS), which have an effective application for the connection and interoperability of data systems. It also provides an excellent basis for: efficient urban and spatial planning, spatial analysis, expedited decision making, implementation and verification of the maintenance and monitoring plan. ([PERSON] et al., 2015).
In this context, visualisation technologies are varied, ranging from simple to advanced. The data that can be included are high-resolution photogrammetric images, cadastral data, urban planning data, municipal area photos, building addresses, 3D data, service infrastructure and all kinds of corporate data related to the work of municipal services. The management and interfacing of static and dynamic data in a 3D environment therefore falls under the umbrella of CityGML ([PERSON] et al., 2020), which exploits open systems and international standards such as those provided by the Open Geospatial Consortium (https://www.ogc.org/).
An excellent way to obtain the data is given by the outdoor panoramic photography, applied not only for commercial purposes, but also for the three-dimensionality of the metric documentation especially in the fields of architecture and archaeology ([PERSON], 2012).
The advantages of SP are the high resolution of the images obtained, the FOV up to 360°, the low cost, the completeness of the information and the speed of acquisition. The goal of SP is to allow anyone to produce documentation in any location, both indoor and outdoor, with simple and cheap devices. Currently, the most common practice for creating low-cost cylindrical or spherical panoramas relies on the collection of linear arrays with a rotating camera on a panoramic head, with very high metric performance ([PERSON], 2004). In addition, in order to provide 360° coverage with accurate 3D measurement capabilities using panoramic cameras, strict sensor and system calibration procedures have been implemented to reconstruct a highly detailed and complete 3D urban model of the road environment ([PERSON] et al., 2018).
In this regard, specific research has recently been carried out. In the field of Cultural Heritage, this includes the development of a 3D GIS of the urban environment and the possibility of interacting with the huge amount of semantic information contained in the 3D geospatial model of a city ([PERSON] et al., 2016). These platforms are able to view, store, analyze and share 3D data to improve decision making, planning and problem solving. This methodology can also be applied to the management of infrastructure networks, to perform graphical analyses and interactive data visualizations, and to simulate real working environments, even remotely ([PERSON] et al., 2014).
The drive to provide data to all users and to create cities suited to the needs of citizens led the European Community to launch the i-SCOPE project, "Interoperable Smart City services through an Open Platform for urban Ecosystems". Its aim is to develop an open platform based on a 3D model of the urban territory (3D CityGML Model) able to provide innovative and intelligent services for Smart Cities through the precise modelling of urban components and an integrated, interconnected set of information flows related to the urban context ([PERSON] et al., 2013). A similar case was carried out in the province of Trento, with the integration of the 3D platform i-SCOPE with the cadastre ([PERSON] et al., 2014). In doing so, various types of data from different sources were integrated, ensuring the interoperability between taxes and the information attached to each three-dimensional object represented.
The work presented in ([PERSON] et al., 2014) is the one closest to the research presented here. It explores the potential of immersive videography in photogrammetry from multiple cameras, with requirements and specifications, allowing the user to take measurements on advertising panels from panoramic images. In ([PERSON] et al., 2019) a framework for illegal billboard advertising detection is proposed, based on machine learning techniques.

Figure 1: Workflow to display a point cloud in a web tool from Spherical Photogrammetry
## 3 Case Study
As a pilot case to demonstrate the efficiency of the proposed workflow, the issue of advertising panels is tackled. The illegal appropriation of advertising posters is increasingly pressing for local authorities, because of both the lack of tax collection and the resulting urban decay. Their management and monitoring are either handled by the municipalities themselves or contracted out to external collection agencies. The company financing the research project, Andreani Tributi Srl, takes care of the collection of the local tax on advertising in the City of Brescia (Italy). The road stretches chosen in this analysis are part of an area that has already been manually surveyed by the Company's operators. This choice allows studying the benefits and efficiency of working remotely. The two road sections, Via San Polo and Via Sant'Eufemia, have both billboards and shop signs. Figure 2 shows the road in Via San Polo, with the trajectory carried out by the user and some images extracted from the immersive video.
## 4 Acquisition and Methodology
In this section the data collection methodology is described. More specifically, the SP acquisition and processing is compared with ground truth data (a laser scanning point cloud) in order to validate the methodology. A brief description of the existing data to be integrated in the framework is provided as well.
### Data acquisition
The experiment was conducted using the Nikon KeyMission 360 camera (Figure 3a), extracting the spherical images from video. The videos were acquired in different ways according to the road sections and their length: in the first case, in Via San Polo, with an operator walking along the road; in the second case, with the operator placing the camera at the end of an extensible pole and shooting a video while proceeding on a vehicle along the route of interest. The use of video shooting results in a redundant, high frame count and a loss of accuracy of the photogrammetric orientation. This leads to the search for a methodology that optimizes acquisition and restitution with the lowest projection error. It was decided to use a GNSS receiver to survey, on the ground, the coordinates of clearly visible points such as the edges of manhole covers on the road surface or distinctive road markings. The targets placed on the ground were acquired with the GPS HiPer HR2 (Figure 3b) using the RTK (Real Time Kinematic) method. On top of GNSS satellite signals, an RTK receiver takes in an RTCM correction stream and then calculates its location with 1 cm accuracy in real time. The rate varies between receivers (Base and Rover), but most will output a solution at least once per second. This step allows obtaining a 3D point cloud with high precision. This manual procedure was very time-consuming due to the continuous stop and go. Every 300 meters, the operator not only surveyed the point, but also took a picture of the surrounding environment in order to avoid errors and doubts in the positioning of the marker during the processing phase. This was necessary because the panoramic camera is not equipped with an internal GNSS receiver.
Parallel to the video shooting, a survey was carried out with the Kaarta Stencil 2 laser scanner (Figure 3c) on the same road sections, in the same way as described above. This instrument was chosen to obtain a point cloud model to serve as ground truth. Indeed, this paper describes the method for evaluating the point clouds obtained from panoramic images and the resulting differences with respect to the reference model from the Kaarta Stencil 2, as described in paragraph 5.1.
Footnote 2: https://www.topconpositioning.com/it/gnss-and-network-solutions/ricevitori-gnss-integrati/hiper-hr
### Data processing
The dense point cloud was processed using software based on Structure from Motion algorithms, fed with the spherical images. The software combines close-range images through the identification of homologous points, supporting the automatic calibration of the camera and the computation of interior and exterior orientation.
The first operation in data processing was the extraction of equirectangular frames from the panoramic video.
At this point the number of frames to use for the image alignment must be considered. The literature recommends choosing a minimum number of frames as a function of the video capture rate ([PERSON] et al., 2014). The speed of the vehicle must also be considered: in the case study it was 50 km/h, i.e. 13.89 m/s. This means that during every second of driving at least 14 panoramic photos should be taken (Figure 4). Even though the baseline is very short and in theory guarantees the overlap with a smaller number of images than calculated, this was not possible. Due to the low image resolution and the incident light during the survey, it was necessary to work with a number of frames equal to or even higher than the above calculation in order to correctly orient the cameras with the rest of the model. This caused major slowdowns in the alignment process. The manual frame extraction operation is a fundamental and at the same time complex phase, because the user has to divide the video into several ranges according to the speed of the vehicle.

Figure 3: The instruments used in this survey. (a) Camera Nikon KeyMission 360. (b) HiPer HR receiver by Topcon. (c) Laser Scanner Kaarta Stencil 2.

Figure 2: The route portion considered for the experiments (Via San Polo, Brescia) and panoramic images. (a) Trajectory estimated from the immersive video (red line). (b) Two frames extracted from the immersive video along the trajectory.
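The frame-count reasoning above can be sketched as a small helper. The 1 m target baseline between consecutive frames is our assumption, inferred from the 50 km/h → 14 frames/s figure in the text, not a parameter stated by the authors:

```python
import math

def min_frames_per_second(speed_kmh, max_baseline_m=1.0):
    """Minimum panoramic frame-extraction rate for a moving vehicle.

    Assumes one frame is needed at least every `max_baseline_m` metres
    travelled (hypothetical parameter, not from the paper)."""
    speed_ms = speed_kmh / 3.6  # km/h -> m/s (50 km/h -> 13.89 m/s)
    return math.ceil(speed_ms / max_baseline_m)

# At the survey speed of 50 km/h, at least 14 frames per second are needed.
print(min_frames_per_second(50))  # -> 14
```

In practice, as noted above, lighting and image resolution forced an even denser extraction than this lower bound.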
At the end of the processing described above, the frames were imported and the Ground Control Points were placed. These markers, manually inserted by the operator in as many images as possible, have the tasks of guaranteeing an absolute orientation of the cameras, ensuring a high number of homologous points between the different panoramas, and georeferencing the model. Finally, the dense clouds of the sections under investigation were processed. Several tests were carried out to significantly reduce the alignment and processing times: since this is a phase in which the software performs photogrammetric processes, it was decided to apply masks, mainly to the sky and the silhouette of the vehicle, in all the images processed.
Figures 5, 6 and 7 show views of the dense clouds, while Tables 1 and 2 contain the data obtained from the processing.
### Kaarta Stencil 2
Kaarta Stencil 2 (Figure 3c) is a stand-alone, lightweight SLAM instrument with an integrated system of mapping and real-time position estimation. In addition, it is a hand-held device of limited size, which allows quick and easy 3D mapping. It was chosen to survey the environment around the road. As for the photogrammetric survey, the road was covered both on foot and by car, with the laser scanner mounted on a small hand-held pole. Kaarta Stencil 2 depends on LiDAR and IMU data for localization. The system uses a Velodyne VLP-16 connected to a low-cost MEMS IMU and a processing computer for real-time mapping. The VLP-16 has a 360° horizontal field of view and a 30° vertical opening, with a band of 16 scan lines. The data acquisitions were captured using the Kaarta Stencil 2 default configuration parameters, set for use in structured outdoor environments. Specifically, these settings include default values (Table 3) for voxelSize, namely the resolution of the point cloud in the map file; cornerVoxelSize, surfVoxelSize and sorroundVoxelSize, which indicate the resolution of the point cloud for scan matching and display; and blindRadius, the minimum distance of the points to be used for the mapping.
The device can be connected to other sensors, such as a GNSS receiver. In this case study, it was connected to an external monitor with a wired USB connection. On the external monitor, the operator can watch and save the trajectory performed by the tracking camera integrated in the SLAM device.
In recent decades, the market entry of SLAM technology has been the subject of research in the geomatics field. Several publications deal with the integrated use of close-range photogrammetry, terrestrial laser scanning (TLS) and Kaarta Stencil 2 ([PERSON] et al., 2017; [PERSON] et al., 2018).
Table 1: Characteristics of the obtained point cloud of Via San Polo

| Images | Markers | N. dense points | GCP RMSE [m] |
|---|---|---|---|
| 647 | 51 | 81.097.246 | 0.67 |
Figure 5: The reference dense point cloud of Via San Polo

Figure 6: The reference dense point cloud of Via Sant'Eufemia

Figure 7: Some samples of advertising panels visible in the point cloud
Table 2: Characteristics of the obtained point cloud of Via Sant'Eufemia

| Images | Markers | N. dense points | GCP RMSE [m] |
|---|---|---|---|
| 8782 | 45 | 288.292.744 | 0.34 |
### Existing information
As stated in the introduction, the management of urban information is currently carried out manually, with data acquisition campaigns performed by expert operators. This task is performed using an ESRI ArcPad mobile application. In the office, the basic data package is prepared with ArcMap and loaded on the pads before going out for the survey. The supporting data used within the application are cadastral shapefiles and urban mapping, which in turn are related to the parcel owners. The exported data generally use the WGS-84 reference system and are stored in a Personal Geodatabase (Microsoft Access file). On returning to the office, the survey data are downloaded, and Excel files are exported by type of tax and stored in a platform. The data acquired during the survey are:
- type of tax to be ascertained for each cadastral parcel (ICP posters/advertising, TOSAP driveways, PGIP implant plan), referred to each registered point;
- photographic image referring to each collected point, as mentioned above.
Figure 8 shows the interface of the Company's current software during the field survey.
## 5 Results
### Comparison of the results
At the end of the acquisition phase with the Kaarta Stencil 2, the information about the configuration settings, the estimated trajectory (Table 4) and the 3D point clouds (Figures 9, 10) are stored in a folder created automatically by the mobile laser scanner for every survey operation.
While the elaboration of the 3D point cloud is carried out by the mobile laser scanner itself, the post-processing phase deserves some explanation. Using the freeware version of CloudCompare (an open source project), it is possible to evaluate the distance of a model from a reference model. In this program, the data acquired with the spherical camera and with the LiDAR are compared.
The 3D data, saved in .ply (Polygon File Format), can be opened in CloudCompare, which provides these main functions: registration and alignment of point clouds, manual or automatic cleaning, and segmentation of the study area. By exploiting the verticality scalar field, the software computes the normals along the z axis. This operation allows the elimination of curved objects and noise, returning vertical surfaces that are flat and well defined.
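As an illustration of this vertical-surface filtering (a simplified numpy sketch, not CloudCompare's actual implementation), points whose unit normals are nearly horizontal — i.e. high verticality — belong to flat vertical surfaces such as billboards, while ground points are rejected:

```python
import numpy as np

def filter_vertical_surfaces(points, normals, min_verticality=0.9):
    """Keep points lying on near-vertical surfaces.

    Verticality of a point is defined here as 1 - |n_z|, where n is the
    unit surface normal: ~1 on walls and billboards (n_z ~ 0), ~0 on the
    road surface (n_z ~ 1). `min_verticality` is a hypothetical threshold."""
    verticality = 1.0 - np.abs(np.asarray(normals, float)[:, 2])
    return np.asarray(points, float)[verticality >= min_verticality]
```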
The Kaarta Stencil 2 is not integrated with a GNSS system, which would otherwise ensure the georeferencing of the output data. Therefore the user must manually position at least three targets to ensure its correct positioning in space.
The only practical method is to select control points on the horizontal road surface, because its reflectance makes it clearly visible and recognizable to the human eye. A first solution was therefore to use the control points taken from the GNSS survey. Finally, the cloud was segmented by the user with a 2D polygon drawn on the object, in order to obtain a 3D model as close as possible to the outcome obtained by photogrammetry. With this procedure only the points inside the polygon are kept.
After that, the point clouds are imported into CloudCompare to evaluate the distance between them, using the C2C tool. To make the comparison, the point cloud from the Kaarta Stencil 2 was set as the reference cloud, to be compared with the cloud obtained by photogrammetry. To validate the result, at the end of the processing, a Gaussian curve has been exported, with a max
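In its simplest form, the C2C comparison computes, for every point of the compared cloud, the Euclidean distance to its nearest neighbour in the reference cloud; a brute-force numpy sketch of that metric (illustrative only — CloudCompare uses accelerated octree search, and the Gaussian curve mentioned above is a fit of the histogram of these distances):

```python
import numpy as np

def c2c_distances(compared, reference):
    """Nearest-neighbour cloud-to-cloud distances.

    Returns one distance per point of `compared`: the distance to its
    closest point in `reference`. Brute force, O(n*m); fine for small
    clouds, not for the millions of points of the case study."""
    compared = np.asarray(compared, float)
    reference = np.asarray(reference, float)
    diff = compared[:, None, :] - reference[None, :, :]
    return np.min(np.linalg.norm(diff, axis=2), axis=1)
```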
Table 4: Characteristics of the real-time solutions of Kaarta Stencil 2

| Location | Acquisition time [s] | Trajectory: N. points | Trajectory: Length [m] | Point cloud: N. points |
|---|---|---|---|---|
| Via San Polo | 1525 | 2682 | 1058 | 174.663.563 |
| Via Sant'Eufemia | 566 | 2682 | 922 | 65.356.512 |
Figure 8: Manual survey performed by the user based on the cadastral platform

Figure 9: 3D point cloud by Kaarta Stencil 2 obtained surveying Via San Polo

Figure 10: 3D point cloud by Kaarta Stencil 2 obtained surveying Via Sant'Eufemia
### Data management framework
Although the results of the previous steps are effective, the lack of interoperability between the processed data and the existing repositories hampers their real exploitation. To overcome this limitation, an integration among the systems has been developed. The WebGIS interface offers various spatial and topographic interaction tools, such as distance, area, volume, angle and coordinate measurement. In this platform it is possible to explore the point cloud thanks to pan, tilt and zoom functions. A useful feature is the annotation command, which allows the selection of different levels of point cloud data. Finally, a link command is provided to access the GIS platform, allowing the user to query additional data. The file is georeferenced and connected to a spatial database such as PostgreSQL/PostGIS; the data collected in this way are aggregated, periodically exported, reprocessed and made available through an open source WebGIS platform such as Lizmap. The architecture of the data management is depicted in Figure 18.
## 6 Conclusion
This work stands as a guide for the management of public assets, specifically for tax collection; it is a model that integrates different systems such as SP, information technology and data communication to meet the needs of local authorities. Moreover, these features are the basis for the creation and operation of the Smart City. The development of this new methodology can bring benefits and have a positive impact in terms of time, costs for public finance, speed of data acquisition on the territories examined and, especially, the small number of human resources employed in the work.
The framework described in this article is promising. The Nikon KeyMission 360 has proved to be suitable for metric reconstruction even though it is an inexpensive tool; the precision values and the distances between clouds described in paragraph 5.1 are comparable with those of a three-dimensional model obtained with Mobile Mapping System technology. There is no doubt that this system is a good way to acquire large amounts of data without losing accuracy. The visualization of large point cloud data in web viewers such as Potree completely replaces the operations that were previously performed on site, as described in paragraph 5.2. The goal is to create an atlas for the tax management of the cities in which the Company obtains the assignment.
Regardless of the quality of the point cloud, this research opens the way to the use of fast, easy and universal tools, because even staff who are not highly qualified can easily and rigorously survey entire neighborhoods and urban environments. The aim is to document municipal areas and query three-dimensional models to respond to citizens' needs.
## References
* [PERSON] et al. (2014). GIS Applications for Building 3D Campus, Utilities and Implementation Mapping Aspects for University Planning Purposes. Journal of Civil Engineering and Architecture, 8, 19-28.
* [PERSON] et al. (2018). On a Novel 360° Panoramic Stereo Mobile Mapping System. Photogrammetric Engineering and Remote Sensing, 84(6), 347-356.
* A Review of Developments and Future Opportunities. doi:10.14236/ewic/EVA2017.2.
* [PERSON] et al. (2019). An Integrated Approach to 3D Web Visualization of Cultural Heritage Heterogeneous Datasets. Remote Sensing, 11, 2508.
* [PERSON] et al. (2013). CityGML Model: smart cities e catasto 3D. Atti 17a Conferenza Nazionale ASITA, 275-280.
* [PERSON] et al. (2019). New Perspectives on the Sanctuary of Aesculapius in Nora (Sardinia): From Photogrammetry to Visualizing and Querying Tools. Open Archaeology, 5, 263-273.
* [PERSON] et al. (2019). Integrated management and visualization of static and dynamic properties of semantic 3D city models. International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences.
* [PERSON] (2012). La fotogrammetria sferica: una nuova tecnica per il rilievo dei vicini. Archeomatica.
* [PERSON] (2019). Aleppo - before and after. International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences.
* [PERSON] et al. (2019). Virtual tours for Smart Cities: a comparative photogrammetric approach for locating hot-spots in spherical panoramas. International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences.

Figure 17: Display of a georeferenced spherical image using the Photo Sphere JavaScript library

Figure 18: Spatial Information System Infrastructure
* Goncalves & Almeida (2016). 3D-GIS Heritage City Model: Case Study of the Historical City of Leiria.
* [PERSON] et al. (2016). Developing Smart Cities: An Integrated Framework. Procedia Computer Science, 93, 902-909. doi:10.1016/j.procs.2016.07.258.
* [PERSON] et al. (2016). The significance of digital data systems for smart city policy. Socio-Economic Planning Sciences. doi:10.1016/j.seps.2016.10.001.
* [PERSON] et al. (2014). Photogrammetric Applications of Immersive Video Cameras. ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences, II-5, 211-218. doi:10.5194/isprsannals-II-5-211-2014.
* [PERSON] et al. (2017). Comparison of the Selected State-Of-The-Art 3D Indoor Scanning and Point Cloud Generation Methods. Remote Sensing, 9, 796. doi:10.3390/rs9080796.
* [PERSON] et al. (2015). XEarth: A 3D GIS platform for managing massive city information. 1-6. doi:10.1109/CVEMSA.2015.7158625.
* [PERSON] et al. (2019). An Illegal Billboard Advertisement Detection Framework Based on Machine Learning. ICBDT 2019: Proceedings of the 2nd International Conference on Big Data Technologies, 159-164. doi:10.1145/3358528.3358549.
* [PERSON] & Tecklenburg (2004). 3-D object reconstruction from multiple-station panorama imagery. International Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences, 34.
* [PERSON] et al. (2015). Applications of big data to smart cities. Journal of Internet Services and Applications, 6. doi:10.1186/s13174-015-0041-5.
* [PERSON] et al. (2018). Public Management Focused to the Smart City. International Journal of Advanced Engineering Research and Science, 5, 181-187. doi:10.22161/ijaers.5.4.27.
* [PERSON] et al. (2014). Services Oriented Smart City Platform Based On 3D City Model Visualization. ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences, II-4. doi:10.5194/isprsannals-II-4-59-2014.
* [PERSON] (1974/1993). An Introduction to Town and Country Planning. UCL Press.
* [PERSON] (2002). Knowledge management in education: Enhancing learning & education. Psychology Press.
* [PERSON] et al. (2020). Advances in smart roads for future smart cities. Proceedings of the Royal Society A, 476(2233), 20190439.
* [PERSON] et al. (2018). Examination of Indoor Mobile Mapping Systems in a Diversified Internal/External Test Field. Applied Sciences, 8, 401. doi:10.3390/app8030401.
---
COST EFFECTIVE SPHERICAL PHOTOGRAMMETRY: A NOVEL FRAMEWORK FOR THE SMART MANAGEMENT OF COMPLEX URBAN ENVIRONMENTS
S. Chiappini, A. Fini, E. S. Malinverni, E. Frontoni, G. Racioppi, R. Pierdicca
https://doi.org/10.5194/isprs-archives-xliii-b4-2020-441-2020 | 2020 | CC-BY
---
# Digital Modelling and Analysis of Masonry Vaults
[PERSON], [PERSON], [PERSON]
1 Dept. of Civil and Environmental Engineering, Politecnico di Milano, Piazza Leonardo da Vinci, Milan, Italy - (grigor.angeliu, giuliana.cardani, dario.coronelli)@polimi.it
###### Abstract
The focus of this paper is to discuss the peculiarities of the digital modelling of masonry cross vaults. The proposed methodological approach for a multidisciplinary study of masonry vaults covers: geometrical survey, crack pattern, geometrical modelling, and safety assessment based on structural analysis.
Automatic reconstruction procedures recently proposed in the literature for ribbed masonry vaults are used to overcome time-consuming modelling tasks. The created digital models take into consideration the different aspects of the three-dimensional geometry, the internal divisions with variations of material properties in different parts, and a suitable discretization for finite element analysis. Moreover, the same model can be used in BIM (Building Information Modelling), judged a suitable environment in which to combine different aspects of restoration works such as documentation, intervention design, and the data system in a unique model. The same model is then used to perform finite element analysis. Each of these aspects is clarified with examples of different vault typologies coming from case studies such as the church of St. Bassiano in Pizzighettone (Cremona) or the notable Milan Cathedral.
Masonry, Cross-vault, Point-cloud, Building Information Modelling, Finite Element Analysis, Structural assessment. +
Footnote †: This contribution has been peer-reviewed.
## 1 Introduction
Nowadays, digital models are a common requirement in restoration projects; in order to be an integral part of the design of a conservation project, they need to include many aspects of the object under study: the actual three-dimensional configuration, the design of technical interventions, the design of the monitoring system, the structural assessment, the planning of the maintenance, etc.
The technical difficulties in creating digital models are particularly high for valued masonry buildings, due to the need to model complex geometry, construction techniques, materials, and different reconstructions over time. Moreover, different typologies are found in practice, i.e. quadripartite and sexpartite vaults, with very different shapes in plan (triangular, trapezoidal, square, rectangular) depending on the epoch in which they were constructed and their function in the given building ([PERSON], 1972; [PERSON], 1981). Digital models of masonry vaults must include the main members: tas-de-charge, arch, rib, web, crown and rubble fill.
Point cloud data (derived from photogrammetry or laser scanning) provide important information on the metrics required for geometric modelling, but also for the documentation of the crack pattern. Nevertheless, point cloud data cannot be used in a straightforward manner to obtain the masonry vault digital model: thickening the point cloud surface in the normal direction (usually from the intrados) does not reproduce the real geometry, with self-intersections of solids appearing in corner regions near the ribs and arches.
Accurate models of the masonry vault should include these known details with solid elements, without reducing a complex three-dimensional system to two-dimensional elements (e.g. the common shell modelling approach). Such models are then suitable for multipurpose use: documentation, design of technical interventions, or structural assessment of masonry cross vaults.
Digital models must be suitable for use in different software, deepening the interoperability issue, in order to meet the needs of different users working in the AEC Industry e.g. CAD, BIM or FEM ([PERSON] and [PERSON], 2013). In particular BIM (Building Information Modelling) is judged as a suitable platform to combine different aspects of the restoration works such as: documentation, intervention design, and the data control system in a unique model ([PERSON] et al., 2018; [PERSON] et al., 2013; [PERSON] and [PERSON], 2016; [PERSON] et al., 2018; [PERSON] et al., 2014).
In this paper a particular focus is directed at the integrated use of geometric survey, geometric modelling for documentation, and structural analysis of masonry cross vaults. The procedure is illustrated taking as case studies the vaults in the nave of the church of St. Bassiano in Pizzighettone (Cremona) and in Milan Cathedral. Geometric data required for modelling are extracted from the photogrammetric point cloud survey of each case study. The digital models of the masonry vaults are created using an automatic reconstruction procedure proposed recently in the literature for ribbed masonry vaults ([PERSON] et al., 2019a). It successfully overcomes time-consuming modelling tasks as well as interoperability issues between geometric modelling (CAD or BIM) and structural analysis software. An important aspect discussed is the combination of in situ observations and modelling. Finally, the aspects of structural assessment and interpretation of damage observations are discussed for the church of St. Bassiano, where a diffused system of cracks is visible in the vaulting system.
## 2 Issues on Digital Survey
Metric surveys of cross vaults are difficult to carry out with simple tools; more sophisticated ones, such as photogrammetry or laser scanning, are therefore judged more appropriate.
The two case studies considered throughout this paper analyse the vaults in the nave of Milan Cathedral and of the church of St. Bassiano (Figure 1a, b). They are of the same period and typology, but with different proportions. In Milan Cathedral the nave vaults are quadripartite with dimensions 9.9 m x 19.2 m ([PERSON] et al., 2015), while in the church of St. Bassiano the quadripartite vaults are of smaller dimensions, 4.4 m x 7.1 m ([PERSON] and [PERSON], 2017).
The geometric survey was carried out with digital photogrammetry. A part of the point cloud, considering only the vaults in the nave, is shown in Figure 2. The point cloud should be detailed enough to capture the geometric details of the surveyed ribbed vault that are relevant for a structural study. The present point clouds describe the visible surface of the vault (intrados) and contain around 5000-10000 points per vault. In most cases, measurements include only the intrados, due to the difficulty of physically accessing the extrados. In other cases, where it is accessible, extra work (and extra cost) is needed to connect the two sets of measurements.
Point cloud data (derived from photogrammetry, laser scanning or lidar sensors) provide the metric information required for geometric modelling. Moreover, a point cloud that is denser in the regions where geometric features (e.g. arch, rib) are to be extracted is an advantage and increases the accuracy of the computations.
The identification of geometric shapes is carried out by fitting the equation of a circular or elliptic arc to a subset of points describing the object of interest (e.g. arch, rib, web, etc.) ([PERSON] et al., 2019b). From a mathematical point of view the number of points in the selected subset is usually more than 3, hence we expect an overdetermined system of linear equations:
\[\mathbf{Cu}=\mathbf{r} \tag{1}\]
where:
C - coefficient matrix
\(\mathbf{u}\) - vector of unknowns
\(\mathbf{r}\) - residual
The system is solved for \(\mathbf{u}\) to minimize \(\left\|\mathbf{r}\right\|\):

\[\left\|\mathbf{Cu}\right\|=\min,\quad\text{subject to}\quad\left\|\mathbf{u}\right\|=1 \tag{2}\]
The system can be solved iteratively with the Gauss-Newton method. It is then possible to identify the geometric shape to be used for modelling the arch. An example of identifying the geometric shape (circular or elliptic), radii and centre point using the least-squares minimisation procedure is shown in Figure 4.
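Eq. (2) can also be solved in closed form: stack one row of C per point and take the right singular vector associated with the smallest singular value. A minimal sketch for the circular case is given below (the elliptic case only changes the columns of C); the function name is ours, and the SVD route is one standard alternative to the Gauss-Newton iteration mentioned above.

```python
import numpy as np

def fit_circle(points):
    """Algebraic circle fit: minimise ||C u|| subject to ||u|| = 1 via SVD.

    Rows of C are [x^2 + y^2, x, y, 1]; the solution u = (a, b, c, d)
    represents the circle a(x^2 + y^2) + bx + cy + d = 0.
    """
    x, y = points[:, 0], points[:, 1]
    C = np.column_stack([x**2 + y**2, x, y, np.ones_like(x)])
    _, _, Vt = np.linalg.svd(C)
    a, b, c, d = Vt[-1]                      # right singular vector of smallest sigma
    cx, cy = -b / (2 * a), -c / (2 * a)      # centre
    r = np.sqrt(cx**2 + cy**2 - d / a)       # radius
    return np.array([cx, cy]), r

# points sampled on an arc of centre (2, 3), radius 5
t = np.linspace(0.2, 2.0, 50)
pts = np.column_stack([2 + 5 * np.cos(t), 3 + 5 * np.sin(t)])
centre, radius = fit_circle(pts)
print(centre, radius)  # ~[2. 3.], ~5.0
```

Because the arc need not span a full circle, this kind of algebraic fit is well suited to the partial rib and arch profiles extracted from the intrados.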
In a more general case, e.g. a multi-centred arch, a piecewise approximation is required. Although slightly more elaborate, it remains in principle the same least-squares problem applied segment by segment.
Within a structural context, more information is necessary regarding the internal construction of masonry vaults, which unfortunately cannot be provided by point cloud data.
Figure 1: View of masonry vaults in the central nave: a) Church of St. Bassiano, b) Milan Cathedral

Figure 3: Milan Cathedral: Point cloud of the vaults in the nave
Figure 2: St. Bassiano Church: Point cloud of vault in the nave

Therefore, the correlation of the digital survey (described up to this point) and direct in-situ observations (constructive details, non-destructive investigations) within the same model becomes important. In other cases, historic data (literature and archival data, technical reports) can be used to describe the construction of a known vault typology.
The details of the in-situ inspection of the vaults in the nave of the church of St. Bassiano are shown in Figure 5. The construction details of the vaults are visible only from the extrados (Figure 5a, b) or after the removal of plaster (Figure 5c). The ribs are made of brick and covered with a thick layer of plaster (Figure 5c). The arch in the arcade is observed to be made of bricks oriented radially, while the web is made of masonry, with the brick courses oriented parallel to the main direction of the vault arches (Figure 5a, b).
In contrast, in Milan Cathedral the pointed arches are made of stone voussoirs and the cross-section height is 0.57 m, while for the rib the cross-section height is 0.52 m. The web is made of brick masonry, 0.38 m thick. Hence the role of the ribs is important in the structural response of the system, as will be shown later in this article.
## 3 Digital Modelling
As argued in the introduction, the point cloud data cannot be used in a straightforward manner to obtain a geometric model of the masonry vault that correctly represents the original element (which is made of different parts) and is also usable for structural analysis applications. Moreover, thanks to the high level of digital technology available, detailed three-dimensional models can be obtained, with accuracy tailored to the objectives of the analysis. Within this context, modelling the cross vaults is investigated in:
- BIM environment
- FEM environment
The Autodesk Revit software is adopted to investigate the BIM environment, while Simulia Abaqus software is chosen to set up the structural modelling with the finite element method. Compared to a traditional CAD model, BIM models have geometry modelling requirements more similar to FEM models, e.g. intersection and overlapping of parts (components) are not allowed. This creates the possibility of using the same geometric model for documentation and structural analysis.
Automatic reconstruction procedures recently proposed in the literature for ribbed masonry vaults, based on parametric modelling, are used to model the two vaults studied here ([PERSON] et al., 2019). The generated models for the two case studies include all the typical members of a ribbed masonry vault created with solid elements: arch, rib, web, rubble-fill, and nodal zones (tas-de-charge) ([PERSON] et al., 2019). They can be used to create cross vault models in both BIM and FEM.
In Revit the created model is imported as a Revit Family Component, through an ACIS file (SAT file extension). The created family can then be loaded within a specific Revit Template (Architecture, Construction, Structural, etc.). An example of creating a family component of the masonry vault in the church of St. Bassiano and using it within a BIM construction template is shown in Figure 6. The same model can then also be used within a structural analysis software. Figure 7 shows the internal parts of the digital model of the ribbed vaults in the nave of Milan Cathedral imported in Abaqus software and meshed with tetrahedral elements. The model includes the stone skeleton comprising the arches and ribs, the masonry web which typically lies over the arch and rib skeleton, and the rubble-infill. Figure 8 shows the second model (cross vault in the nave of the church of St. Bassiano) after discretization with hexahedral finite elements in Abaqus.
Finally, the created models through the adopted procedure based on parametric modelling take into consideration the complex three-dimensional geometry with the internal divisions between structural parts, and an optimal discretization for finite element structural simulations with hexahedral or tetrahedral elements.
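The parametric idea can be illustrated by generating arch and rib centrelines for a quadripartite vault over a rectangular bay. The sketch below assumes semicircular (round) arcs for the diagonal ribs, which is an illustrative simplification rather than the reconstruction procedure of the cited literature; the bay dimensions are those of the St. Bassiano nave quoted earlier (4.4 m x 7.1 m).

```python
import numpy as np

def semicircular_arc(p0, p1, n=51):
    """3D centreline of a semicircular arc spanning springing points p0 -> p1.

    The arc lies in the vertical plane through p0 and p1; its rise equals
    half the span (a round arch).
    """
    p0, p1 = np.asarray(p0, float), np.asarray(p1, float)
    mid = (p0 + p1) / 2
    span = np.linalg.norm(p1 - p0)
    span_dir = (p1 - p0) / span
    r = span / 2
    t = np.linspace(np.pi, 0, n)  # parameter from p0 round to p1
    return mid + np.outer(r * np.cos(t), span_dir) + np.outer(r * np.sin(t), [0, 0, 1])

a, b = 4.4, 7.1                              # bay of the St. Bassiano nave (m)
corners = [(0, 0, 0), (a, 0, 0), (a, b, 0), (0, b, 0)]
rib1 = semicircular_arc(corners[0], corners[2])   # diagonal ribs
rib2 = semicircular_arc(corners[1], corners[3])
crown = rib1[:, 2].max()                     # keystone height = half the diagonal
print(f"crown height ~ {crown:.2f} m")       # ~4.18 m
```

From such centrelines, sweeping the rib and arch cross-sections produces the solid members that the automatic procedure assembles into the full vault model.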
## 4 Observed Damage and Structural Assessment of Masonry Vaults
Typical damage observed in masonry cross vaults usually arouses concerns about the safety of the structural system. Interpretation based only on personal experience must be avoided, considering the level of technology and data now widely available and the very different masonry vault typologies present in historical constructions.
In order to develop more effective and minimal repair measures, [PERSON] (1993) discussed the need to understand the crack pattern observed in historical buildings by proposing a comparison with numerical simulation. Further understanding of the structural behaviour could be developed through experimental investigation. [PERSON] et al. (2019) highlighted the importance of cyclic vertical support settlements for the structural response of cross vaults.
The intensity of the observed damage in the two masonry vaults analysed here is different. In the vault of the central nave of Milan Cathedral no serious damage is observed, with the exception of particular areas characterised by a specific structural configuration (e.g. the tiburio or the apse), which are not the object of the present study. The remaining visible damage is related to humidity rather than structural actions. In contrast, in the vault of the central nave of the church of St. Bassiano, a diffused crack pattern is observed (Figure 9), with a nearly continuous crack in the longitudinal direction. It starts close to the facade and continues along the nave until the apse ([PERSON] and [PERSON], 2017). Here, other cracks documented in the foundations (e.g. close to the apse) reinforce the idea that soil settlements are an important possible cause, confirming the differential settlement hypothesis ([PERSON], 2018). Another reason for the present crack pattern is found in historical information, which reports a change of load in the piers and supplementary settlements caused by the addition of the masonry vaults (and other smaller structures) centuries after the construction of the church ([PERSON] and [PERSON], 2017).

Structural assessment by simplified methods may fail to give an interpretation of the observed damage, due to an excessive level of simplification of reality. On the other hand, complex structural models have the benefit of being able to predict most of the observed damage, and hence supply an interpretation of the damage observations.
In the following, two detailed finite element structural models of the two vaults are studied under self-weight. Furthermore, the case of St. Bassiano is also studied under different possible support settlements. In both models, the iron tie rods are not inserted explicitly, with the aim of studying the structure under self-weight and settlement effects alone. Further details on the behaviour of structures with iron tie rods can be found in ([PERSON] et al., 2018; [PERSON] and [PERSON], 2019). In the simulations, the masonry web, the stone arch and the rubble infill are modelled with a non-linear plastic damage constitutive model according to the description of ([PERSON] and [PERSON], 1998; [PERSON] et al., 1989).

Figure 7: Vault in Milan Cathedral: Mesh with tetrahedral elements for numerical simulations

Figure 8: Vaults in the nave of the church of St. Bassiano: Mesh with hexahedral elements for numerical simulations

Figure 9: Documentation of the crack pattern in the nave of the church of St. Bassiano within BIM environment
The adopted material parameters are summarised below; the rubble infill is basically a material with scarce properties.

| Material | E (MPa) | f\(_c\) (MPa) | G\(_c\) (Nmm/mm\({}^{2}\)) | f\(_t\) (MPa) | G\(_{ft}\) (Nmm/mm\({}^{2}\)) | Weight (kN/m\({}^{3}\)) |
|---|---|---|---|---|---|---|
| Stone masonry (arch and rib, Milan Cathedral) | 8000 | 7 | 1.5 | 0.35 | 0.02 | 22 |
| Brick masonry (web of both case studies; arch and rib, St. Bassiano) | 2000 | 5 | 1.2 | 0.25 | 0.012 | 18 |
| Rubble infill | 800 | 2 | 0.5 | 0.15 | 0.01 | 16 |
The proposed methodology is shown to be efficient for studying the load flow in masonry cross vaults. In particular, the results demonstrate no development of damage under self-weight alone; hence the observed damage must be attributed to other causes.
In the case of the vault of Milan Cathedral the computed horizontal thrust is H = 268 kN, while the vertical force is V = 1726 kN for each support. In the case of the vault in the church of St. Bassiano the horizontal thrust is H = 52 kN, while the vertical reaction is V = 132 kN. In the latter case, similar results were also calculated using the limit analysis solution for the masonry vaults in ([PERSON], 2018).
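The reported reactions can be condensed into a thrust inclination at the supports, which quantifies how differently the two vaults push on their piers. This is simple arithmetic on the H and V values above, not a result from the cited analyses.

```python
import math

def thrust_angle_deg(H, V):
    """Inclination of the resultant support reaction from the vertical (degrees)."""
    return math.degrees(math.atan2(H, V))

milan = thrust_angle_deg(268, 1726)     # kN values reported for Milan Cathedral
bassiano = thrust_angle_deg(52, 132)    # kN values reported for St. Bassiano
print(f"Milan: {milan:.1f} deg, St. Bassiano: {bassiano:.1f} deg")
```

The St. Bassiano reaction is inclined much further from the vertical (roughly 21 deg versus 9 deg), i.e. its horizontal thrust is proportionally larger relative to the vertical load.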
The investigation of the load flow through the principal compressive stresses shows a very regular flow of forces pointing at the tas-de-charge in the case of St. Bassiano (Figure 10). Moreover, in the rib area there is no concentration of stresses such as is usually found in other ribbed masonry vaults ([PERSON] et al., 2019a). Conversely, in the Milan Cathedral vault, the load flow shows a mixed tendency to be directed in part to the tas-de-charge and in part to the ribs and arches (Figure 11). The difference in load flow between the two cases is related to the considerable difference in rib section height (almost 5 times).
Another aspect to be investigated is the effect of settlements on the observed crack pattern. The case of St. Bassiano is investigated under different support settlements, in order to understand the actual causes of the observed crack pattern:
a) Vertical settlement over one support
b) Vertical settlement applied in two supports at different stages of time.
c) Vertical settlement of all the supports on one side
d) Horizontal settlement of all the supports on one side
Figure 10: St. Bassiano Church: Vector plot of the compressive principal stresses due to self-weight (without settlement)

Figure 11: Milan Cathedral: Vector plot of the compressive principal stresses due to self-weight (without settlement)

Figure 12: Damage simulation due to single vertical support settlement: a) intrados view, b) extrados view
In all the analyses the considered settlement is 1 cm, in order to investigate its effects on the crack pattern produced in the vaulting system of the church of St. Bassiano. The analysis results first of all provide an interpretation of the causes of damage (Figure 12 - Figure 15). They show that the documented crack pattern (Figure 9) is not the result of a single support settlement, but of a combination of different factors.
Interestingly, the simulations indicate that the crack pattern should also be widely diffused in the extrados, although unfortunately no documentation of it is available at present. The cracks simulated in the extrados also find confirmation in experimental studies of masonry cross vaults ([PERSON] et al., 2019).
## 5 Conclusions
The methodological approach described in this paper includes survey, documentation, digital modelling and structural assessment. It was shown to be necessary to combine different ingredients in the digital modelling process of masonry vaults: point cloud measurements, in-situ investigations and the modelling of the geometry with solid elements.
The accurate geometric survey of masonry cross vaults is important for understanding and modelling their structural configuration. In-situ observation and diagnostic investigations help to better understand the constructive details and the possible causes of the observed damage.
The masonry vault models created here through parametric modelling were successfully used for geometric documentation in BIM as well as for finite element structural analysis. The numerical simulations with solid elements developed here, approximating reality as closely as possible, help to understand the structural behaviour of complex masonry vaults, under self-weight conditions as well as under differential soil settlements.
The two analysed case studies clearly show how the technology of construction is reflected in the structural response: ribbed masonry vaults with stiff stone ribs in Milan Cathedral and flexible brick ribs in the case of St. Bassiano church. The simulations clearly mark the advantage of developing complex models with solid elements. Compared to the shell approach (widespread in the literature), models with solid elements permit the study of zones of intersection between different structural elements: rib, arch, web, wall, infill. Furthermore, it is possible to investigate the stress transfer between different elements as well as the damage evolving within the element thickness. This modelling technique also allows modelling and studying the effectiveness of strengthening interventions, which in most cases are applied at the extrados (in order to be invisible or not to damage possible frescoes on the intrados).
Taking advantage also of the automatic structural modelling procedure and the created detailed models, future developments could include the investigation of the structural response of different typologies of masonry vaults combining survey, damage observation and numerical modelling.
## Acknowledgements
The authors are grateful to the Veneranda Fabbrica del Duomo di Milano, for making it possible to work in the Cathedral of Milan. We also thank Don. [PERSON], Don. [PERSON] and Ing. [PERSON] for access and support with data during the study of the Church of St. Bassiano in Pizzighettone (Cremona).
## References

* automated creation of accurate FEM meshes of heritage masonry walls from point cloud data, International Conference on Structural Analysis of Historical Constructions edition:11, Peru.
* [PERSON] and [PERSON] (2019) [PERSON], [PERSON], 2019. A Multidisciplinary Strategy for the Inspection of Historical Metallic Tie-Rods: The Milan Cathedral Case Study. International Journal of Architectural Heritage, 1-19.
* [PERSON] et al. (2013) [PERSON], [PERSON], [PERSON] [PERSON], [PERSON] [PERSON], [PERSON] [PERSON], [PERSON] [PERSON], [PERSON] [PERSON], [PERSON], 2013. Combined Geometric and Thermal Analysis from UAV Platforms for Archaeological Heritage Documentation. ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences II-5/WI, 49-54.
* [PERSON] and [PERSON] (2016) [PERSON], [PERSON] [PERSON], 2016. Challenges from building information modeling to finite element analysis of existing buildings, 10 th International Conference on Structural Analysis of Historical Constructions. CRC Press, Leuven, Belgium, p. 120.
* [PERSON] and [PERSON] (2017) [PERSON], [PERSON], 2017. When the strengthening of historic masonry buildings should be carried out in different phases: the structural reinforcement and monitoring of the Lombard-Romanesque church of Saint Bassiano, in Pizzighettone (CR), Italy, PROHITEC'17-3 rd International Conference on Protection of Historical Constructions. IST Press, pp. 1-12.
* [PERSON] et al. (2015) [PERSON], [PERSON], [PERSON], 2015. The Cathedral of Milan: the structural history of the load-bearing system. International Journal of Architectural Heritage.
* [PERSON] and [PERSON] (2013) [PERSON], [PERSON], 2013. BIM for cultural heritage. International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences 5, 225-229.
* [PERSON] (1981) [PERSON], 1981. The construction of Gothic cathedrals: a study of medieval vault erection. University of Chicago Press.
* [PERSON] and [PERSON] (1998) [PERSON], [PERSON], 1998. Plastic-damage model for cyclic loading of concrete structures. J Eng Mech-Asec 124, 892-900.
* [PERSON] et al. (1989) [PERSON] [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], 1989. A Plastic-Damage Model for Concrete. International Journal of Solids and Structures 25, 299-326.
* [PERSON] et al. (2018) [PERSON], [PERSON], [PERSON], 2018. Simple limit analysis approach for the optimal strengthening of existing masonry towers, AIP Conference Proceedings. AIP Publishing, p. 450007.
* [PERSON] et al. (2019) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], 2019. A full-scale timbrel cross vault subjected to vertical cyclical displacements in one of its supports. Engineering Structures 183, 791-804.
* [PERSON] et al. (2014) [PERSON], [PERSON], [PERSON], 2014. Building Information Modeling (BIM) for existing buildings -- Literature review and future needs. Automation in Construction 38, 109-127.
---
DIGITAL MODELLING AND ANALYSIS OF MASONRY VAULTS
G. Angjeliu, G. Cardani, D. Coronelli
https://doi.org/10.5194/isprs-archives-xlii-2-w11-83-2019, 2019, CC-BY
---
# Flat versus hemispherical dome ports in underwater photogrammetry
[PERSON]
[PERSON]. [PERSON]. [PERSON]
3D Optical Metrology unit, Bruno Kessler Foundation (FBK)
Email (flemema, nocerino, remondino)@fbk.eu, Web: [http://3dom.fbk.eu](http://3dom.fbk.eu)
###### Abstract
Underwater photogrammetry, like its counterpart in air, has gained an increasing diffusion thanks to the availability of easy-to-use, fast and often quite inexpensive software applications. Moreover, underwater equipment that allows digital cameras normally designed to work in air to be used also in water is largely available. However, for assuring accurate and reliable 3D modelling results, a profound knowledge of the employed devices as well as of the physical and geometric principles is even more crucial than in air. This study aims to take a step forward in understanding the effect of underwater ports placed in front of the photographic lens. In particular, the effect of dome and flat ports on image quality in 3D modelling applications is investigated. Experiments conducted on a semi-submerged industrial structure show that the tested flat port performs worse than the dome, providing higher image residuals and lower precision and accuracy in object space. A significantly different quality per colour channel is also observed and its influence on the achievable processing results is discussed.
Footnote †: This contribution has been peer-reviewed.
doi:10.5194/isprs-archives-XLII-2-W3-481-2017
Underwater, Photogrammetry, Camera calibration, Flat port, Hemispherical dome port, MTF
## 1 Introduction
### Image quality in underwater photogrammetry
Underwater photogrammetry based on consumer-grade photographic equipment has become very popular in the last few years. Underwater housings are available for a wide range of digital cameras, sometimes designed and sold by the camera manufacturers themselves, sometimes by third-party companies.
Thanks to the great availability and flexibility in configuration, photogrammetry experts and non-experts alike are proving the importance of these systems as tools for documenting the underwater environment in 3D through SfM photogrammetry in archaeology, biology, engineering, oceanography, etc.
Sport cameras such as the very popular GOPRO HERO as well as SLR cameras have been tested and calibrated using the two most popular approaches: rigorous ray tracing ([PERSON] and [PERSON], 2010) or self-calibrating bundle adjustment ([PERSON], 2015; [PERSON] et al., 2016). Up to now, the effect of optical aberrations in underwater photogrammetry has not been investigated. To the authors' knowledge, there are no papers that consider the diverse image quality across the sensor format, nor has the influence of a specific port setup on camera acquisition settings (depth of field, minimum focus distance, etc.) been investigated.
This paper investigates the effect of using dome or flat ports in underwater photogrammetry with respect to 3D modelling applications. In particular, the effect of the diverse image quality of flat and dome ports on the accuracy of the final 3D model is presented.
### Flat and dome ports
Photographic cameras normally used on land, above the water, need a special housing with a flat or dome port to be used in water. Looking at the underwater scene through a flat or dome port has many optical consequences, of which the best known is that the field of view of the lens mounted on the camera is preserved in the case of a dome port and reduced by a factor (almost equal to the refractive index) for the flat port. In general, this common rule is satisfied, but many other factors intervene in the optical formation of the image, some with very important practical implications that may make the choice of one type of port over another not as trivial as described above.
_Lenses used in photography _are designed to minimize _optic.a_ aberrations throughout the entire image format. Residual aberrations are always present and their amount is depending on the optical design, quality of glasses and therefore cost of the lens. Nowadays, even the cheap kit zoom bundled with consumer cameras, provide an acceptable image quality for less demanding photogrammetric purposes. Nevertheless, when used underwater behind flat and dome ports, image quality, even for the most expensive cameras, undergoes a quite visible degradation due to the modification of the entire optical design. Depending on the combination of the used lens and port (spherical dome or flat), the consequences on the overall image quality may be disappointing. Figure 1 depicts the main optical effects over the field of view with dome and flat ports._
The spherical surface of the dome has the effect that optical rays converging to the centre of the sphere do not change their direction, as they enter the surface of the dome perpendicularly. The main consequence of this phenomenon is that the field of view is kept unchanged also underwater. Nevertheless, if the centre of the dome is not aligned with the entrance pupil of the lens, modifications of the field of view and distortions may be introduced ([PERSON] et al., 2016; [PERSON] et al., 2016). Unfortunately, spherical domes have as their main drawback that they act as negative lenses forming a virtual image very close to the dome itself. The distance between the dome centre and the virtual image is approximately three times the radius of the sphere, meaning for example that an object at infinity would be projected at only 30 cm from the entrance pupil of the lens when using a 10 cm radius spherical port. If the camera lens were not able to focus at that close distance, the image would be blurred and thus unusable. The solutions to this problem are to use a bigger dome or to add a close-up dioptre lens to the front of the camera lens to reduce its minimum focus distance. Unfortunately, the cost of manufacturing spherical domes grows significantly with their radius, and some lenses, like for example fisheye lenses, cannot accept additional close-up dioptres. In these cases, a flat port must be used.

Figure 1: Field of view underwater with dome (left) and flat port (right).
The first two versions of the popular sport camera GOPRO used to have a spherical dome port mounted on the front of the lens of the underwater pressure housing (Figure 2, left-up). Because of the very small radius of the dome, the virtual image would project too close to the camera, far beyond the limits of the depth of field of the fixed-focus fisheye lens, thus resulting in blurred images ([https://gopro.com/support/articles/underwater-focus](https://gopro.com/support/articles/underwater-focus)).
Subsequently, GOPRO released an optional replacement underwater housing for the GOPRO HERO 1 and 2 with a flat port, and from version 3 onward only pressure housings with flat ports were released, which, at the cost of a reduced field of view, provide sharper images. Another undesired effect of spherical dome ports is that they make the images softer towards the corners ([PERSON] et al., 2016).
Flat ports have the advantage of much simpler and thus cost-effective manufacturing and are largely used in the pressure housings of sport cameras or very compact zoom lens cameras. One of the main drawbacks of flat ports is that the field of view is reduced by a factor approximately equal to the refractive index of the water. Additionally, the maximum field of view allowed by these ports is 96 degrees; therefore fisheye cameras like the GOPRO, when used behind a flat port, lose their extremely wide-angle feature (Fig. 2, right).
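Both effects described above follow from single-surface refraction: Snell's law for the flat port, and the spherical-surface imaging equation for the dome. A minimal sketch, assuming a water refractive index of about 1.33 (with n = 1.34 the critical-angle limit comes out closer to the 96 degrees quoted above); the function names are ours.

```python
import math

N_WATER = 1.33

def flat_port_fov(fov_air_deg, n=N_WATER):
    """In-water field of view behind a flat port.

    A scene ray at angle theta_w in water refracts to theta_a in air with
    sin(theta_a) = n * sin(theta_w), so the in-air half-FOV shrinks by
    theta_w = asin(sin(theta_a) / n). The limit (theta_a -> 90 deg) is the
    critical angle, ~96-97 deg full FOV depending on n.
    """
    half_air = math.radians(fov_air_deg / 2)
    return 2 * math.degrees(math.asin(math.sin(half_air) / n))

def dome_virtual_image(radius_m, n=N_WATER):
    """Distance of the virtual image of an object at infinity formed by a thin
    hemispherical dome (single refracting surface, water -> air): R / (n - 1)."""
    return radius_m / (n - 1)

print(flat_port_fov(84))          # ~84 deg lens in air -> ~60 deg behind a flat port
print(flat_port_fov(180))         # fisheye -> critical-angle limit, ~97.5 deg
print(dome_virtual_image(0.10))   # 10 cm dome -> ~0.30 m, as in the text
```

The dome figure reproduces the "three times the radius" rule of thumb quoted earlier, and the flat-port function shows why even a fisheye cannot exceed the critical-angle limit.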
Wide angle lenses used with flat ports show strong chromatic aberrations toward the corners of the image.
The effect of chromatic aberrations in close-range photogrammetry has been investigated mainly for high-precision photogrammetric applications, where separate image measurements for each colour channel can lead to a significant accuracy enhancement, by a factor of about 1.3 ([PERSON] et al., 2006). Other rigorous methods that consider a colour- or multi-spectral-dependent calibration and processing are discussed in ([PERSON] et al., 2016; [PERSON] et al., 2014; [PERSON] et al., 2012).
Optical aberrations in underwater photography are much more severe and difficult to correct than those generally seen for regular cameras above the water. If image quality is important in close-range photogrammetry, it is even more so for underwater photogrammetry, where correct framing of the subject can be difficult to achieve, generally leading to acquisitions that are not as well structured as those achievable above the water; thus any part of the image can be fundamental for an accurate reconstruction of the scene and for image interpretation purposes.
## 2 Experimental work
The application presented in this contribution is part of a wider project called OptiMMA (Optical Metrology for Maritime Applications, [http://3dom.fbk.eu/projects/underwater-photogrammetry-maritime-applications](http://3dom.fbk.eu/projects/underwater-photogrammetry-maritime-applications)).
### The modelled heritage structure
A semi-submerged industrial structure located in the Bay of Roggio near Livorno (Italy), today abandoned and under consideration for restoration (Fig. 3), was used as test site. The structure was used as a port and support for quarry and cement plant activities carried out nearby at the beginning of the 20th century. A combination of close-range photogrammetry above and under the water, according to the procedure described in [PERSON] et al. (2015, 2013), was chosen for modelling the structure.
Figure 3: The surveyed port structure in the Bay of Roggio near Livorno, Italy.
Figure 2: GOPRO HERO dome and flat ports. An early underwater housing with dome port for GOPRO HERO 1 and 2 cameras (left-up). An optional flat port replacement housing for GOPRO HERO 1 and 2 (left-down). A schematic view of the reduced field of view because of the flat port critical angle (right).
The underwater part of the structure was photographed twice, the first time using a dome port and the second time using a flat port mounted on the same pressure housing. For the whole survey the sky was overcast, limiting the effect of light ripples on the underwater structures. The aim is to investigate the effect of dome and flat ports on the 3D modelling results.
### Photographic equipment
A Nikon D750 24 Mpx full-frame camera mounting a Nikkor AF 24 mm f/2.8D wide-angle lens was put in a NiMAR NI3D750ZM pressure housing (Fig. 4). In order to guarantee the highest accuracy, each image acquisition was carried out with the focus fixed for the first image of the sequence. The distance to the object was kept constant through both visual references and ropes with marks. Between the different surveys, only the port was changed and the focus then adjusted. A Nikon SB700 strobe mounted in a dedicated NiMAR housing was used for the underwater calibrations.
### Underwater camera calibrations
Before carrying out the survey of the structure, preliminary calibrations using an in-house portable test field (Fig. 5) were performed to assess the optical quality of the photographic system and its potential accuracy when used with flat and dome ports. The portable test field was specifically designed by the authors for underwater calibrations; it measures 150x100 cm\({}^{2}\) and consists of three Dibond panels, each of 100x50 cm\({}^{2}\), fixed on an aluminium frame. Six plates stand at different heights from the main planar surface of the test field, providing a maximum depth of 20 cm. A total of 160 circular coded targets are regularly distributed over the test field; furthermore, the targets are designed with a black square background that allows MTF measurements. Other resolution wedges and colour checkerboards are also present.
The portable test field was laid down at a depth of about 5 m and photographed from an average distance of about 1.2 m for the dome port and 1.6 m for the flat port. The ground sample distance (GSD) was about 0.3 mm for both calibrations. An aperture of f/11 was chosen for both the flat and dome ports. About 30 images per port were collected using quite a standard self-calibration protocol with multi-view convergent images and roll diversity ([PERSON], 1997). The image acquisitions were carried out in sequence, the dome port first and the flat port after.
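The ~0.3 mm GSD for both ports is consistent with the acquisition geometry: the dome preserves the in-air focal length, while the flat port magnifies it by roughly the refractive index, which the longer flat-port distance compensates. A sketch of the check; the D750 sensor figures (35.9 mm width over 6016 px) are our assumption, not stated in the paper.

```python
def gsd_mm(distance_m, focal_mm, pixel_um, magnification=1.0):
    """Ground sample distance: pixel size projected onto the object plane.

    magnification covers the effective focal-length change behind a flat
    port (~ the refractive index of water); 1.0 for a well-centred dome.
    """
    return pixel_um * 1e-3 * (distance_m * 1000) / (focal_mm * magnification)

PIXEL_UM = 35.9 / 6016 * 1000   # assumed Nikon D750: 35.9 mm sensor width, 6016 px

print(gsd_mm(1.2, 24, PIXEL_UM))        # dome port, 1.2 m -> ~0.30 mm
print(gsd_mm(1.6, 24, PIXEL_UM, 1.33))  # flat port, 1.6 m -> ~0.30 mm
```

Both configurations land at essentially the same GSD, matching the "about 0.3 mm for both" reported above.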
As expected from the visual analysis of the acquired images, while the dome port kept the barrel distortion of the lens almost unchanged (Fig. 6a), the flat port introduced a heavy pincushion distortion (Fig. 6b). Furthermore, the image quality for the flat port was severely different between the centre (Fig. 6c) and the corners, showing severe chromatic aberrations (Fig. 6d) and a blurring astigmatism that differed among the red, green, and blue channels, with the blue channel behaving the worst (Fig. 7). In the successive bundle adjustments with self-calibration (Brown model formulation with radial and decentring distortions), the flat port also performed significantly worse than the dome port. The reports of the bundle adjustment highlighted a higher potential accuracy for the dome port with respect to the flat port (image observations from the green channel for both ports). Table 1 summarizes the results together with reference values for the same camera-lens system calibrated above the water in the laboratory of the 3DOM research unit.
The camera calibrations anticipated a minimum reduction of the potential accuracy in object space by a factor of about 2 with the dome port and 4 with the flat port.
### Targeting of the industrial structure
A rectangular basin measuring about 20x10 m\({}^{2}\), part of the industrial structure, was chosen to perform the comparative tests (Fig. 8).
Eight plates with eight coded targets each were placed across the waterline (Fig. 9). The coded targets served the twofold aim of: (i) allowing the registration of the underwater and above-the-water 3D models and (ii) providing well and uniquely defined 3D points for comparisons between the flat and dome port underwater surveys. The relative positions of the targets on the plates are known from laboratory calibration; thus, by measuring at least three non-collinear targets in the underwater or above-the-water photogrammetric surveys, the 3D coordinates of the remaining targets can be computed through a similarity transformation. By means of this procedure, common points between the underwater and above-the-water surveys can be derived and the two 3D models registered together. Some tape length measurements were carried out to scale the object.
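The transfer of the remaining target coordinates from three measured ones can be sketched as follows. This is a simplified rigid-body version (the isotropic scale of the full similarity transformation is omitted, since the plate geometry is known from laboratory calibration), and all helper names are illustrative:

```python
import math

def sub(a, b): return tuple(x - y for x, y in zip(a, b))
def add(a, b): return tuple(x + y for x, y in zip(a, b))
def dot(a, b): return sum(x * y for x, y in zip(a, b))
def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])
def unit(a):
    n = math.sqrt(dot(a, a))
    return tuple(x / n for x in a)

def frame(p1, p2, p3):
    """Orthonormal basis built from three non-collinear points."""
    e1 = unit(sub(p2, p1))
    v = sub(p3, p1)
    e2 = unit(sub(v, tuple(dot(v, e1) * x for x in e1)))
    return e1, e2, cross(e1, e2)

def transfer(local_pts, measured3):
    """Map all plate-local target coordinates into the survey frame,
    given the survey coordinates of the first three targets."""
    el = frame(*local_pts[:3])
    em = frame(*measured3)
    out = []
    for p in local_pts:
        d = sub(p, local_pts[0])
        coeffs = [dot(d, e) for e in el]   # coordinates in the local basis
        q = measured3[0]
        for c, e in zip(coeffs, em):
            q = add(q, tuple(c * x for x in e))
        out.append(q)
    return out
```

With the local target layout of one plate and three surveyed targets, `transfer` reproduces the survey coordinates of every remaining target on that plate.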
### Planning and acquisition of the underwater and above-the-water camera networks
Since the two ports behaved differently in the preliminary calibrations, an evident discrepancy between them was expected in elongated strips such as those usually acquired for surveying large objects in photogrammetry. Indeed, systematic residual errors not properly modelled by the camera calibration parameters are expected to accumulate along the strip, leading to a global object deformation, as seen in [PERSON] et al. (2014). Thus, the camera network planned to image the rectangular basin consisted of a single open-loop strip taken at a distance of about 2 m from the vertical walls for the dome port and 2.6 m for the flat port, to obtain for both ports a GSD of about 0.5 mm. An 80% overlap was adopted along the strip, and some convergent and rolled images were taken to improve the self-calibration (especially considering the geometric characteristics of the object, which appears flat within the field of view of the single images). The image acquisitions were carried out in sequence, the dome port first and the flat port after (Figure 10a-b). The maximum depth was 1.5 m, the water temperature was about 15 degrees, and the underwater image acquisition required about 3 hours in total. The part above the water was surveyed with the same camera without the pressure housing. The available side walking path was used to photograph the structure from the opposite side, leading to an average distance to the object of about 12 m (GSD about 3 mm). The same 80% overlap with rolled and convergent image acquisitions was used above the water.
### Image orientation and bundle adjustment with self-calibration
The three image datasets, two underwater and one above the water, were processed with the same procedure. A non-expert user scenario for basic 3D modelling purposes (e.g. a preliminary 3D investigation of the structure) was simulated. The images were automatically oriented using Agisoft Photoscan, with self-calibration of radial and decentring distortion parameters. The final camera networks for the dome and flat ports are shown in Figure 10 (c-d).
Figure 8: An aerial view of the port structure (left) and an enlarged sight of the rectangular basin chosen for the tests with a schematic view of the photogrammetric strip acquired (right).
Figure 7: The colour dependant astigmatic aberrations noticed with the flat port (red on the left, green in the centre, blue on the right)
Figure 9: Plates with coded targets used in the experiment.
From a preliminary comparative analysis of the bundle adjustment parameters retrieved by Photoscan for the two underwater datasets, the flat port again showed a less precise solution.
Systematic residual errors in image space are evident for both ports, but for the flat port they were higher in magnitude and especially concentrated at the left and right borders of the image format (Fig. 11).
A very important difference was observed in the self-calibration parameters. While for the dome port the standard deviation of the focal length was 0.4 μm, for the flat port it was 1.4 μm, more than three times worse. In general, the self-calibration parameters of the flat port were an order of magnitude worse than those computed for the dome port. Such a worse precision is expected to be a source of systematic errors that accumulate along the photogrammetric strip and propagate into the object space, leading to a stronger global deformation of the 3D model for the flat port. Therefore, as shown in [PERSON] et al. (2014), over the 70 meter linear perimeter of the underwater basin, the global deformations can reach some centimeters, even if the GSD was sub-millimetric.
For the above-the-water dataset, as expected, the precision of the calibration parameters was much higher, with a standard deviation for the focal length of 0.1 μm, more than ten times better than that of the flat port.
The three datasets were scaled using a combination of length measurements provided by the plates and some tape measurements. A maximum scaling error of about 0.2% was estimated from the residuals on the known reference lengths.
### Accuracy and 3D analysis in object space
A simple evaluation was carried out to assess the accuracy of the two underwater surveys. A reference tape-measured distance (estimated accuracy ca. 1 cm) was taken between the two plates facing each other at the entrance of the rectangular basin and compared with the distances obtained from the underwater surveys (Table 2). Since the two plates are at the beginning and end of the strip, the resulting discrepancy can be seen as a loop closure error. An error of about 30 cm was observed for the flat port.
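Relating the closure errors to the sub-millimetric strip GSD (~0.5 mm) puts them in perspective; the GSD multiples below are a back-of-the-envelope check on the tabulated values, not figures from the paper:

```python
REFERENCE = 4.723   # tape-measured distance between the two plates, m
GSD = 0.5e-3        # approximate strip ground sample distance, m

measured = {"dome": 4.729, "flat": 5.015}   # photogrammetric distances, m
errors = {port: abs(d - REFERENCE) for port, d in measured.items()}

for port, err in errors.items():
    print(f"{port}: {err * 100:.1f} cm  (~{err / GSD:.0f}x GSD)")
```

The dome-port closure error stays within roughly a dozen GSDs, while the flat port deviates by several hundred GSDs, matching the global-deformation argument above.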
### Photogrammetric processing of the single R,G,B channels
As already mentioned in Section 2.3, the three channels for the flat port showed different image quality, especially at the corners. The three R,G,B channels for both the flat port and the dome port were then extracted from the RGB images and saved as single-channel images to be processed separately.
For the flat port, only the red and green channels succeeded in the orientation stage. On the contrary, the images in the blue channel, probably too blurred, were only partially oriented. The three channels for the dome port could be oriented without any particular difficulty.
Since the images in the three channels were taken from exactly the same positions and with the same camera network, the results in object space are not expected to differ significantly between them. Thus, an internal comparison between the three channels of each port was performed by comparing the 3D coordinates of the plates obtained separately from each channel. According to the Photoscan manual, the default processing considers
\begin{table}
\begin{tabular}{|c|c|c|} \hline Reference distance & DOME & FLAT \\ \hline
4.723 m & 4.729 m & 5.015 m \\ \hline
**Error** & **0.006 m** (ca. 12x GSD) & **0.292 m** (ca. 600x GSD) \\ \hline \end{tabular}
\end{table}
Table 2: Summary of the length checks
Figure 10: Underwater image acquisition (a) and a sample image from the dome port (b). Final camera networks for the dome port (c) and flat port (d).
Figure 11: Systematic residual errors for the dome port (left) and flat port (right)
a combination of the three R,G,B channels. Thus, since the previous results (Sections 2.6 and 2.7) were obtained in default mode, they were used as the reference for the relative comparisons.
A similarity transformation with an isotropic scale factor was computed to compare the 3D coordinates. The Euclidean distances between corresponding points were used as the measure of discrepancy. Table 3 summarizes the relative comparison for each channel of each port, reported as RMS and maximum discrepancy between 3D points. A maximum difference of 23 cm is observed between the red channel of the flat port and the RGB combination for the same flat port. The solutions from the three channels of the dome port are more consistent among themselves.
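The discrepancy measures used for Table 3 reduce to RMS and maximum Euclidean distances over matched targets. A minimal sketch, assuming the two point sets have already been registered by the similarity transformation:

```python
import math

def discrepancy(points_a, points_b):
    """RMS and maximum Euclidean distance between matched 3D points,
    assumed to be already aligned by a similarity transformation."""
    dists = [math.dist(p, q) for p, q in zip(points_a, points_b)]
    rms = math.sqrt(sum(d * d for d in dists) / len(dists))
    return rms, max(dists)

# Toy example: two targets displaced by 3 cm and 4 cm
rms, worst = discrepancy([(0, 0, 0), (1, 0, 0)],
                         [(0, 0, 0.03), (1, 0.04, 0)])
```

Applied per colour channel against the RGB reference solution, this yields exactly the RMS/Max pairs reported in the table.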
The discrepancy between the reference distance and the one measured in the green channel of the flat port was reduced from 29 to 21 cm (<400x GSD). The difference between the RGB combination and the green channel for the dome port was not significant, confirming the accuracy of the reference measurement.
### 3D modelling of the structure
Dense point clouds were computed at 1/8 and 1/4 linear resolution, respectively, for the dome port and the above-the-water photogrammetric surveys, corresponding to a spatial resolution of 4 mm in object space. An optimized mesh according to [PERSON] et al. (2015) was wrapped over the manually cleaned point clouds for each dataset. The joint alignment procedure presented in [PERSON] et al. (2015, 2013) was used for registering the underwater mesh with the one above the water. The RMS of the transformation was some 3 cm for the dome port and 13 cm for the flat port. Figure 12 shows some renderings of the basin after the alignment of the underwater (dome port) and above-the-water 3D models.
## 3 Discussions and future works
The paper investigated the effect of the different image quality of flat and dome ports on the accuracy of the final 3D model obtained through photogrammetric procedures, and highlighted the importance of image quality for the global accuracy of the final 3D model. Image quality underwater undergoes a very evident degradation due to the sum of optical phenomena arising both from the pressure housing and port used and from the physical and environmental properties of water itself. Indeed, due to the combination of optical aberrations such as astigmatism, heavy distortions and chromatic aberrations, plus an incomplete modelling of unknown systematic image errors, strong global deformations were observed and assessed through simple length measurements for the two ports. A very high error of some 29 cm was found with the flat port. Preliminary calibrations on a portable testfield anticipated a degradation of accuracy when using the flat port by reporting a high RMS of image residuals, a less precise calibration (worse standard deviations for the camera parameters) and a lower 3D point precision in object space. A significantly different image quality per colour channel was observed, and a separate processing for each channel was carried out. As expected, the green channel performed more similarly to the RGB combination than the other channels, since the digital sensor of the Nikon D750 uses a Bayer filter array. The green channel for the flat port provided an accuracy improved by 33% with respect to the processing obtained from the combination of the R,G,B channels. The blue channel proved to be the most problematic and would probably degrade the accuracy when combined with the other channels. This test was important because software applications may combine the three channels by default, which may not be the best procedure for underwater photogrammetry.
The issues raised by this study deserve more experimental tests, for example using different housings and ports. Having observed a strong difference in image quality between the centre and the corners of the images, successive tests will take into account a different weighting of image observations according to optical quality parameters (e.g. the Modulation Transfer Function, MTF).
Figure 12: Renderings of the basin after the alignment of the underwater and above-the-water 3D models.
\begin{table}
\begin{tabular}{c|c|c|c|c|} \cline{2-5} & \multicolumn{2}{c|}{DOME} & \multicolumn{2}{c|}{FLAT} \\ \hline & RMS [m] & Max [m] & RMS [m] & Max [m] \\ \hline RED & 0.005 & 0.013 & 0.096 & 0.229 \\ \hline GREEN & 0.003 & 0.006 & 0.023 & 0.055 \\ \hline BLUE & 0.010 & 0.023 & n/a & n/a \\ \hline \end{tabular}
\end{table}
Table 3: Summary of the relative comparison between 3D coordinates obtained from the single R,G,B channels and those obtained from the RGB combination for each port.
## Acknowledgements
The authors would like to thank Dr. [PERSON] from Politecnico di Milano and [PERSON] for supporting the diving activities during the underwater calibrations. Thanks to NiMAR, which supported this research by providing the photographic underwater equipment and useful insights into pressure housing manufacturing techniques, and to [PERSON] and [PERSON] from the Associazione Sportiva Dilettantistica Club Subacqueo Rane Nere Trento, who supported the preliminary tests in the swimming pool in Trento.
## References
* [PERSON] (1997) [PERSON], 1997. Digital camera self-calibration. _ISPRS Journal of Photogrammetry and Remote Sensing_, Vol. 52(4), pp. 149-159.
* [PERSON] et al. (2006) [PERSON], [PERSON] and [PERSON], 2006. Modelling of chromatic aberration for high precision photogrammetry. _ISPRS Int. Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences_, Vol. 36(5), pp. 173-178.
* [PERSON] et al. (2016) [PERSON] [PERSON], [PERSON], [PERSON] and [PERSON], 2016. Accuracy assessment of GoPro Hero 3 (Black) camera in underwater environment. _ISPRS Int. Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences_, Vol 41(B5), pp.477-483.
* [PERSON] et al. (2012) [PERSON], [PERSON], [PERSON], [PERSON], and [PERSON], 2012. Evaluation of correction methods of chromatic aberration in digital camera images. _ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences_, Vol. I-3, pp. 49-55.
* [PERSON] et al. (2016) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON] and [PERSON], [PERSON], 2016. Geometric and optic characterization of a hemispherical dome port for underwater photogrammetry. _Sensors_, Vol. 16(1).
* [PERSON] et al. (2016) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON] and [PERSON], [PERSON], 2016. Underwater calibration of dome port pressure housings. _ISPRS Int. Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences_, Vol. 40(3-W4), pp.127-134.
* [PERSON] et al. (2015) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON] and [PERSON], [PERSON], 2015. Joint alignment of underwater and above-the-water photogrammetric 3D models by independent models adjustment. _ISPRS Int. Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences_, Vol. 40(5), p.143.
* [PERSON] et al. (2013) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], 2013. A photogrammetric approach to survey floating and semi-submerged objects. _Proc. of Videometrics, Range Imaging and Applications XII, SPIE Optical Metrology_, Vol. 8791, doi: 10.1117/12.2020464.
* [PERSON] et al. (2014) [PERSON], [PERSON], [PERSON] and [PERSON], [PERSON], 2014. Accuracy of typical photogrammetric networks in cultural heritage 3D modeling projects. _ISPRS Int. Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences_, Vol. 40(5), pp. 465-472.
* [PERSON] et al. (2016) [PERSON] [PERSON], [PERSON] and [PERSON], 2016. Influence of raw image preprocessing and other selected processes on accuracy of close-range photogrammetric systems according to VDI 2634. _ISPRS Int. Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences_, Vol 41(B5), pp.107-113.
* [PERSON] et al. (2015) [PERSON] [PERSON], [PERSON], [PERSON], [PERSON] and [PERSON], [PERSON], 2015. 3D surveying & modeling of underground passages in WWI fortifications. _ISPRS Int. Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences_, Vol. 40(5), pp. 17-24.
* [PERSON] et al. (2014) [PERSON], [PERSON], [PERSON] and [PERSON], 2014. Multispectral calibration to enhance the metrology performance of C-mount camera systems. _ISPRS Int. Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences_, Vol. 40(5), p.517.
* [PERSON] (2015) [PERSON], 2015. Calibration techniques for accurate measurements by underwater camera systems. _Sensors_, Vol. 15(12), pp. 30810-30826
* [PERSON] and [PERSON] (2010) [PERSON] and [PERSON], 2010. Photogrammetric modeling of underwater environments. _ISPRS Journal of Photogrammetry and Remote Sensing_, Vol. 65(5), pp.433-444.
---
FLAT VERSUS HEMISPHERICAL DOME PORTS IN UNDERWATER PHOTOGRAMMETRY
F. Menna, E. Nocerino, F. Remondino
https://doi.org/10.5194/isprs-archives-xlii-2-w3-481-2017 | 2017 | CC-BY
---
The Application of Chinese High-spatial-resolution Remote Sensing Satellite Image in Land Law Enforcement Information Extraction
[PERSON]
Rujun Yang
Land and Resources Information Center of Guangxi Zhuang Autonomous Region, Nanning, China - (995023479, 1175890842)@qq.com
Footnote 1: Corresponding author. [PERSON], E-mail: [EMAIL_ADDRESS]
###### Abstract
Chinese high-resolution (HR) remote sensing satellites have made a huge leap in the past decade. Commercial satellite datasets such as GF-1, GF-2 and ZY-3 have emerged in recent years; their panchromatic (PAN) image resolutions are 2 m, 1 m and 2.1 m, and their multispectral (MS) image resolutions are 8 m, 4 m and 5.8 m, respectively. Chinese HR satellite imagery can now be downloaded free of charge for public welfare purposes, and local governments have begun to employ professional technicians to improve traditional land management technology. This paper focuses on analysing the actual requirements of applications in government land law enforcement in the Guangxi Autonomous Region. 66 counties in the Guangxi Autonomous Region were selected for illegal land utilization spot extraction with fused Chinese HR images. The procedure contains: A. Defining the illegal land utilization spot types. B. Data collection: GF-1, GF-2, and ZY-3 datasets were acquired in the first half of 2016 and other auxiliary data were collected in 2015. C. Batch processing: HR images were batch-preprocessed through an ENVI/IDL tool. D. Illegal land utilization spot extraction by visual interpretation. E. Obtaining attribute data with an ArcGIS Geoprocessing (GP) model. F. Thematic mapping and surveying. Through analysing the results of 42 counties, law enforcement officials found 1092 illegal land use spots and 16 suspicious illegal mining spots. The results show that Chinese HR satellite images have great potential for feature information extraction and that the processing procedure is robust.
## 1 Introduction
China has experienced dramatic urbanization in the past decade. Meanwhile, rural illegal buildings, illegal land use and illegal constructions have also emerged in urban and rural areas, especially in large cities. In order to improve the efficiency of law enforcement and supervision for land resources, officials of land and resources began to adopt remote sensing technology to address these problems in 2010. However, purchasing HR data such as SPOT, WorldView or GeoEye imagery was very expensive. By comparing images acquired at various times, detailed land use change can be detected in an accurate and rapid manner. In the past, local land and resources management departments could neither afford HR images nor employ skilled technicians. In recent years, however, domestic HR satellite imagery in China has become freely available for public welfare use. For commercial application, the highest spatial resolution of optical data products can reach 0.5 m.
For GF1, GF2 and ZY3 HR images, the PAN spatial resolutions are 2 m, 1 m and 2.1 m, and the corresponding MS resolutions are 8 m, 4 m and 5.8 m, respectively. China's domestic HR satellite images have so far been used in many fields. For remote sensing monitoring of land use change surveys, [PERSON] compared GF1 with SPOT5 and RapidEye in terms of spatial and spectral resolution ([PERSON] et al., 2015). [PERSON] focused on the use of domestic HR images for object-oriented change detection and extraction of construction areas ([PERSON] et al., 2016). [PERSON] used GF1 data for land cover dynamic monitoring ([PERSON] et al., 2016). After years of practice in land law enforcement, the current situation for remote sensing monitoring is clear: local governments now have the technical ability to process data and extract illegal land utilization information ([PERSON] et al., 2016). The Guangxi government desires to obtain illegal land utilization spots timely and effectively, but local officials' work is generally passive, which leads to high administrative accountability and risk. Therefore, this paper takes 66 counties of Guangxi as the study area, monitoring land use change by analysing domestic HR data from the first half of 2016, extracting the information with RS and GIS technology, and validating the results.
## 2 Methodology
This research shows the method and experimental results of illegal land utilization spot extraction from fused Chinese HR images in Guangxi. The procedure contains: A. Defining the illegal land utilization spot types. B. Data collection. C. Batch preprocessing of Chinese HR remote sensing images. D. Illegal land utilization spot extraction through visual interpretation. E. Obtaining attribute data with an ArcGIS GP model. F. Thematic mapping and surveying.
### Define Illegal Land Utilization Spot Type
The first and most important step is to define the illegal land utilization spot types. According to the Ministry of Land and Resources of China, illegal land use, illegal approval of land use and destroying cultivated land are behaviours of illegal land utilization, which are also the key points for law enforcement and supervision of land resources. In Guangxi's practical work, three types of illegal land utilization spots are defined as follows.
1. First class spot: permanent or temporary buildings and structures built within the scope of agricultural land and unused land without authorization, such as buildings, roads and public facilities.
2. Second class spot: recent filling and bulldozing within the scope of agricultural land and unused land without authorization. It typically occurs near roads and construction land.
3. Third class spot: construction spots, or buildings and structures, ordered to be dismantled last year but still existing this year.
### Image Batch Preprocessing
Preprocessing HR images one by one is the traditional way, which costs much time and labour. In our research, ENVI/IDL batch processing was selected for data preprocessing. ENVI is a geospatial software solution to process and analyse various types of imagery and data, such as multispectral, hyperspectral, LiDAR, and SAR data. The secondary development of ENVI is based on its API and IDL. Most image processing functions of ENVI are provided by ENVI Routines or ENVITask, the latter being a new object-oriented image processing API model since ENVI version 5.1.
In this research, after the data were downloaded from the website, the first preprocessing step was to unzip the files acquired from the data supplier using the IDL function FILE_UNTAR. Then orthorectification was performed by ENVITask('RPCOrthorectification') with each image's own RPC file, producing results in the WGS84 coordinate system. Certain fine features were only visible on the PAN images but were difficult to discern on the MS images. To fully utilize the high resolution of PAN and the rich spectral information of MS, a pan-sharpening process was carried out. The ENVITask('NNDiffusePanSharpening') function was adopted to fuse the multispectral and panchromatic rasters. The nearest neighbour diffusion based pan-sharpening algorithm uses the pixel spectrum as its smallest unit of operation and generates resolution-enhanced spectral images using a mixture model, which differs from most existing algorithms that process each band separately ([PERSON] et al. 2014). A custom task built from ENVITask('GenerateTiePointsByCrossCorrelation'), ENVITask('FilterTiePointsByGlobalTransform'), and ENVITask('ImageToImageRegistration') registered the fused data to the reference data, since the fused data coordinate system differed from that of the reference data. The detailed batch processing procedure is shown in Figure 1.
By running a batch process with the ENVI API, this research applied a single process (or a chain of processes) to a list of files and wrote the results to an output location. An ENVI batch script using ENVITasks should contain five elements to make the process work properly.
1. Start ENVI application, preferably in HEADLESS mode since the UI is not needed.
2. Initialize ENVITask and set constant parameters.
3. Generate a list of input files to do processing on.
4. Create an output filename matching each input file one-to-one.
5. Run the processing over each file in a loop with the parameters of choice.
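The five elements can be sketched as a generic loop. The sketch below is in Python rather than IDL, purely to illustrate the structure; the `process` callback is a placeholder standing in for starting ENVI headless and running the configured ENVITask chain (elements 1-2):

```python
from pathlib import PurePosixPath

def plan_batch(input_files, output_dir, suffix="_fused"):
    """Elements 3-4: list the inputs and derive a one-to-one
    output filename for each of them."""
    pairs = []
    for src in map(PurePosixPath, input_files):
        dst = PurePosixPath(output_dir) / (src.stem + suffix + src.suffix)
        pairs.append((str(src), str(dst)))
    return pairs

def run_batch(input_files, output_dir, process):
    """Element 5: run the processing over each file in a loop.
    `process(src, dst)` stands in for the ENVI chain
    (orthorectification, pan-sharpening, registration)."""
    for src, dst in plan_batch(input_files, output_dir):
        process(src, dst)
```

Keeping the filename planning separate from the processing loop makes the one-to-one input/output mapping easy to verify before any heavy processing starts.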
### Illegal Land Use Spots Extraction
There are three approaches for land utilization change detection: fully automatic computer interpretation, semi-automatic computer interpretation and artificial visual interpretation. At present, mature automatic interpretation methods are mostly based on traditional classification methods, including pixel-based, object-oriented and target-oriented approaches ([PERSON] 2010; [PERSON], [PERSON], and [PERSON] 2010; [PERSON] et al. 2010; [PERSON], [PERSON], and [PERSON] 2011). However, their accuracy is generally low in practical work because of complicated situations ([PERSON] et al., 2012), which aggravates the staff's work in examining the results. Therefore, information extraction by artificial visual interpretation was performed based on image comparison of the same region using texture, colour and shape information. ArcMap 10.3 was used in this research. First, non-construction areas were extracted from the land use data and overlaid on the corrected fused data acquired in 2016. Second, typical features such as buildings, roads and cultivated land were identified in the uncovered part based on each object's spatial, spectral, and texture characteristics. Third, the results were compared to the 2015 reference images with the swipe tool, and map spot delineations were created in vector form (shape files) based on the illegal land utilization spot definitions mentioned above. The procedure is shown as follows:
### Obtaining Attribute Data by GP Model
ModelBuilder is a visual programming language for building geoprocessing workflows in ArcGIS. Geoprocessing models can record spatial analysis and data management processes automatically. Such a model can be described as a chain of sequences of processes and geoprocessing tools, where the output of one process can be used as the input for the next. We created and modified
Figure 1: Data batch preprocessing procedure
Figure 2: Suspicious illegal land use spots extraction procedure
geoprocessing models in ModelBuilder to obtain attribute data such as coordinates, administrative areas, and occupied land areas. An illegal land utilization spot is composed of several land use types such as agricultural land, construction land and unutilized land. The model for obtaining the area of each land use type can be described as follows:
As shown in Figure 3, the ellipse represents vector data and the square with a hammer represents a data processing tool; the suspicious spot is the illegal land utilization spot, and agricultural land, cultivated land and unutilized land are extracted from the 2015 land use data. In this model, six fields were first added to the illegal land utilization spot vector. JCH2016 is the ID of each spot and JCH2015 the ID of the building demolition data. NYD_MJ, GD_MJ, WLYD_MJ, JSYD_MJ and JCMJ represent the areas of agricultural land, cultivated land, unutilized land, construction land and the illegal land use spot, respectively. Then, the Intersect operation was used to compute the geometric intersection of the input features; the whole features or portions of features that overlap in the feature classes are written to the output. The Dissolve operation was used to aggregate features based on specified attributes. The Add Join option was used to obtain the administrative region and the acquisition time of the satellite image. The coordinates of the illegal land use spots were calculated from the values in the attribute table. Finally, the attribute table was sorted accordingly.
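The attribute bookkeeping done by the Intersect and Dissolve steps can be emulated on tabular rows. A simplified sketch, assuming the polygon intersection has already been performed in the GIS; the field names are modelled on those in the text, and the total is a naive sum for illustration only:

```python
from collections import defaultdict

# Area fields per land use class, modelled on the names in the text
FIELDS = {"agricultural": "NYD_MJ", "cultivated": "GD_MJ",
          "unutilized": "WLYD_MJ", "construction": "JSYD_MJ"}

def dissolve_areas(intersections):
    """Aggregate Intersect output rows (spot_id, land_use, area_m2)
    into one attribute record per suspicious spot, mimicking the
    Intersect + Dissolve chain (geometry is handled upstream by the GIS)."""
    records = defaultdict(lambda: {f: 0.0 for f in FIELDS.values()})
    for spot_id, land_use, area in intersections:
        records[spot_id][FIELDS[land_use]] += area
    for record in records.values():
        record["JCMJ"] = sum(record.values())  # illustrative total spot area
    return dict(records)
```

Each record then corresponds to one row of the attribute table that is exported alongside the shape file.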
## 3 Experiment
### Study Area and Data Acquisition
Our study area is located in Guangxi (20\({}^{\circ}\) 54'N - 26\({}^{\circ}\) 24'N, 104\({}^{\circ}\) 26'E - 112\({}^{\circ}\) 04'E), south China, with a total area of 236700 km\({}^{2}\) and 111 counties in total. Most of the area is surrounded by mountains; the central and southern regions are mostly flat. ZY3, GF1, and GF2 data acquired between January and August 2016 were collected. Considering the locations of previous illegal land utilization, 66 counties were selected as hot spots for monitoring land use change, and the illegal land utilization spots were extracted based on these analytic results. Figure 4 shows the acquired datasets and the study area.
For public-benefit use, GF1, GF2, and ZY3 datasets can be freely obtained from the Guangxi Bureau of Surveying, Mapping and Geoinformation (State Administration of Science, Technology and Industry for National Defence, 2015). The corresponding resolutions of these images are shown in Table 2 (China Centre for Resources Satellite Data and Application, 2015).
Figure 5 shows panchromatic, multispectral, and fused images from ZY3, GF1, and GF2. The details of these images demonstrate the good quality of the MS-PAN fusion: the fused images contain not only the spectral information of the multispectral bands but also the spatial detail of the panchromatic band. In these images, country roads, freeways, buildings, villages, ponds, rivers, and farmland can be recognised clearly and easily. During visual interpretation, illegal land utilization spots can therefore be precisely extracted based on the objects' spatial, spectral, and texture characteristics.
Limited by cost and labour, the team could not undertake field inspection, so the spots were issued to the local County Land and Resources Management Departments as shapefiles with an attached Excel table containing the village name, X and Y coordinates, spot area, etc.
Figure 6 shows the distribution of spots at the 1:10 000 scale. The red polygons are suspicious illegal land utilization spots; every map sheet has a unique sheet number together with the village's name and boundaries.
### Results and Analysis
In this research, 2958 suspicious spots were produced and posted. After one month of investigation, 53 local County Land and Resources Management Departments fed back the suspicious spot information sheets, which include the original land use type, the current land use type, and the illegal land use area. Among the suspicious spots, 1046 illegal land use spots and 16 illegal mines were verified. The total illegal land use area is 14.81 km² (of which cultivated land accounts for 6.52 km²). The verified illegal land use spot types include road land, industrial and storage land, public welfare and infrastructure land, rural homestead, and other land use types. The detailed records are shown in Table 3.
Figure 5: ZY3, GF1, GF2 panchromatic, multispectral and their corresponding fusion images.
Figure 6: Issued map with suspicious illegal land utilization spots at 1:10000 (red polygons are suspicious illegal land use spots)
| | Road | Industrial and storage | Public welfare and infrastructure | Rural homestead | Other | Total |
|---|---|---|---|---|---|---|
| Number of spots | 241 | 146 | 59 | 207 | 393 | 1046 |
| Land area (km²) | 9.80 | 1.02 | 0.51 | 0.34 | 3.14 | 14.81 |
| Cultivated land area (km²) | 4.22 | 0.45 | 0.23 | 0.21 | 1.41 | 6.52 |

Table 3: Verification of illegal land use
## 4 Conclusion
In this paper, ZY3, GF1, and GF2 data were collected free of charge from the Guangxi Bureau of Surveying, Mapping and Geoinformation. Owing to cloudy weather, data acquired from May to August made up the majority of the total. According to the demands of practical land law enforcement work, defining the illegal land utilization spot types is essential. To save time and improve efficiency, a batch pre-processing method was developed in ENVI for handling Chinese HR remote sensing images. Illegal land utilization spots were then extracted by visual interpretation in ArcGIS, which was also the most labour-intensive part of the work. Attribute data were obtained from the GP model, the maps were produced, and an Excel file recording the locations and other information of the suspicious illegal land use spots was compiled; these results were sent as shapefiles to the local County Land and Resources Bureaus. Judging from their feedback, the experiment achieved good results and demonstrates the significant potential of Chinese HR satellite imagery for land law enforcement. The procedure applied in this paper is efficient, and grassroots law enforcement officials also confirmed that accuracy improved as image resolution increased. As China's science and technology progress, more and more HR satellites will be launched, which will greatly improve the accuracy of land monitoring work.
## References
* [PERSON] (2010) [PERSON], 2010. "Object Based Image Analysis for Remote Sensing." _ISPRS Journal of Photogrammetry and Remote Sensing_ 65 (1):2-16. doi: 10.1016/j.isprsjprs.2009.06.004.
* [PERSON] et al. (2010) [PERSON], [PERSON], and [PERSON], 2010. "Automatic Change Detection of Buildings in Urban Environment from Very High Spatial Resolution Images Using Existing Geodatabase and Prior Knowledge." _ISPRS Journal of Photogrammetry and Remote Sensing_ 65 (1):143-53. doi: 10.1016/j.isprsjprs.2009.10.002.
* [PERSON] et al. (2012) [PERSON], [PERSON], [PERSON], and [PERSON], 2012. \"Object-based Change Detection.\" _International Journal of Remote Sensing_ 33 (14):4434-57. doi: 10.1080/01431161.2011.648285.
* China Centre for Resources Satellite Data and Application, 2015. Introduction to ZY-3, GF-1 and GF-2 Satellites. [http://www.cresda.com/EN/satellite](http://www.cresda.com/EN/satellite) (accessed 3-5 November 2015).
* [PERSON] et al. (2016) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], et al., 2016. Research on GF 1 in the Application of the Land Cover Dynamic Monitoring. _Geomatics&Spatial Information Technology_, 39(6), pp. 63-66. (Chinese)
* [PERSON] et al. (2010) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], and [PERSON], 2010. \"Fast Object-Level Change Detection for VHR Images.\" _IEEE Geoscience and Remote Sensing Letters_ 7 (1):118-22. doi: 10.1109/LGRS.2009.2028438.
* [PERSON] et al. (2016) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], 2016. The Practices of the Land Law Enforcement Inspection Base on the 3S Technology. _Beijing Surveying and Mapping_, (1) pp. 139-143. (Chinese)
* [PERSON] et al. (2014) [PERSON], [PERSON], and [PERSON], 2014. \"Nearest-neighbor Diffusion-based Pan-sharpening Algorithm for Spectral Images.\" _Optical Engineering_ 53 (1):013107. doi: 10.1117/1.oe.53.1.013107.
* 362, 392. (Chinese)
* [PERSON] et al. (2016) [PERSON], [PERSON], [PERSON], et al., 2016. The Application of High-Resolution Remote Sensing and Change Detection Technologies in Law Enforcement and Supervision of Land Resources, _Journal of Geo-information Science_, 18(7), pp. 962-968.DOI:10.3724/SP.J.1047.2016.00962. (Chinese)
* State Administration of Science Technology and Industry for National Defense. Temporary Measures for the Management of Major Satellite Remote Sensing Data for the Ground Observation System, 2015. _Satellite Application_ (11), pp.71-73. (Chinese)
* [PERSON] et al. (2011) [PERSON], [PERSON], and [PERSON], 2011. \"Object-oriented Change Detection Based on The Kolmogorov-Smirnov Test Using High-resolution Multispectral Imagery.\" _International Journal of Remote Sensing_ 32 (20):5719-40. doi: 10.1080/01431161.2010.507263.
---

N. Wang, R. Yang: THE APPLICATION OF CHINESE HIGH-SPATIAL-RESOLUTION REMOTE SENSING SATELLITE IMAGE IN LAND LAW ENFORCEMENT INFORMATION EXTRACTION. ISPRS, 2018. https://doi.org/10.5194/isprs-archives-xlii-3-1751-2018 (CC-BY)

---
Registration and Feature Extraction from Terrestrial Laser Scanner Point Clouds for Aerospace Manufacturing
[PERSON]*
[PERSON]
Department of Civil, Environmental and Geomatic Engineering, University College London
Gower Street, London WC1E 6 BT, U.K. [EMAIL_ADDRESS], [EMAIL_ADDRESS]
###### Abstract
Aircraft wing manufacture is becoming increasingly digitalised. For example, it is becoming possible to produce on-line digital representations of individual structural elements, components and tools as they are deployed during assembly processes. When it comes to monitoring a manufacturing environment, imaging systems can be used to track objects as they move about the workspace, comparing actual positions, alignments, and spatial relationships with the digital representation of the manufacturing process. Active imaging systems such as laser scanners and laser trackers can capture measurements within the manufacturing environment, which can be used to deduce information about both the overall stage of manufacture and the progress of individual tasks. This paper is concerned with the in-line extraction of spatial information such as the location and orientation of drilling templates, which are used with hand drilling tools to ensure drilled holes are accurately located. In this work, a construction grade terrestrial laser scanner, the Leica RTC360, is used to capture an example aircraft wing section in mid-assembly from several scan locations. Point cloud registration uses 1.5" white matte spherical targets that are interchangeable with the SMR targets used by the Leica AT960 MR laser tracker, ensuring that scans are connected to an established metrology control network used to define the coordinate space. Point cloud registration was achieved to sub-millimetre accuracy when compared to the laser tracker network. The locations of drilling templates on the surface of the wing skin are automatically extracted from the captured and registered point clouds. When compared to laser tracker referenced hole centres, laser scanner drilling template holes agree to within 0.2 mm.
A 2022
## 1 Introduction
In an increasingly digital world, improving the level of automation in manufacturing is a requirement for forward progress, particularly for increased productivity. From aircraft to automobiles to mega-ships, large-scale multi-component assembly requires high levels of accuracy and precision, and can benefit from monitoring and modelling of the complex manufacturing processes. As components move and change through different stages of the manufacturing process, they accumulate variations. For a task such as aircraft manufacturing and assembly, components with miniscule variations combine to produce unique physical products, even if the products spawned from the same basic digital design model.
For multinational manufacturing corporations, the availability of a product-specific digital twin that is metrologically accurate, instrumental to the realization of Industry 4.0, is invaluable for both short-term and long-term monitoring and maintenance. The data required to create a digital twin, spanning both the temporal and spatial domains, can be captured using imaging technologies such as laser scanning and laser tracking. Live data from the factory floor can illustrate the real-time physical state of the product, and when matched back to the design data, can ease communication of product status and be used for quality assurance.
In order to track objects with an optical measurement system as they move around a physical environment, the objects to be tracked are most often equipped with recognizable targets that can be mapped into local coordinate frames or datums. From a productivity sense, it is inefficient to place targets on every component section that goes through an assembly line. A more efficient strategy would be to place targets on the jig structure supporting the component, such as an aircraft wing section, so that targets are independent of the manufactured object. This assumes a consistent relationship between the manufactured object and its supporting structure.
Currently, drilling templates, examples of which are shown in Figure 1, are manually affixed to the wing skin using pre-drilled alignment holes. The technique described herein explores capturing and modelling the drilling templates that are fixed to the aircraft wing surface to ensure that drill holes are made in the correct design locations. The process involves automatically and accurately locating individually shaped drilling templates placed in specified locations on the surface of an aircraft wing. Information as to the correct placement and identification of the drilling templates is valuable, as drilling, countersinking and fastening account for as much as 65% of the cost of aircraft assembly ([PERSON], 2013).
Figure 1: Example drilling templates
## 2 Related Works
### Registration
Network design is an important consideration when planning a measurement survey that requires deployment of a network of sensors. Factors such as resolution, overlap, stand-off distance, incidence angle, and targeting must be considered, as well as optimizing the amount of data captured for the desired tasks. There has been significant work done on optimal network design in geodesy ([PERSON], 1982) and photogrammetry ([PERSON], 1984) for example, but little when it comes to terrestrial laser scanning (TLS) for high precision surveys in manufacturing environments. A review of planning for scanning (P4S) using TLS in construction can be found in ([PERSON], [PERSON], and [PERSON], 2021). An important piece of work was published by ([PERSON] and [PERSON], 2019) regarding network design for scanning building interiors and exteriors with TLS. However, in most cases, TLS network planning is done empirically using the operator's prior experience and is largely dependent on the site or object of interest.
The main difference between laser scanning in buildings and laser scanning in a manufacturing space is that the former often requires 100% coverage ([PERSON], [PERSON], and [PERSON], 2003), whereas scanning for manufacturing tasks frequently needs to be a localized approach due to the required level of detail. In addition, the majority of work on TLS network design has considered surfaces typical in building facades ([PERSON] and [PERSON], 2002; [PERSON], 2005; [PERSON] and [PERSON], 2009), rather than aerospace materials such as coated metals and composites.
**Targeting.** Artificial targets or features installed on or around an object can refine registration and unequivocally signalise locations and features of interest, versus the use of features such as edges, holes, or structure in the light reflected from the surface itself. However, marker installation is time consuming and, in most manufacturing cases, it is desirable to avoid the placement of targets directly on the manufactured piece. Targetless registration approaches rely heavily on the geometry and optical surface characteristics of the object being scanned, the strength of the observation network and the level of overlap between neighbouring scans. Alignment errors can arise during registration that then propagate into the measurements derived from the point cloud, especially when components have repeating features, a characteristic common in manufacturing.
Alternatively, targets can be used to strengthen the registration process, and can be used to reduce the level of overlap, and therefore the number of scans, required to represent an object. [PERSON] et al. (2021) determined that using a targeted approach increases the time spent before commencing the scanning process, however, it reduces both the time spent registering the scans together and the time spent collecting the scans as less overlap is required between the station set ups compared to a targetless approach.
For large, complex objects such as aircraft components, with varying levels of surface reflection due to varying surface finishes, the density of the captured points often varies greatly based on the proximity and imaging geometry of the object to the scanner. Aircraft wings are designed to aerodynamic requirements with a high level of smooth surface continuity. As a result, each wing cross section looks very similar to its neighbours, challenging a registration process that depends on identifying unique features, especially along a single axis, such as along the length of an aircraft wing.
Finally, it must be considered that not all metrology systems can make use of the same targets. The installation of targets becomes more complex and time-consuming if each measurement device in a network requires its own physical target type to be installed in the measurement environment. There are three main kinds of TLS targets: paper targets installed on flat surfaces, paddle targets installed on magnetic mounts or survey tripods, and spherical targets. Spherical targets are largely recognized as the most precise ([PERSON] et al., 2011), with sphere fitting being used to precisely locate the targets within the point clouds. [PERSON] et al. (2011) tested both a phase-based and a time-of-flight scanner, with both types performing best when dealing with spherical targets compared to checkerboard paper and paddle targets. However, none of the tests achieved sub-millimeter level registration errors likely due to the quality of the laser scanner and registration method. It is also worth considering the challenge of using paper targets because they cannot be viewed from oblique angles, unlike paddle and spherical targets which stick out from the surface to which they are affixed.
The technique of spherical target fitting from point clouds is built-in to most scanner and third-party point cloud registration software. It has also been widely researched with various changes to the general sphere fitting method, for example by adding fine registration to the process ([PERSON] et al., 2015), dealing with occluded sphere edges ([PERSON] et al., 2014), the implementation of a modified RANSAC procedure ([PERSON], 2019) and the use of geometrical constraints to find sphere centres ([PERSON], [PERSON], and [PERSON], 2009). [PERSON] et al. (2021) investigated various registration errors and uncertainties that persist when using a laser scanner to measure gaps in aircraft wing assembly, concluding that the use of spherical targets can improve the efficiency of registration in PolyWorks.
### Feature Extraction
Once registered, manually extracting information from point clouds is often time consuming, repetitive work that can be automated with algorithms involving pattern recognition and logic-based rules. Due to the nature of manufactured objects having regular shapes embedded within their design such as lines, planes and circles, pattern recognition techniques can be used to extract and model the shapes present within a point cloud to produce a 3D digital model of the physical object.
There has been a large amount of work done on the extraction and modelling of regular shapes from point clouds. Some of the most important developmental work in the extraction of regular shapes, referred to as geometric primitives, from point clouds first used range data ([PERSON] and [PERSON], 1993; [PERSON], [PERSON], and [PERSON], 1987). Fitting shapes such as lines, planes and cylinders was further developed in ([PERSON], 1991; [PERSON], [PERSON], and [PERSON], 1998). Since then, strategies have been widely used for a variety of shape-fitting tasks, adapting over time to accommodate increasingly growing point counts as laser scanning and computer processing technologies have advanced. A comprehensive review of segmentation strategies for point cloud feature extraction, ([PERSON], [PERSON], and [PERSON], 2017), gives an overview of the state of the art in point cloud feature extraction. Currently, the task of point cloud feature extraction often uses machine learning or deep learning strategies, a review of which can be found in ([PERSON] et al., 2019).
The holes in drilling templates are a special case of circle detection because they are actually very short cylinders. Some of the prevalent work in cylinder fitting includes ([PERSON] and [PERSON], 2001; [PERSON] and [PERSON], 2005; [PERSON], [PERSON], and [PERSON], 2015; [PERSON] et al., 2019). Generalized solutions for circle fitting are largely included as a first step in the 3D problem of cylinder fitting, where a 3D data set is sliced perpendicularly to the direction of the cylinders and circles are found within the 2D slice. Examples are given by ([PERSON] et al., 2019), using Gaussian space to create a sphere ([PERSON] and [PERSON], 2001; [PERSON], [PERSON], and [PERSON], 2015) or using a 2D Hough transform to extract cylinder parameters ([PERSON] and [PERSON], 2005). Additional circle-focused solutions include the extraction of non-overlapping ellipses ([PERSON] and [PERSON], 2021), and circle fitting in MLS datasets ([PERSON], [PERSON], and [PERSON], 2018).
In previous work, holes have largely been treated as circles, as this simplifies the extraction problem into a 2D case. A small scale prototype solution was developed by ([PERSON] et al., 2017) which included a movable inspection cabin housing an optical measuring probe and imaging system to measure completed drill holes. A boundary point detection (BPD) method was developed in ([PERSON], [PERSON], and [PERSON], 2018) based on the idea that a circle created using a boundary point (BP) and its two neighbours should not include any other points. The BP detector created by ([PERSON], [PERSON], and [PERSON], 2018) was further refined by ([PERSON] et al., 2022), who introduced a density-based threshold making the point-in-circle problem more robust to small outliers. They used their circle extraction algorithm to find tiny drill holes in an aircraft engine nacelle. However, their threshold was tuned to be dataset specific, and worked best only when dealing with a dataset of extremely high density (0.06 mm between adjacent points).
## 3 Methods
### Registration
In this project, white matte 1.5" magnetized spherical scanning targets (Figure 2) were placed in metrology nests located around a section of an aircraft wing. Spheres were chosen as they are physically interchangeable with the 1.5" spherical mounted reflectors (SMRs) used by laser trackers in industrial manufacturing.
First, a Leica AT960 MR laser tracker captured the SMRs in their magnetic metrology nests from multiple stations and computed their locations using a unified spatial metrology network (USMN) adjustment in New River Kinematics (NRK) Spatial Analyzer (SA) software, version 2022.2.0624.8. The spherical scanner targets were then placed in the nests and a Leica RTC360 scanned the environment from multiple locations in high resolution (3 mm at 10 m). The RTC360 is a phase-based scanner with a laser wavelength of 1550 nm.
A built-in function in SA was used to automatically extract spheres from the point clouds. Given a sphere diameter, search tolerance, and minimum number of points found on the sphere, the SA algorithm can automatically find and extract spherical targets. The search tolerance is the maximum allowable deviation for a given point from the desired diameter in order to be considered a fit to the sphere (Spatial Analyzer, 2021). The diameter of the spherical targets is 1.5" (38.1 mm), and the optimal values for the other parameters were experimentally determined to be a search tolerance of 0.3 mm and a minimum number of 50 points on the sphere surface. Alterations of these parameters either made the auto detect function find too many (false positives) or too few spheres in the point cloud. The centre of the extracted sphere is then automatically found using the ASTM E3125-17 (E57 Committee, 2017) fitting algorithms on each extracted set of sphere points.
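The sphere-centre estimation can be illustrated with a simple algebraic least-squares fit. This is a sketch, not the ASTM E3125-17 algorithm used by SA; the nominal-diameter filter mirrors the search-tolerance idea described above.

```python
import numpy as np

def fit_sphere(points):
    """Algebraic least-squares sphere fit: solve x^2+y^2+z^2 = 2ax+2by+2cz+d
    for centre (a, b, c) and offset d, then r = sqrt(d + |centre|^2)."""
    P = np.asarray(points, dtype=float)
    A = np.hstack([2.0 * P, np.ones((len(P), 1))])
    b = (P ** 2).sum(axis=1)
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    centre = x[:3]
    radius = np.sqrt(x[3] + centre @ centre)
    return centre, radius

def filter_points(points, centre, nominal_diameter, tol):
    """Keep points whose range from the centre is within `tol` of the nominal
    radius -- mirroring the search-tolerance criterion in the text."""
    P = np.asarray(points, dtype=float)
    r = np.linalg.norm(P - centre, axis=1)
    return P[np.abs(r - nominal_diameter / 2.0) <= tol]
```

For noise-free data on a sphere the algebraic fit is exact; with scanner noise a geometric (orthogonal-distance) refinement would normally follow.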
Point clouds output from each scanner location were then registered to the common coordinate system by performing a least squares best fit 7 parameter transformation with the laser tracker-measured target coordinates as the nominals. In the work presented here, the point clouds are registered into the coordinate frame set out by the laser tracker. The quality of the registration is evaluated by comparing the difference between the centre of the sphere measured using the laser tracker, and the centre of the sphere measured using the laser scanner.
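The 7 parameter best-fit transformation can be sketched with the standard Umeyama/Kabsch SVD solution. This is a generic similarity-transform estimator (rotation, translation, uniform scale), not the SA implementation.

```python
import numpy as np

def best_fit_similarity(src, dst):
    """Least-squares similarity transform mapping src -> dst (Umeyama 1991).
    Returns (scale, R, t) such that dst ~ scale * R @ src + t."""
    src = np.asarray(src, float)
    dst = np.asarray(dst, float)
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    S, D = src - cs, dst - cd                      # centred point sets
    H = S.T @ D / len(src)                         # cross-covariance
    U, sig, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))         # reflection guard
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    scale = (sig * [1.0, 1.0, d]).sum() / (S ** 2).sum() * len(src)
    t = cd - scale * R @ cs
    return scale, R, t
```

Registration quality can then be evaluated, as in the text, by comparing transformed scanner sphere centres against the tracker-measured nominals (RMS of the residual vectors).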
### Feature Extraction
The registered point clouds are used as the input dataset for an algorithm that automatically locates hole centres in a drilling template to identify if it has been placed within a specified tolerance. The algorithm works by extracting the close-to planar top surface of the drilling template and isolating it from the wing surface. This is done using a variation of RANSAC, originally developed by ([PERSON] and [PERSON], 1981; [PERSON] and [PERSON], 1981) and modified to include the use of M-estimators (MASC) instead of a completely random selection of seed points ([PERSON] and [PERSON], 2000).
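A minimal version of the random-sample plane extraction can be sketched as follows. This uses plain RANSAC inlier counting rather than the MSAC M-estimator cost described above, and the tolerance values are illustrative.

```python
import numpy as np

def ransac_plane(points, dist_tol=0.5, iters=200, rng=None):
    """Plain RANSAC plane extraction. Returns (unit normal n, offset d)
    with n . p + d ~ 0 for inliers, plus a boolean inlier mask."""
    P = np.asarray(points, float)
    rng = rng or np.random.default_rng(0)
    best = (None, None, np.zeros(len(P), bool))
    for _ in range(iters):
        a, b, c = P[rng.choice(len(P), 3, replace=False)]
        n = np.cross(b - a, c - a)
        norm = np.linalg.norm(n)
        if norm < 1e-12:
            continue                      # degenerate (collinear) sample
        n /= norm
        d = -n @ a
        inliers = np.abs(P @ n + d) <= dist_tol
        if inliers.sum() > best[2].sum():
            best = (n, d, inliers)
    return best
```

Removing the detected planar segment isolates the drilling-template surface from the wing skin, as in Figure 7.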
The algorithm uses a modification of the original BPD method developed by ([PERSON], [PERSON], and [PERSON], 2018) based on the idea that a circle created using a BP and its two nearest neighbours should not include any other points. The BPD method was further refined by ([PERSON] et al., 2022) who introduced a density-based threshold making the points-in-circle problem more robust to small outliers around the edge of a boundary, Figure 3. However, the threshold was tuned for extremely high-density simulated data, and therefore is not applicable to a construction grade laser scanner such as the RTC360, Figure 4. In addition, their work does not deal with holes in physical close proximity to each other. In the work presented here, the threshold is relaxed to work for a larger variation of point cloud densities.
Figure 2: White spherical 1.5” laser scanner target sitting in magnetic nest
In practice, the boundary points are detected by first computing the local resolution, \(\beta_{i}\), around the seed point. This is done by computing the mean of the distances between the seed point and all of its neighbours, \(\mu_{i}\) and the standard deviation of those distances, \(\sigma_{i}\) (Equation 1).
\[\beta_{i}=\mu_{i}+2\sigma_{i} \tag{1}\]
For every seed point, a circle is computed between it and every combination of two other points in its neighbourhood. The point is denoted as a potential boundary point if the radius of any created circle is larger than \(\beta_{i}\). This is the original BPD method developed in [11], which does not deal with outliers found in scanning data close to the edge of a scanned object, where each laser spot may contain systematic biases due to its footprint extending beyond the surface boundaries. As seen for point \(p_{a}\) in Figure 3, a boundary point may contain points within its created circle. In order to deal with such data, a boundary point inclusion threshold, \(n_{BPD}\), is computed by comparing the number of points in the circle, \(c_{min}\), to the total number of points in the neighbourhood, \(K\) (Equation 2).
\[n_{BPD}=1-\frac{c_{min}}{K} \tag{2}\]
The \(n_{BPD}\) value is computed for each boundary point, and potential boundary points are kept if the \(n_{BPD}\) value is larger than the threshold. The threshold value in [11] was experimentally derived to be 0.95 based on the point cloud resolution and the qualities of the scanned surface. The neighbourhood size is not given.
The BPD method is successful in extracting the boundary points of the drilling template (Figure 4). However, the method is very sensitive to a change in the \(n_{BPD}\) threshold value, with a higher threshold meaning less points are detected, while a lower threshold means more points are detected as boundary points. Figure 4 demonstrates that in data from the RTC360 scanning in high resolution, the threshold used by [11] does not detect enough BPs to determine the outline of the holes in the drilling template. The outline of the drill holes as well as the edges of the drilling template become clearer as the threshold is decreased, meaning more points are permitted within the created boundary circle.
Due to the inherent presence of outliers and gaps when using point clouds, particularly those captured by a construction grade terrestrial laser scanner, it can be hypothesized that the more boundary points extracted, the better the estimate of the circle centre will be, up to a breaking point. In this dataset, a threshold of 0.7 included sufficient BPs to form the complete outline of each template hole without gaps. The continuity of the detected boundary points is key, as they need to be clustered in order to estimate the centre of each individual hole. If gaps within the outline of the circle are kept to a minimum, i.e. by extracting more points, a complete outline of the holes can be obtained.
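Equations 1 and 2 can be sketched directly. The toy 2-D implementation below assumes the K nearest neighbours of the seed point have already been found, and uses a loose illustrative threshold; it is not the authors' tuned implementation.

```python
import math
from itertools import combinations

def circumcircle(p, q, r):
    """Centre and radius of the circle through three 2-D points (None if collinear)."""
    ax, ay = p; bx, by = q; cx, cy = r
    d = 2 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    if abs(d) < 1e-12:
        return None
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
          + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
          + (cx**2 + cy**2) * (bx - ax)) / d
    return (ux, uy), math.dist((ux, uy), p)

def is_boundary(p, neighbours, threshold=0.55):
    """BPD test for seed point p: local resolution beta_i (Eq. 1), then look
    for a neighbour pair whose circumcircle with p is large and nearly empty
    (Eq. 2). `neighbours` are the K nearest points to p, excluding p."""
    dists = [math.dist(p, q) for q in neighbours]
    mu = sum(dists) / len(dists)
    sigma = (sum((d - mu) ** 2 for d in dists) / len(dists)) ** 0.5
    beta = mu + 2 * sigma                     # Equation 1
    K = len(neighbours)
    for a, b in combinations(neighbours, 2):
        cc = circumcircle(p, a, b)
        if cc is None:
            continue
        centre, radius = cc
        if radius <= beta:
            continue                          # circle too small to matter
        inside = sum(1 for q in neighbours
                     if q not in (a, b) and math.dist(q, centre) < radius)
        if 1 - inside / K >= threshold:       # Equation 2
            return True
    return False
```

On a uniform grid, an interior point is rejected while a point on the edge of the grid is flagged as a boundary point.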
Once boundary points are identified, a connectivity analysis is performed to group points belonging to individual holes. In a validation step, estimated hole centres are compared to the reference hole centres measured by the laser tracker. The only prior information required is the hole diameter and the minimum separation between holes in the drilling template.
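The per-hole centre estimate requires a circle fit; the paper does not name the estimator, so the sketch below uses the common algebraic (Kåsa) least-squares fit as a stand-in.

```python
import math
import numpy as np

def fit_circle(points):
    """Algebraic (Kasa) least-squares circle fit to 2-D boundary points.
    Solves x^2 + y^2 = 2ax + 2by + c, giving centre (a, b) and
    r = sqrt(c + a^2 + b^2)."""
    P = np.asarray(points, float)
    A = np.hstack([2.0 * P, np.ones((len(P), 1))])
    b = (P ** 2).sum(axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    cx, cy, c = sol
    return (cx, cy), math.sqrt(c + cx * cx + cy * cy)
```

The fitted centre of each boundary-point cluster can then be compared against the tracker-referenced hole centre, as in the validation step described above.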
## 4 Results and Analysis
### Registration
Once the spheres are automatically extracted from the scans, the scans are registered to the network of control points as measured by the laser tracker. This is done using a 7 parameter best-fit transformation between the sphere centres. Each scan is individually registered to the control network, meaning each spherical target centre has slightly different coordinates in each scan. Analysis of the output discrepancy vectors shows no discernible systematic error in the registration process: discrepancies lie between 0 and 1 mm, and the registered target centres are evenly spread in 3D around the target centres measured by the laser tracker. It is nevertheless important to note that small discrepancies remain.
The quality of the sphere fit can be quantified by computing the 3D difference between the laser tracked and the laser scanned sphere centres for the same target after the best fit transformation has been performed. A simplified registration case is presented in Figure 5, where twelve repeated scans were captured of a linear set of targets; the instrument was not moved between scans.
Figure 4: Boundary point differences for points-in-circle, \(n_{BPD}\), threshold
Figure 3: Boundary Point Detection, three holes create a circle [11]
Sphere fit results are presented in Figure 6. Note that the number of spheres extracted from each scan varies, with targets closer to the scanner successfully extracted more often. Since each scan was matched independently, discrepancies between each tracked and scanned sphere centre are present. This variation is seen in Figure 6, where the spread of the difference is shown for each target. The larger the spread, the larger the variation in the difference between sphere centres in each scan.
The quality of the automated sphere extraction depends on both the difference between the centres of the laser scanned and laser tracked sphere, and the ability of the algorithm to automatically extract the spheres from the point cloud. Given the same fit tolerance and parameters, the number of targets extracted from the point cloud decreased as the distance from the instrument increased. This can be seen for Targets 6 and 7 in Figure 6, where only 7 and 4 spheres were extracted, respectively. The best fitting results are for Targets 2 and 3, located 2.7 m and 3.9 m from the instrument, respectively, and with 11/12 targets automatically extracted. The majority of the spherical targets were consistently extracted at ranges of 2 to 4 m, and the variation between laser tracked sphere centres and laser scanned sphere centres was consistently of the order of 0.4 mm.
The spread of the difference in sphere fit for Targets 4 and 5 is large, ranging from 0.1 mm to 1.6 mm. However, the distribution of the differences for Targets 4 and 5 do not follow the same pattern. The minimum and maximum difference for Target 4 could be outliers as without those two points, the distribution and mean would be very similar to the distribution and mean of Target 3. However, the sphere fit differences for Target 5 are evenly spread with no obvious outliers, meaning the sphere fit repeatability begins to degrade around 7 m. The steep inclination angle at Target 1 likely affected the distribution of the sphere fit differences of this closest point.
Due to variation in the distribution of the targets, the registration result differed between the experiments conducted. However, a best-fit adjustment with sub-millimetre RMS error was consistently achieved for scanning volumes up to 10 m³.
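The best-fit step itself (performed here in commercial software) reduces to the classic least-squares rigid-body problem between matched sphere centres and laser-tracker control points. A compact Kabsch-style sketch, with illustrative names and not the exact adjustment used, is:

```python
import numpy as np

def best_fit_rigid(src, dst):
    """Least-squares rigid transform (Kabsch): R, t minimising ||R @ src + t - dst||."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)          # cross-covariance of centred point sets
    U, _, Vt = np.linalg.svd(H)
    S = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # guard against reflections
    R = Vt.T @ S @ U.T
    t = mu_d - R @ mu_s
    return R, t

def registration_rms(src, dst, R, t):
    """RMS of post-fit residuals, the quantity reported above."""
    res = (R @ src.T).T + t - dst
    return float(np.sqrt((res ** 2).sum(axis=1).mean()))
```

With well-distributed targets, the residual RMS of this solve is what the sub-millimetre figure above refers to.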
### Feature Extraction
The hole detection algorithm begins by using MSAC to extract a planar segment from the underlying wing skin (Figure 7).
Boundary points are then extracted using the method described in Section 3.2. A linear model-driven MSAC is used to remove the linear edges of the drilling template. Next, a key step is separating the extracted boundary points into groups belonging to individual holes in the drilling template, which is essential for computing the centre of each hole. The procedure is similar to region growing, in that points are clustered as long as the distance between points is smaller than a threshold value, in this case the minimum distance between holes in the template, 3 mm (Figure 8).
Finally, a circle is fit to each group of extracted points and a hole centre estimated. This hole centre can then be compared to the hole centre measured by the laser tracker to validate the method (Figure 9 and Table 1).

Figure 5: Scanning set-up to test automatic extraction and quality of sphere fit

Figure 6: 3D difference between laser tracked sphere centre and laser scanned sphere centre vs. distance from instrument. The number of times each target was automatically detected from the scans is shown beside the target number

Figure 7: Planar segment (blue) detected from the point cloud (black); removes the wing skin surface and the interior of the template holes

Figure 8: Points grouped by connectivity; pinching visible on either side of some drill holes
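The grouping and circle-fitting steps above can be sketched as follows. This is a simplified illustration, assuming the boundary points have already been projected into the template plane, with an algebraic Kasa circle fit standing in for whatever fit the authors used:

```python
import numpy as np

def cluster_by_distance(points, max_gap):
    """Region-growing-style grouping: a point joins a cluster while its
    nearest clustered neighbour is closer than max_gap (the minimum
    hole spacing, 3 mm for this drilling template)."""
    remaining = list(range(len(points)))
    clusters = []
    while remaining:
        cluster = [remaining.pop(0)]
        grew = True
        while grew:
            grew = False
            for i in remaining[:]:
                if np.min(np.linalg.norm(points[i] - points[cluster], axis=1)) < max_gap:
                    cluster.append(i)
                    remaining.remove(i)
                    grew = True
        clusters.append(points[cluster])
    return clusters

def fit_circle(pts):
    """Algebraic (Kasa) least-squares circle fit; returns (cx, cy, r)."""
    A = np.column_stack([2.0 * pts[:, 0], 2.0 * pts[:, 1], np.ones(len(pts))])
    b = (pts ** 2).sum(axis=1)
    (cx, cy, c), *_ = np.linalg.lstsq(A, b, rcond=None)
    return float(cx), float(cy), float(np.sqrt(c + cx ** 2 + cy ** 2))
```

Each fitted (cx, cy) would then be compared against the corresponding laser-tracker hole centre, as done for Table 1.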
The drilling templates were scanned from various ranges and incidence angles in order to determine the ideal orientation, given the reflective nature of the sample brushed-aluminium drilling template. Point cloud data were tested at their original resolution and downsampled to 1 mm and 2 mm between points to evaluate the effect of density on the performance of the algorithm. The original point density was not constant over the drilling template owing to the increasing distance from the scanner location, but fluctuated around 0.3 mm on the section of the point cloud in question. The best performance of the algorithm, in terms of minimizing the difference between the reference hole centres and the estimated hole centres, was obtained from two scans at opposing oblique angles on either side of the drilling template, at a range of 2-3 m and an incidence angle of 40-50 deg, with a mean difference of 0.16 mm. The increase in incidence angle minimized the reflections from ambient lighting, improving the completeness of the point clouds compared to scans taken at minimal incidence angles. In addition, the scans were captured from approximately the same height as the template but from two opposing incidence angles, cancelling out the bias that appears when scanning short cylinders from high incidence angles.
As seen in Table 1, the number of data points in the point cloud, the number of points in the extracted plane, and the number of boundary points detected all decrease as the distance between points is increased. A larger point cloud brings longer processing times, but improvements in the mean difference between the reference hole centre and the hole centre estimated by the algorithm, as well as in the completeness of holes detected. The delta x and delta y values indicate the 2D direction of the difference between reference and estimated hole centres, in which an offset is present when using single scans taken at oblique angles (Figure 9). Combining the scans helps to reduce the directional offsets by increasing coverage, providing a complete representation of each template hole. It should be noted that scans from opposing incidence angles create a kind of self-compensation, and larger discrepancies would be expected from single scans or from a wider distribution of drilling templates.
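The downsampling referred to above can be realised in several ways; a uniform voxel-grid centroid filter (our illustration, not necessarily the exact resampling used) looks like:

```python
import numpy as np

def voxel_downsample(points, cell):
    """Replace all points falling in each (cell x cell x cell) voxel by their centroid."""
    keys = np.floor(points / cell).astype(np.int64)      # voxel index per point
    _, inverse = np.unique(keys, axis=0, return_inverse=True)
    inverse = inverse.ravel()
    counts = np.bincount(inverse).astype(float)
    out = np.empty((len(counts), points.shape[1]))
    for dim in range(points.shape[1]):
        out[:, dim] = np.bincount(inverse, weights=points[:, dim]) / counts
    return out
```

With `cell = 0.001` or `0.002` (metres) this reproduces the kind of 1 mm and 2 mm spacings tested in Table 1.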
An example result from two scans is shown in Figure 9, where the reference (measured by the laser tracker) and estimated (from the point cloud captured by the laser scanner) hole centres and hole outlines are presented. The difference between the reference and estimated hole centres is quantified in Table 1 by the mean difference and the delta x and y values. Additionally, the bias represented by the delta x and y values is shown in Figure 9, where the directional differences between reference and estimated hole centres have been multiplied by a factor of 50 for visualisation. It is clear from Figure 9 that the size of the difference vectors increases towards the right side of the drilling template; the direction of the difference vectors, however, is not as consistent.
Despite the success of the hole detection algorithm, biased returns are prevalent when the scanner laser beam interacts with the inside of the template holes, as seen in Figure 10 and Figure 11. The points captured inside a template hole do not describe the expected cylindrical shape. This is likely due to irregular reflections off the curved interior surface of the holes in the drilling template. When scanning from a high incidence angle, the incident laser beam can enter the hole, reflect off the interior wall, and continue downward rather than exiting the hole and returning to the scanner. Instead, the beam reflects multiple times until it hits either the bottom of the hole or some feature that reflects it back out again. This is sometimes referred to as optical 'rattle' and can cause path length extension.
| Resolution (distance between points) | Original (~0.3 mm) | 1 mm | 2 mm |
|---|---|---|---|
| Total points | 97795 | 38281 | 12114 |
| Points in plane | 40536 | 15605 | 3930 |
| Boundary points | 3432 | 2159 | 899 |
| Time (min) | 13.1 | 2.9 | 0.5 |
| Holes found | 12/12 | 12/12 | 6/12 |
| Mean difference (mm) | 0.160 | 0.238 | 1.538 |
| Delta x (mm) | 0.035 | -0.090 | -0.734 |
| Delta y (mm) | 0.028 | -0.006 | -0.725 |

Table 1: Sample results from two combined oblique scans
Figure 10: Hole signatures at various scan incidence angles
Figure 9: Circle centre discrepancies from two scans at opposing incidence angles; laser tracker reference in blue, laser scanner estimated in red

In addition to the curved shape, the brushed aluminium material likely impacted the returns from the drilling template. Figure 11 presents the surface of the drilling template (a) from which the holes were extracted in Table 1, as well as the returns from inside the holes in the template in Figure 11 (b), (c) and (d). These 'lobed' scan returns can be linked to the incidence angle of the scan location, as well as to the optical rattle inside the scanned template holes. This is evident in Figure 11 (a), where 'pinching' can be seen on either side of the holes extracted in close proximity to the planar surface of the drilling template.
Despite the systematic bias in the laser scanner measurements, the developed method was able to extract and compute the centres of the holes in the drilling template. Variations in scan range and incidence angle affected the result, however, the algorithm was consistently and reliably able to detect hole centres to the sub-millimetre level.
## 5 Conclusion
This work explored a precise registration method for point clouds captured by terrestrial laser scanners in manufacturing environments. Spherical targets were automatically extracted and used to register point clouds into a network of control points set out by a laser tracker and USMN adjustment. Once registered, these point clouds can be used to deduce valuable information on manufacturing progress and production. The registered point clouds were used to extract hole centres from drilling templates on the surface of an aircraft wing. Coverage, range and incidence angle have an impact on the quality of the captured scans. Manufacturing specifications and tolerances will determine the necessary scan setup and resolution.
Biased returns were recorded when scanning the drilling template from oblique incidence angles. These are likely due to the reflective nature of the aluminium interior surfaces of the drilling templates, as well as the depth and shape of the template holes. Additionally, it must be considered that template hole edges wear over time, and any deviation from circularity, even at the sub-millimetre level could affect the performance of the algorithm. Future work will include the exploration of optimal scanning angles, material properties of the drilling template and the use of scanners of differing metric capability.
isprs | REGISTRATION AND FEATURE EXTRACTION FROM TERRESTRIAL LASER SCANNER POINT CLOUDS FOR AEROSPACE MANUFACTURING | K. Pexman, S. Robson | https://doi.org/10.5194/isprs-archives-xlviii-2-w2-2022-119-2022 | 2022 | CC-BY
# 3D Modeling of the Archaic Amphoras of Ionia
[PERSON] \*

[PERSON] †

\* Istanbul Bilgi University, 34060 Eyup Istanbul, Turkey - [EMAIL_ADDRESS]

† Selçuk University, Konya, Turkey - [EMAIL_ADDRESS]
###### Abstract
Few other regions offer as rich a collection of amphoras as the cities of Ionia. Throughout history, the amphoras of these cities were spread all over the Mediterranean. Despite their common characteristics, the amphora-manufacturing cities of Ionia had their own distinctive styles that can be identified. They differed in details of shape and decoration. Each city produced an authentic type of amphora which served as its trademark and enabled its attribution to where it originated from. That is why amphoras provide important insight into the commerce of old ages and yield evidence of ancient sailing routes, profoundly enriching our knowledge of ancient trade. The following is based on finds of amphoras which originated from the Ionian cities of Chios, Clazomenai, Lesbos, Miletus, and Samos. Starting from city-specific forms, which offer interpretative advantages in provenance, this article surveys the salient features of the regional forms and styles of those Ionian cities. 3D modeling is utilized with the aim of bringing fresh glimpses of the investigated amphoras by showing how they originally looked. Due to their virtual indestructibility, these models offer interpretative advantages by enabling experimental testing of hypotheses upon the finds without risking them. The 3D models in the following sections were reconstructed from numerous fragments of necks, handles, body sherds and bases. They convey in color - unlike the monochrome drawings to which we were accustomed - the texture, decoration, tint and vitality of the amphoras of Ionia.
Amphoras, Archaic Period, Ionia, 3D Modeling

Footnote †: Corresponding author
## 1 Introduction
Over a period of three centuries, during the Archaic Period (800-480 BC), the Ionian cities were among the most prolific centers of amphora production. These centers not only produced large numbers of amphoras but also gave them different styles, which are of interest in revealing geographical and chronological traces. The streamlined figure of the amphora, with two handles alongside the neck and a pointed bottom, has been among the best recognised shapes in the world since the first quarter of the 7th century BC.
This shape was imposed by its functionality, and it was what gave the vessel its well known name. The name amphora is derived from the Greek word _amphoreus_ (ἀμφορεύς), which means a container with two carrying handles, one on each side. The handles and the pointed tip were there to allow robust carrying, easy pouring and decanting of the contents of the jar. Despite these common characteristics, the amphora-manufacturing cities of Ionia had their own distinctive styles that a trained eye can discern without much toil.
Most amphoras (excluding the grey vessels of Lesbos) shared the common fabric of reddish-brown clay and had the same specific outline in appearance. Notwithstanding these common features, they differed in details of shape and embellishment. These differences stemmed from local stylistic choices, as well as from incentives for distinction from the vessels of the other cities. Working clay into an amphora involved balancing various local values and priorities. Each center fabricated an authentic type of amphora which served as a trademark of its city and enabled its attribution to where it originated from.
This article surveys the salient features of the regional forms and styles of the Ionian cities of Chios, Clazomenai, Lesbos, Miletus, and Samos (Figure 1) in the archaic period. The work addresses analyses based on city-specific forms, which offer interpretative advantages in provenance. Understanding the details of these forms casts light on the distinguishing values and priorities of the producer community. The article then goes on to establish the geographical and chronological differences by modeling these amphoras using 3D visualization techniques. The 3D models, which we believe to be the most faithful reconstructions of the surveyed amphoras, are the outcome of multidisciplinary teamwork spanning the fields of underwater archaeology and computer graphics.
Firm establishment of these differences is important to nautical archaeology. Through the years, amphoras have littered the Aegean sea bottom. An amphora on the seabed signifies the identity of the home port of a wreck based on its typology. In addition, amphoras of known provenance are good indicators of who was trading with whom. Moreover, the physical robustness of amphoras makes them among the most consistently preserved objects in the archaeological record. Their widespread survival means we can hope to employ them as tracers of the elusive social and economic lives of ancient civilizations.
## 2 City-Specific Form and Style
### Significance of the City-Specific Form and Style
Thanks to modern underwater archaeological expeditions, amphoras have been discovered in large numbers not only on the coasts of Chios, Clazomenai, Lesbos, Miletus, and Samos, but all over the Mediterranean. As more amphora workshops became known, we gained ample evidence that these Ionian cities were the major amphora production centres over the span of the Archaic Period. As more amphoras were discovered and more distinct amphora types were isolated, the regional styles of these cities could be identified in detail, and truly specific relationships between the shape of the amphora and the fabricating city could be established with more confidence.
Studies of these amphoras usually lead to a direct connection between the shape of the amphora and its city of origin. The figure and style of the amphora were like the badge or trademark of the city. Amphoras were frequently illustrated on the coins of cities such as Chios and Samos. Their appearance on the coins of these cities may be taken as a measure of the civic interest in amphora production in these centers. The link between the city and the amphora type is further emphasized by the emergence of specific names given to amphoras in the Ionian region. Thus the names Chios, Clazomenai, Lesbos, Miletus, and Samos were used to refer to the particular jars which originated from these cities. These containers emerged at the beginning of the Archaic Period and reached their distinctive forms over the span of this era.
## 3 3D Modeling of Amphoras
The prodigious power of 3D modeling in helping us to reconstruct and visualize anew the artefacts that archaeologists have recovered has already been well recognized ([PERSON], [PERSON], 1997). In this article, 3D modeling is utilized with the aim of bringing fresh glimpses of the investigated amphoras by showing how they originally looked. Due to their virtual indestructibility, these models offer interpretative advantages by enabling experimental testing of hypotheses upon the finds without risking them.
The 3D models in the following sections were reconstructed from numerous fragments of necks, handles, body sherds and bases. They convey in color - unlike the monochrome drawings to which we were accustomed - the texture, decoration, tint and vitality of the amphoras of Ionia.
## 4 Amphoras of Ionia
Few other regions offer as rich a collection of amphoras as the cities of Ionia. Throughout history they have been spread all over the Mediterranean. The archaeological information that can be harnessed from this wealth is impressive. The following is based on finds from the cities of Chios, Clazomenai, Lesbos, Miletus, and Samos. Detailed information about the cited amphora pieces is provided in the Appendix.
### Chian and Clazomenian Amphoras
The first examples of a unique Ionian amphora type began to appear in the second half of the 7th century BC. They originated from the workshops of either Chios or Clazomenai. Throughout the archaic period, Chios and Clazomenai were the most prolific amphora production centres of Ionia. A number of vessels have been extracted from the sea bottom or unearthed which are dated to a span from the mid 7th century BC to the early 5th century BC. These vessels kept their salient typological and decorative characteristics over two centuries, which made them recognizable without toil. These characteristics, however, underwent some variation over time. The manufacturers in both cities seem to have spent effort creating amphora styles which helped identify the city they came from. Following the discovery of a unique amphora type on a Chian coin, [PERSON] ([PERSON], 1979) identified a group of ovoid-shaped clay jars as Chian amphoras at the beginning of the 1930s. This type is known for the ▲ motifs on the shoulder. Figure 2 shows the 3D reconstruction of a Chian amphora which is dated to 650 BC. It has a bulbous bellied body decorated with four horizontal glaze bands. Three of these bands stay close to each other and form a group just under the ▲ motif, whereas the fourth stands alone in the middle of the belly.
Figure 1: Ionian centers of amphora production in the Archaic Period
Figure 2: 3D model of a Chian amphora from 650 BC.
Its arched handles are attached to the body as well as to the squat neck. A circular glaze band encircles each junction. Most Chian amphoras of this period had glaze bands around their rims as well as along the neck-shoulder binding. In some cases there were even diagonal glaze bands or intersecting diagonal glaze bands. Figure 3 shows a group of Chian amphoras from 650-600 BC.
Clazomenai was an important port for wine trading. Clazomenian amphora manufacturers in the second half of the 7th century used the same decorative schemes as the Chians. Similar embellishment themes of ▲ motifs on the shoulder and horizontal glaze stripes around the rim, shoulder and belly were also implemented by Clazomenian masters (Figure 4). This made it difficult to separate one from the other. Most of the time, however, these stripes were broader than their Chian counterparts, which is why some authors referred to them as 'amphoras with broad bands' ([PERSON], 1960).
Owing to this close resemblance, the earlier pieces of recovered Clazomenian amphoras used to be attributed to Chios. However, [PERSON] isolated Clazomenai as a different center of amphora production ([PERSON], 1998). [PERSON]'s findings were reinforced by [PERSON]'s discoveries ([PERSON], 1991), and Clazomenai was conclusively established as a separate center of Ionian amphora production in the Archaic Period. In his book, [PERSON] ([PERSON], 2012) provided a catalogue of these amphoras.
Starting from the 6th century BC, Chian and Clazomenian amphoras began to drift away from each other, and products of the two cities could thereafter easily be told apart. The Clazomenians ceased using the ◆ figure on the shoulders of their vessels; from then on, the ◆ figure was used only by the Chians as an identification mark. Although the Chians kept the ◆ figure, they made other distinctive changes in both shape and decoration.
Squat necks and bulbous bellies were discontinued, and a new type of Chian amphora was created with a slender and tall body (Figure 6). In this new type, a round rim sits over a long neck which is flanked with stirrup amphora handles.
A Clazomenian product of the same period is shown in Figure 7. It exhibits a pleasing form. A round mushroom-shaped rim stands on a semi-long neck which flares out towards the shoulder. The body widens down the shoulder with a gentle slope and has a depression below the belly towards a short stem toe. The body is decorated with two pairs of dark red glaze bands. The one closest to the toe is wider than the other three.
Figure 4: 3D model of a Clazomenian amphora from 650 BC.
Figure 5: A group of Clazomenian amphoras
Figure 3: A group of Chian amphoras from 650-600 BC.
Figure 6: A Chian amphora with a slender and tall body from 575-550 BC.
### Lesbian Amphoras
Lesbos Island was another important center of amphora production in the archaic period. Amphora production started in Lesbos in the 7\({}^{\text{th}}\) century, contemporaneously with Chios and Clazomenai. However, contrary to the former two types, whose similarity caused confusion, Lesbian amphoras had an unusual feature which rendered them easily identifiable: unlike the Chian and Clazomenian examples, which were predominantly of warmer earth colors, the color of some Lesbian amphoras was grey. Since no examples of red Lesbian amphoras were encountered at the beginning, one can presume that amphora production in Lesbos started with grey amphoras.
The earliest red Lesbian amphoras did not appear before the last quarter of the 7\({}^{\text{th}}\) century BC. From the 6\({}^{\text{th}}\) century onwards, both grey and red Lesbian amphoras followed the same lines of development in shape. Red Lesbian amphoras ceased at the end of the 5\({}^{\text{th}}\) century BC; grey Lesbian amphoras, however, continued until the 3\({}^{\text{rd}}\) century BC. Figure 8 depicts a red Lesbian amphora from an epoch between the last quarter of the 6\({}^{\text{th}}\) century and the first quarter of the 5\({}^{\text{th}}\) century.
Apart from its red color, it displays the other distinguishing features of a Lesbian amphora: a distinct shape in the form of the Greek letter 'phi', a bulged neck, cylindrical handles with a "rat tail" relief line, and a bulbous belly which ends in an elongated toe. The lower body narrows steeply towards the toe.
The cane-shaped handles seen in this figure are slightly pushed towards the neck before being attached to the shoulders. The "rat tail" relief starts immediately above this attachment point. Another distinguishing feature of this amphora is its rather tall neck.
The grey amphora, which entered the commercial scene as of the middle of the 7\({}^{\text{th}}\) century, continued to dominate the markets until the middle of the 5\({}^{\text{th}}\) century.
### Milesian Amphoras
Miletus was one of the leading centers of Ionia. Thanks to its privileged location, it was an important center of commerce in the archaic period. The city was wealthy; its prosperity was primarily due to agricultural products, with oil and wine production playing a profound role. As evidenced by the widespread Milesian amphoras in the Eastern Mediterranean, Miletus was also a leading pottery manufacturer and an important hub of amphora production.
Like the amphora producers of the other Ionian cities, the producers of Miletus also strove to create a distinct amphora form which would make the vessel easy to identify as Milesian.
Excavations had revealed a number of amphoras which resembled those from the island of Samos, characterized by the distinguishing feature of thickened rim lips. [PERSON] ([PERSON], 1971) isolated this type as 'Milesian'. Laboratory analyses confirmed that [PERSON] was right: the Milesian amphora indeed had a shape of its own, which evolved separately from the Samian shape over a long period. This shape is above all characterized by a thick lip surrounding the rim, and ridges on the transition from rim to neck. In the majority of the Milesian examples there is only one ridge; however, examples with two or three ridges have also been found. In addition, there is a ridge or groove at the neck-shoulder transition. There are also Milesian examples with decorative dark red stripes (Figure 10).
Figure 8: A red Lesbian Amphora which is dated between the last quarter of the 6\({}^{\text{th}}\) century and the 1\({}^{\text{st}}\) quarter of the 5\({}^{\text{th}}\) century.
Figure 7: A Clazomenian amphora from the same period
Figure 9: Two red and two grey Lesbian amphoras from the period 7\({}^{\text{th}}\)-5\({}^{\text{th}}\) century BC
### Samian Amphoras
We owe the first data about Samian amphoras to [PERSON] ([PERSON], 1886, 1888, 1909) and [PERSON] ([PERSON]). However, credit also goes to [PERSON] ([PERSON], 1960) for the identification of 'Samian' as a separate type. She isolated this type, with a protruding cornice rim, cylindrical neck, ovoid body and bevelled ring foot, as the 'Samian amphora'. [PERSON] ([PERSON], 1971) published a comprehensive study of Samian amphoras. The typological classification put forward in [PERSON]'s work still enjoys universal acceptance. Figure 12 shows a Samian amphora from 550-500 BC.
A tendency towards a narrower and taller type can be seen here compared to the earlier types which are shown in Figure 13.
## 5 Conclusion
Numerous amphoras have been found off the coasts of Chios, Clazomenai, Lesbos, Miletus, and Samos, providing important insight into ancient maritime activity in this region and furnishing evidence of ancient sailing routes. Owing to these finds, our knowledge of ancient maritime trade and harbors has been conspicuously enriched. The crucial clues they provide are not limited to maritime trade: the shape, size and weight of the discovered amphoras yield attributes of the vessels as well. They also have stories to tell about the shipwrecks.
Additionally, there was copious material in this region for tracing the development of amphoras from the viewpoint of craftsmanship. The finds make it possible to follow the evolution in the shapes of many classes of amphoras.
The underwater cultural heritage of the waters in this part of the world is facing new risks from industry, tourism and some SCUBA diving activities, as in many other countries. Recognizing the urgent need to preserve and protect such heritage, creating scientific records of the cultural values of these depths becomes more and more important. The possibilities increasingly offered by advances in 3D modelling are opening new windows for the creation and preservation of data pertaining to ancient maritime activity. As technological progress strides on, we are able to reconstruct and visualize the amphoras in their original appearance.
In this work, 3D models of the archaic amphoras of Ionia, which reproduce their original appearance as objects of archaic maritime life, are presented. The presented examples include 3D models of amphoras from Chios, Clazomenai, Lesbos, Miletus, and Samos, dating from 650 BC to 450 BC. These results show that 3D modelling is a much more complete methodology for reconstructing and recording the past than the traditional methodologies of photography and drawing.
## Acknowledgements
The authors gratefully acknowledge the support of the Izmir Archaeology Museum and the Sadberk Hanim Museum. Thanks are also due to [PERSON] for his contribution to the 3D graphics.

Figure 10: A Milesian amphora from the 6\({}^{\text{th}}\) century BC.

Figure 11: Milesian amphoras from the 7\({}^{\text{th}}\)-5\({}^{\text{th}}\) century BC.

Figure 12: A Samian amphora from 550-500 BC.

Figure 13: A group of Samian amphoras from the 7\({}^{\text{th}}\)-5\({}^{\text{th}}\) century BC.
## References
* [PERSON] (2009) [PERSON], 2009. Drei Typen archaischer Reifenamphoren aus Milet, _AA_ 2009/1, 121-134.
* [PERSON] (1889) [PERSON], 1889. _Aus Ionischen und Italischen Nekropolen_, Leipzig.
* [PERSON] (1991) [PERSON], 1991, The Pre-Classical Wreck at Campese Bay, Island of Giglio, Second Interim Report 1983 Season, [PERSON] (ed.), _Studi e Materiali.Scienza dell' antichita in Toscana_, Vol. VI, [PERSON], Roma, 181-198.
* [PERSON] (1938) [PERSON],1938, A Well of the Black-Figured Period at Corinth, _Hesperia_ 7-4, 557-611.
* [PERSON] (1998) [PERSON], [PERSON] 1998. _East Greek Pottery_, London/ New York.
* [PERSON] (1988) [PERSON], 1988. _Klazomenai Kazisindaki Arkaik Donem Ticari Amforalar_, Ph. D. Thesis, Izmir.
* [PERSON] (1991) [PERSON], 1991. Antik Cagda Amforalar, Izmir.
* [PERSON] (1982) [PERSON], 1982. Amphores commerciales archaiques de la Grece de l'Est, _PP_ 37, 193-209.
* [PERSON] (1997) [PERSON], [PERSON], 1997. _Virtual Archaeology: Re-creating Ancient Worlds_, Harry N. Abrams Inc., Spain.
* [PERSON] (2007) [PERSON], Pottery from various parts of Cyprus, Greek Geometric and Archaic Pottery Found in Cyprus, Stockholm, 23-60.
* [PERSON] (1971) [PERSON], 1971. [PERSON], _Hesperia_ 40,
* [PERSON] (1979) [PERSON], 1979, _Amphoras and the Wine Trade_, Excavations of the Athenian Agora, Picture Book, No.6, Revised Edition, American School of Classical Studies at Athens.
* [PERSON] et al (2007) [PERSON] et al, 2007, Excavations at Azorio, 2003-2004, Part I, The archaic civic complex, _Hesperia_ 76, 243-321.
* [PERSON] (1995) [PERSON], 1995, Greek Imports of Archaic and Classical Times in Colchis, _AA_, 63-73.
* [PERSON] (1962) [PERSON], 1962, Chronique des fouilles et decouvertes archeologiques a Chypre en 1961, _BCH_ 86, 327-414.
* [PERSON] (1970) [PERSON], 1970, Chronique des fouilles et decouvertes archeologiques a Chypre en 1968, _BCH_ 93, 431-569.
* [PERSON] (1980) [PERSON], 1980. Porselenie Nadlimanskoe II na beregu Dnestrovskogo limana, Issledovania po antichnoi arecheologii yugo- Zapada Ukrainskoi SSR:sbornik nauchnykh trudov, Kiev, 5-23.
* [PERSON] (1886) [PERSON], 1886. _Naukratis I_, London.
* [PERSON] (1986) [PERSON], [PERSON], 1986. The Stoa gutter well, a late archaic deposit in the Athenian Agora, _Hesperia_ 55, 1-72.
* [PERSON] (1990) [PERSON], 1990. _Complessi tombali dall'Etruria meridionale. Le anfore da trasporto e il commercio etrusco arcaico I_, Rome.
* [PERSON] (2004) [PERSON], 2004, The archaic cemetery of the Clazomenian Colony of Abdera, Klazomenai, Teos and Abdera-Metropolis and Colony. _Proceedings of the International Symposium held at the Archaeological Museum of Abdera_, 2001,Thessaloniki, 249-259.
* [PERSON] (2012) [PERSON], 2012. Arkaik Donem Ionia Uretimi Ticari Amforalar, Ege Yayinlari.
* [PERSON] (2008) [PERSON], 2008. Greek Colonisation of the Northern Aegean, [PERSON]. (ed.) _Greek Colonization: An Account of Greek Colonies and Other Settlements Overseas_, Vol. 2, Mnemosyne, Supplementa 193, Boston, 1-54.
* [PERSON] (1960) [PERSON], 1960. Keramicheskaia tara Bosfora, MIA 83.
## Appendix
Data about each of the Ionian amphoras included in this paper are provided below; dimensions are given in the metric system.
### A.1 Chian Amphoras
**Chi 1 (Fig. 2)**
Diameter of rim: 13.9 cm
Height: 60.0 cm
Height of neck: 9.8 cm
Diameter of belly: 39.0 cm
Depth of foot: 0.5 cm
Date: 650 BC
Find site: Cerveteri, Banditaccia Necropolis
Source: [PERSON]
**Chi 2 (Fig. 6)**
Diameter of rim: 11.8 cm
Height: 66.0 cm
Height of neck: 15.0 cm
Diameter of belly: 24.9 cm
Date: 575-550 BC
Find site: Marion, Chrysochou
Source: [PERSON]
**Chi 3 (Fig. 3)**
Diameter of rim: 11.8 cm
Height: 70.0 cm
Height of neck: 14.0 cm
Diameter of belly: 32.9 cm
Diameter of belly: 14.2 cm
Date: Late 7th-Early 6th century BC
Location: [PERSON] collection, Cyprus
Source: [PERSON]
**Chi 4 (Fig. 3)**
Diameter of rim: 13.0 cm
Height: 70.0 cm
Height of neck: 13.3 cm
Diameter of belly: 36.4 cm
Date: Late 7th-Early 6th century BC
Find Site: Bayrakli, Izmir
Source: Izmir Archaeology Museum

**Chi 5 (Fig. 3)**
Diameter of rim: 40.0 cm
Date: 650-620 BC
Height of neck: 14.3 cm
Diameter of belly: 25.8 cm
Date: 575-550 BC
Location: Marion
Source: [PERSON]

**Chi 6 (Fig. 3)**
Diameter of rim: 12.2 cm
Height: 76.5 cm
Height of neck: 14.9 cm
Diameter of belly: 31.0 cm
Date: 490 BC
Find Site: Corinth
Source: [PERSON]

### A.2 Clazomenian Amphoras

**Cla 1**
Depth of foot: 0.5 cm
Date: 650 BC
Find Site: Abdera Necropolis
Source: [PERSON] and [PERSON], 2008

**Cla 2 (Fig. 5)**
Diameter of rim: 17.0 cm
Height: 64.5 cm
Height of neck: 15.0 cm
Diameter of belly: 41.5 cm
Depth of foot: 0.5 cm
Date: 650-620 BC
Find Site: Clazomenai, Kalabak II Necropolis
Source: [PERSON], 1988

**Cla 3 (Fig. 5)**
Diameter of rim: 18.5 cm
Height: 62.5 cm
Height of neck: 14.5 cm
Diameter of belly: 44.0 cm
Date: 650-620 BC
Find Site: Clazomenai, Kalabak II Necropolis
Source: [PERSON], 1988

**Cla 4 (Fig. 5)**
Diameter of rim: 12.8 cm
Height: 67.0 cm
Height of neck: 9.3 cm
Diameter of belly: 40.0 cm
Depth of foot: 2.4 cm
Width of stripe: 2.5 cm
Date: 525-500 BC
Find Site: Clazomenai, Yildiztepe Necropolis
Source: [PERSON], 1988

**Cla 5 (Fig. 7)**
10.0 cm
Diameter of rim: 11.5 cm
Height: 62.5 cm
Height of neck: 14.5 cm
Diameter of belly: 38.5 cm
Date: 520-480 BC
Find Site: Athenian Agora, Attalos Stoa
Source: Roberts and Glock, 1986

**Cla 6 (Fig. 5)**
Diameter of rim: 12.5 cm
Height: 61.0 cm
Height of neck: 9.7 cm
Diameter of belly: 34.5 cm
Depth of foot: 2.8 cm
Date: 550-490/480 BC
Find Site: Salamis
Source: [PERSON]

### A.3 Lesbian Amphoras

**Les 1 (Fig. 8)**
Diameter of rim: 9.0 cm
Height: 47.0 cm
Height of neck: 14.5 cm
Diameter of belly: 22.1 cm
Color: Red
Date: Late 6th century-Early 5th century BC
Find site: Nadlimanskoe
Source: [PERSON]

**Les 2**
Height: 47.0 cm
Height of neck: 10.3 cm
Diameter of belly: 26.7 cm
Color: Red
Date: 5th century BC
Find site: Pichvnari Necropolis
Source: [PERSON], 1995

**Les 3 (Fig. 9)**
Diameter of rim: 11.5 cm
Height: 62.5 cm
Height of neck: 14.9 cm
Diameter of belly: 38.5 cm
Depth of foot: 0.5 cm
Color: Grey
Date: 550-500 BC
Find site: Unknown
Source: Sadberk Hanim Museum

**Les 4 (Fig. 9)**
Diameter of rim: 12.0 cm
Height: 66.7 cm
Height of neck: 13.5 cm
Diameter of belly: 41.2 cm
Depth of foot: 1.5 cm
Date: 600-550 BC
Find site: Bayrakli, Izmir
Source: Izmir Archaeology Museum
3D Modeling of the Archaic Amphoras of Ionia. A. Denker, H. Oniz. ISPRS Archives, 2015. https://doi.org/10.5194/isprsarchives-xl-5-w5-85-2015 (CC-BY).
Exploitation of Digital Surface Models Generated from WorldView-2 Data for SAR Simulation Techniques
[PERSON]\({}^{*}\), [PERSON]\({}^{\flat}\), [PERSON]\({}^{\flat}\)
Institute of Photogrammetry and Remote Sensing, Karlsruhe Institute of Technology (KIT), Englerstr. 7, 76131 Karlsruhe, Germany -[EMAIL_ADDRESS]
\({}^{\flat}\)Remote Sensing Technology Institute, German Aerospace Center (DLR), 82234 Oberpfaffenhofen, Germany -(stefan.auer, pablo.angelo)@dlr.de
###### Abstract
GeoRaySAR, an automated SAR simulator developed at DLR, identifies buildings in high resolution SAR data by utilizing geometric knowledge extracted from digital surface models (DSMs). Hitherto, the simulator has utilized DSMs generated from LiDAR data from airborne sensors with pre-filtered vegetation. Discarding the need for pre-optimized model input, DSMs generated from high resolution optical data (acquired with WorldView-2) are used for the extraction of building-related SAR image parts in this work. An automatic preprocessing of the DSMs has been developed for separating buildings from elevated vegetation (trees, bushes) and reducing the noise level. Based on that, automated simulations are triggered considering the properties of real SAR images.
Locations in three cities, Munich, London and Istanbul, were chosen as study areas to determine the advantages and limitations of WorldView-2 DSMs as input for GeoRaySAR. Beyond that, the impact of DSM quality on building extraction is evaluated, as well as the use of a building DSM, a DSM containing only buildings. The results indicate that building extents can be detected with DSMs from optical satellite data with varying success, depending on the quality of the DSM as well as on the SAR imaging perspective.
Footnote †: Corresponding author
## 1 Introduction
One of the strengths of using SAR images as a source of information relates to near-real-time applications in the context of unexpected events, e.g. earthquakes, due to the independence from weather conditions and the time of day. However, the interpretation of scenes covering urban areas acquired with SAR is often a challenging task due to geometric distortion effects inherent to the imaging concept.
Various simulators have been developed to ease the interpretation of SAR images of urban areas, e.g. by taking into account the electromagnetic and geometrical properties of buildings ([PERSON] et al., 2008) or by utilizing ray tracing ([PERSON] and [PERSON], 2011). GeoRaySAR, a simulator of the latter type developed at DLR, enables the identification of buildings in high resolution SAR data. To this end, prior knowledge about the scene geometry has to be extracted for the automated prediction of building extents. The knowledge can be acquired either from 3D GIS models ([PERSON] and [PERSON], 2015) or from DSMs ([PERSON] et al., 2014). In ([PERSON] et al., 2014), prior knowledge was derived from DSMs based on LiDAR data (airborne sensor) that contained only man-made structures (i.e. vegetation had been pre-filtered). However, more realistic scenarios would expect DSMs based on satellite data without pre-filtered vegetation (e.g. with the support of cadastre information).
As part of GeoRaySAR, geometric knowledge is extracted from a DSM, which is decomposed into a digital terrain model (DTM) and a normalized DSM (nDSM) containing the elevated scene objects. Using the height models, GeoRaySAR simulates separate layers representing different signal reflection types: single reflections, double reflections, and a combination of both. For standard scenarios, optical data from satellites such as WorldView-2 is easier to access than LiDAR data acquired from airborne sensors. Acquisition of optical data from satellites requires less planning and less manpower, and is considerably cheaper than the acquisition of LiDAR data from airborne sensors. Therefore, extending the applicability of GeoRaySAR to DSMs from high-resolution optical data is a crucial step toward realistic applications.
Studies on the generation of DSMs from optical stereo images acquired with spaceborne sensors are presented in e.g. ([PERSON] and [PERSON], 2006) and ([PERSON] et al., 2005). These studies indicate that DSMs generated from spaceborne sensors can have a height accuracy of 1 to 3 m, depending on the structure of the study area. Based on image matching with emphasis on tri-stereo data ([PERSON] et al., 2013), DSMs have been generated from high resolution WorldView-2 stereo images for urban scenes, with few height outliers. ([PERSON] et al., 2013) confirms that the geometric quality of DSMs generated from WorldView-2 data can be used for automated SAR image simulation and, based on that, for the interpretation of urban scenes.
This paper presents an extended approach for the interpretation of urban areas with GeoRaySAR, using an automatic preprocessing chain. The main objective of the paper is to preprocess DSMs generated from WorldView-2, a high resolution optical satellite, for the identification of buildings in SAR images (acquired with the satellite TerraSAR-X). These DSMs, in comparison to the previously used manually filtered DSMs, need to be preprocessed in terms of noise reduction and removal of vegetation. For evaluating the preprocessing and determining the limitations of GeoRaySAR, study areas comprising diverse building types and densities were chosen.
The remainder of the paper is structured as follows: section 2 describes the method for processing the DSM to provide the input models for GeoRaySAR; the chosen study areas and the used data are introduced in section 3; results of the processing chain are shown and discussed in section 4, followed by the conclusion in section 5.
## 2 Methodology
GeoRaySAR requires as input one SAR image meta file, including the parameters related to the image acquisition, and one DSM. It produces as output three layers, which represent the direct signal response, the signal double reflection, and the combination of both in one layer (higher reflection levels are deactivated due to the limited level of detail of the DSM). The ray tracing procedure, which relies on surface models triangulated from the DSM pixels, can be conducted for different input models derived from the DSM, which provides the opportunity to separate buildings and elevated vegetation.
The DSMs used in this study have been generated by utilizing the method described in ([PERSON], 2014), a modified version of the semi global matching approach (SGM) ([PERSON], 2008). Since the main objective in this study is to predict building extents, elevated vegetation is excluded from the input to GeoRaySAR. This was done with an automatic chain for processing the DSMs generated from optical images by SGM.
The automatic preprocessing of the DSM consists of two steps, elevated vegetation filtering and noise reduction, and requires an orthorectified optical image and a DSM as input. The DSM and the optical image, which is preferably from the same sensor, have to overlap each other, i.e. they have to cover the same scene and share the same spatial resolution and size in terms of number of pixels. A tiling process has been developed as an additional step for the processing of extended scenes. In sum, the steps highlighted in blue in Figure 1 have been developed in the presented work.
### Preprocessing
Elevated vegetation is separated from buildings in order to predict building extents in SAR images. If left untouched in the DSM, the extent of elevated vegetation would be detected during the ray tracing, since GeoRaySAR simulates the extents of all objects in the DSM. Hence, by filtering out the elevated vegetation, the DSM scene is cleared, with only buildings and ground parts (non-vegetated, vegetated) remaining.
Identifying elevated vegetation is done by calculating the normalized difference vegetation index (NDVI) and utilizing a nDSM to detect vegetation taller than a height threshold (see chapters 2.1.2 and 2.1.3). Grass is not separated from the DSM, as it is considered part of the ground surface. To reduce the noise in the binary mask, morphological operations are applied (see chapter 2.1.4). Elevated vegetation is separated from buildings in the DSM by using the binary mask, and new height values are assigned to the affected pixels from the given DTM.
Detailed descriptions of the processing steps are presented in the following subsections (see Figure 2 for an overview, exemplifying the procedure for the Alte Pinakothek in Munich). The main objective is to connect geometric information (raw DSM) with the procedure of GeoRaySAR, while preparing the input model in an unsupervised manner.
#### 2.1.1 Fuzzy Classification Using NDVI
The well-known NDVI, derived from combining the red and near-infrared bands, is often used to detect healthy vegetation. The idea is adopted in this work by calculating the NDVI from either the combination of the red and red-edge bands or the red and infrared bands (depending on availability). The choice of the band combination can be changed depending on the chosen sensor. Equation 1 expresses the utilized band combination, with RE being the red-edge band and R the red band.
\[NDVI=\frac{RE-R}{RE+R} \tag{1}\]
Rule-based fuzzy classification is utilized to classify the pixels into vegetation, using the same approach as in ([PERSON] et al., 2012); see Equation 2. Element x represents the NDVI value, c the lower threshold and d the upper threshold. The classification results in values ranging from 0 to 1, representing the certainty that a pixel is vegetation. The thresholds used are c = 0.2 and d = 0.4, chosen empirically after comparing classification results obtained with different thresholds. A layer containing the probability of a pixel being vegetation is produced following this concept.
\[\text{Fuzzy }x_{c,d}=\begin{cases}0&\text{if }x<c\\ \frac{x-c}{d-c}&\text{if }c\leq x\leq d\\ 1&\text{if }x>d\end{cases} \tag{2}\]
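As an illustration (a sketch, not the authors' implementation; the function names are ours), Equations 1 and 2 can be written compactly with NumPy:

```python
import numpy as np

def ndvi(red_edge, red):
    # Equation 1: NDVI from the red-edge (RE) and red (R) bands.
    return (red_edge - red) / (red_edge + red)

def fuzzy_vegetation(x, c=0.2, d=0.4):
    # Equation 2: ramp membership between the lower (c) and upper (d)
    # NDVI thresholds; 0 below c, 1 above d, linear in between.
    return np.clip((x - c) / (d - c), 0.0, 1.0)
```

Applied per pixel, `fuzzy_vegetation(ndvi(RE, R))` yields the probability layer described above.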
#### 2.1.2 DTM and nDSM Generation
To keep non-elevated vegetation in the DSM, a nDSM had to be created. The reason to
Figure 1: Work flow of the simulation chain, with the preprocessing and the tiling marked in blue.
let grass remain in the DSM is to preserve ground information to enable the interpretation of ground parts. In this study grass was assumed to be vegetation with a maximum height of 20 cm. In contrast, elevated vegetation such as trees and bushes are excluded from the DSM and stored in an individual nDSM model.
The method of ([PERSON] et al., 2011) is used to generate the DTM, utilizing gray-scale reconstruction, an iterative morphological transformation that classifies the pixels into ground or non-ground. Padding is added to ease the identification of object boundaries close to the border of the image; the padding is set to three pixels wide and assigned the median DSM height of the full scene. To remove gaps in the resulting DTM, interpolation based on Delaunay triangulation is utilized, as mentioned in ([PERSON] et al., 2011). After the removal of the padding pixels, the remaining gaps in the DTM are filled with values derived from a multilevel B-spline interpolation. By subtracting the interpolated DTM from the DSM, the nDSM is derived. Finally, the non-zero elements of the nDSM (relative heights) are assigned the original DSM heights (absolute heights).
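The DSM-to-DTM/nDSM decomposition can be sketched as follows. Note that this is a simplified stand-in: for brevity it substitutes a morphological opening for the iterative gray-scale reconstruction and interpolation used in the paper, and the window size is an assumed parameter.

```python
import numpy as np
from scipy.ndimage import grey_opening

def dtm_and_ndsm(dsm, window=31):
    # Opening with a window larger than the biggest object footprint
    # suppresses elevated objects and approximates the bare-earth DTM.
    dtm = grey_opening(dsm, size=(window, window))
    # The nDSM holds the relative heights of the elevated objects.
    ndsm = dsm - dtm
    return dtm, ndsm
```

In the paper, the nDSM values are additionally mapped back to absolute DSM heights for the non-zero pixels; the sketch keeps the relative heights, which is what the vegetation threshold in Equation 3 operates on.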
#### 2.1.3 Separation of Elevated Vegetation
By combining the information gained from the nDSM and the fuzzy classification, a binary mask is derived, assigning elevated vegetation a pixel value of 0 and 1 otherwise. This is done using the classification method described in ([PERSON] et al., 2012), but with a different minimum height. Pixels that have a certainty of at least 50% of being vegetation and are taller than the minimum height are classified as elevated vegetation, as seen in Equation 3. Pixels that are considered elevated vegetation (v in Equation 3) receive a pixel value of 0.
\[\text{Mask}=\begin{cases}0&\text{if }v\geq 50\%\text{ and }nDSM\geq 0.2\text{ m}\\ 1&\text{otherwise}\end{cases} \tag{3}\]
#### 2.1.4 Morphological Operations, Gap Filling and Noise Reduction
For reducing the noise in the binary mask, two morphological operations, closing and opening, are used. Closing consists of dilation, here expanding the areas containing non-vegetation, followed by erosion, contracting the areas again; areas smaller than a fixed structure size are thereby filled in with the pixel value 1. Opening consists of erosion followed by dilation, which removes areas smaller than a fixed structure size. To reduce the noise over larger areas, the window for the morphological operations is set to nine by nine pixels.
DSM pixels with discarded elevated vegetation, i.e. pixels with a value of 0 in the binary mask, receive new height values from the corresponding DTM pixels. A median filter with a window size of nine by nine pixels is used for noise reduction. The decision for median filtering was driven by the need to retain the building shapes.
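The mask of Equation 3, the 9 x 9 closing/opening, the height replacement from the DTM, and the median filtering can be sketched together as follows (a simplified stand-in using SciPy's default border handling, not the authors' code):

```python
import numpy as np
from scipy.ndimage import binary_closing, binary_opening, median_filter

def remove_elevated_vegetation(dsm, dtm, ndsm, veg_prob, min_height=0.2):
    # Equation 3: pixels that are at least 50 % likely to be vegetation
    # and taller than min_height are elevated vegetation (mask value 0).
    mask = ~((veg_prob >= 0.5) & (ndsm >= min_height))
    # Closing then opening with a 9x9 window reduces noise in the mask.
    struct = np.ones((9, 9), dtype=bool)
    mask = binary_opening(binary_closing(mask, structure=struct),
                          structure=struct)
    # Replace vegetation heights with terrain heights from the DTM,
    # then apply a 9x9 median filter that retains building shapes.
    cleaned = np.where(mask, dsm, dtm)
    return median_filter(cleaned, size=9), mask
```

A production version would pad the arrays before the morphology to avoid border erosion, which the paper handles with its explicit padding step.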
#### 2.1.5 Building DSM
Based on the pre-filtered DSM and DTM, a building DSM is derived. The corresponding DSM pixels are identified by analyzing the difference between the preprocessed DSM and the DTM (value \(>\) 0). The noise in the resulting binary mask is reduced by morphological operations (opening with a 9x9 window, dilation with a 3x3 window). The final mask is used to extract the building-related pixels from the pre-filtered DSM. In the context of ray tracing, only the remaining DSM pixels are used for triangulation to describe the scene geometry. Accordingly, SAR signal responses are only derived for building bodies.
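A sketch of the building-DSM extraction with the stated window sizes (9x9 opening, 3x3 dilation); the no-data value is our assumption, not taken from the paper:

```python
import numpy as np
from scipy.ndimage import binary_opening, binary_dilation

def building_dsm(pre_dsm, dtm, nodata=-9999.0):
    # Pixels above the terrain are building candidates (value > 0).
    mask = (pre_dsm - dtm) > 0
    # Opening (9x9) removes small noisy patches; dilation (3x3)
    # slightly restores the building outlines around the opened mask.
    mask = binary_opening(mask, structure=np.ones((9, 9), dtype=bool))
    mask = binary_dilation(mask, structure=np.ones((3, 3), dtype=bool))
    # Keep only building-related heights; everything else gets no-data,
    # so that only building pixels are triangulated for ray tracing.
    return np.where(mask, pre_dsm, nodata)
```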
### Tiling
The local incidence angle is assumed to be constant during the ray tracing, which does not correspond to reality. Moreover, when dealing with larger scenes, the image simulation requires a considerable amount of computer memory. To overcome these aspects, a tiling procedure is implemented, which introduces spatial sampling of the local signal incidence angle. The splitting of the DSM is based on the distance along the longest axis, which reduces the deviation from the true signal incidence angle. The threshold for splitting the DSM is set to 1200 m for the case studies presented below. Each tile is processed by the simulation chain, followed by merging of the tiles to derive the full scene. The resulting tiles may vary in size due to a variation of the maximum heights in the tiles. To cope with this, the maximum and minimum coordinates along the two image axes among the tiles are identified, followed by the calculation of the size of the merged image. Then, an empty image layer is created and the pixel values are retrieved from the simulated tiles. The maximum intensity value is chosen in overlapping areas to keep the emphasis on the building appearance in the image.

Figure 2: Preprocessing steps, exemplified for the Alte Pinakothek in Munich, Germany. A DSM generated from WorldView-2 data (a) and an optical image from WorldView-2 (b) are used as input to the preprocessing chain. The NDVI (c) is calculated for the fuzzy classification (d). A DTM (e) and a nDSM (f) are generated. A binary mask (g) is determined to separate elevated vegetation. Noise in the mask is reduced by the morphological operators closing (h) and opening (i). Image (j) displays the replacement of the height values and (k) the final DSM after noise reduction.
The spatial distance is used for splitting, inherently sampling the signal incidence angle. If the tiling were based on the change of the incidence angle, one study area would be split into a different number of tiles depending on the sensor perspective. At steeper angles, the angle difference increases faster, i.e. SAR images taken at steeper angles would be split into more tiles. The local scenes would then differ unnecessarily, which would hamper the comparison of simulated scenes with remarkable incidence angle differences.
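The tiling logic can be sketched as follows; equal-length tiles bounded by the 1200 m threshold and maximum-intensity merging are our reading of the description, not the authors' code:

```python
import numpy as np

def split_extent(length_m, max_tile_m=1200.0):
    # Number of tiles needed along the longest axis, and their bounds;
    # equal-length tiles keep every tile below the splitting threshold.
    n = int(np.ceil(length_m / max_tile_m))
    edges = np.linspace(0.0, length_m, n + 1)
    return list(zip(edges[:-1], edges[1:]))

def merge_tiles(canvas, tile, offset_row, offset_col):
    # Paste a simulated tile into the full-scene layer; overlapping
    # pixels keep the maximum intensity to emphasise building responses.
    r, c = offset_row, offset_col
    h, w = tile.shape
    canvas[r:r+h, c:c+w] = np.maximum(canvas[r:r+h, c:c+w], tile)
    return canvas
```

For a 3000 m scene, `split_extent` yields three 1000 m tiles, each below the 1200 m threshold.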
## 3 Study Areas and Data
The study areas consist of selected locations in the cities of Munich, London and Istanbul. The three cities contain a variety of built-up areas, such as densely packed quarters, narrow streets surrounded by tall buildings, and buildings of numerous sizes and types. This is of interest, since such a variation of study scenes has not been used as input to GeoRaySAR before.
Data from WorldView-2 is used for the separation of elevated vegetation and for the generation of the DSMs. The SAR images are acquired with TerraSAR-X; the image meta files are used for extracting the sensor and image parameters. A DSM generated from LiDAR data for the scene of Munich is utilized for comparison with the preprocessed DSM based on WorldView-2, as seen in ([PERSON] et al., 2014). The LiDAR data was acquired in April 2003, with a vertical resolution of 0.1 m and a horizontal resolution of 1 m.
The images acquired from WorldView-2 were delivered with a resolution of 0.5 m for the panchromatic images and 2 m for the multi-spectral images. The DSMs had a resolution of 0.5 m and are generated prior to this study by using the modified version of SGM ([PERSON], 2014), matching several panchromatic images captured at different viewing angles from the same orbit pass. Four images are used for DSM generation for Munich, five for London and three for Istanbul. The images were acquired 12 th of July 2010 for Munich, 22 nd of October 2011 for London and 15 th of July 2015 for Istanbul. The optical images used for the fuzzy classification were orthorectified.
The TerraSAR-X images were captured in high resolution spotlight mode with a spatial resolution of 0.6 m in range and 1.1 m in azimuth (pixel spacing along both axes: 0.5 m). Table 1 provides an overview of information on the TerraSAR-X images.
## 4 Results and Discussion
### Signal reflections
Using the preprocessed DSMs and SAR image meta data as input, SAR image layers were generated. Figure 3 shows the resulting images for the urban scene in Munich, simulated with the DSM generated from LiDAR data and the DSM generated from optical data. The scene covers the Viktualienmarkt, located in central Munich, which contains smaller buildings, such as food stalls and shops, and bigger buildings in the surrounding area.
As seen in Figure 3a, the extent of smaller stalls and shops can be seen in the center of the image, which is more difficult to detect in Subfigure 3b. This is due to the quality of the DSM, since it is being generated from optical data and processed. However, the extent of the bigger buildings can be clearly seen in both simulation results.
Figure 4 shows the result for the site in the center of London, located close to the Southwark subway station. In comparison to the scene in Munich, the area contains many residential buildings with rectangular shapes.
The simulated SAR images can be seen in Figures 4a and 4b, corresponding to the two TerraSAR-X acquisitions. The image pixels appear brighter in 4b compared to 4a. This is caused by the smaller signal incidence angle, which leads to stronger diffuse signal responses from ground parts in comparison to the response of building walls, and which maps building heights to larger layover areas (intensity is scaled to 8-bit gray values). The steeper signal incidence angle also leads to layover effects between nearby buildings, as changes in height are mapped to bigger range intervals.
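The dependence on incidence angle follows from the standard SAR imaging geometry; the relation below is added here for clarity and is not stated explicitly in the text. For a vertical wall of height \(h\) imaged at local incidence angle \(\theta\), the layover extent is approximately

\[\Delta r_{\mathrm{slant}} = h\cos\theta, \qquad \Delta x_{\mathrm{ground}} = \frac{h}{\tan\theta},\]

in slant and ground range, respectively. Both terms grow as \(\theta\) decreases, so steeper acquisitions map the same height change to bigger range intervals.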
Figure 5 shows the simulation results and related satellite images for the selected site in Istanbul, which contains the north-western parts of the district Fatih. In contrast to the sites in London and Munich, this site is densely packed with small buildings on a hill. Tiling was utilized for the scene of Istanbul due to its size. This can be seen in the north-eastern parts of Figure 5a, caused by a scaling difference in the local scenes.
As seen in Figures 5a and 5b, the building extents reveal very heterogeneous appearances, partly regular in the northern part and variable for the building circle surrounding the square in the center. The shape of the hill in the scene center is visible in the DSM shown in Figure 5c with height increasing with brightness. Again, the building extents are easier to interpret for the bigger signal incidence angle (compare Figures 5a and 5b) where the layover area is more compressed. Therefore, the geometrical representation of the scene appears to be more valuable in Figure 5a, e.g., in the context of object-related applications.
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline City & Date & Orbit direction & Incidence angle \\ \hline Istanbul & 2008.05.05 & Ascending & 41.0\({}^{\circ}\) \\ \hline Istanbul & 2008.05.11 & Ascending & 25.4\({}^{\circ}\) \\ \hline Munich & 2008.06.07 & Descending & 49.9\({}^{\circ}\) \\ \hline London & 2009.01.10 & Descending & 47.8\({}^{\circ}\) \\ \hline London & 2015.10.31 & Descending & 23.7\({}^{\circ}\) \\ \hline London & 2015.11.01 & Descending & 47.7\({}^{\circ}\) \\ \hline \end{tabular}
\end{table}
Table 1: Details of the TerraSAR-X data used. Figure 4: Residential buildings around Southwark in London. SAR image simulated using the preprocessed DSM (a, b); images (c) and (d): related TerraSAR-X images acquired in January 2010 and October 2015.
Figure 5: North-western parts of the district Fatih in Istanbul; (a) and (b): simulated SAR images; (c): unprocessed DSM; (d) and (e): corresponding TerraSAR-X images acquired on 5 May 2008 and 11 May 2008; (f): optical image, acquired with WorldView-2 on 15 July 2015.
Figure 3: Simulation for the Viktualienmarkt in Munich, using the DSM generated from LiDAR data (a) and the preprocessed DSM generated from WorldView-2 data (b). Image (c) shows the corresponding TerraSAR-X image acquired on June 7, 2008.
### Identification of Building Pixels
The site of Frauenkirche in Munich was chosen to display the building extent difference between the preprocessed DSM and the building DSM, seen in Figure 6.
Smaller red dots are spread around the scene, most likely elevated vegetation that was not correctly separated during the preprocessing. A few pixels in the building DSM have not been successfully detected as objects above ground, visible as bright grey spots breaking through the building DSM shown in red. This indicates that objects were not completely removed from the DTM, since the height difference between the DSM and the DTM was close to zero. However, most building extents have been traced completely.
### Impact of DSM Quality
A test site in the south of Barbican in London was chosen for the evaluation of the quality of the DSMs, which is based on three DSMs generated with a different number of WorldView-2 images. One DSM was generated with two images, one with three and one with five. Figure 7 presents the SAR simulation results and the corresponding input DSMs.
The case study confirms that the simulation benefits from using more images for the generation of the DSM: the resulting SAR image is less noisy and reveals fewer gaps and errors for facade parts. More interestingly, however, the results indicate that DSMs based on two images already lead to acceptable results, i.e., building extents are described for most buildings. This is promising in the context of realistic scenarios where the availability of satellite data sets is limited.
## 5 Conclusion and Outlook
It has been shown in this paper that DSMs derived from WorldView-2 data are of sufficient geometric quality for automated SAR simulation in urban areas. For preparing the necessary input models, a dedicated preprocessing chain has been developed which contains filtering, decomposition, and tiling steps. The presented case study results for Munich, London, and Istanbul indicate the capabilities and limits of the simulation method, the latter being primarily related to the input models and the availability of multispectral information for filtering.
Differences in the appearance of buildings have been compared for DSMs generated from LiDAR data (airborne sensor) and DSMs generated from WorldView-2 data. The simulated images reveal a better description of small buildings for the LiDAR-based DSM whereas the appearance of buildings of larger scale is comparable. Hence, object-based SAR applications related to buildings or building-blocks are realistic, e.g., in the context of city monitoring or change detection.
The impact of signal incidence angle differences was exemplified for urban scenes, showing better separability of building layover areas for larger incidence angles. Hence, SAR data with bigger signal incidence angles are expected to be more suitable for dense urban scenes. Finally, DSMs generated from 2 or 3 WorldView-2 images are applicable for SAR simulation, even if the difference to results using DSMs from 5 WorldView-2 images is obvious (geometric completeness, noise level).
With the implementation of an automatic preprocessing chain for DSMs generated from optical images, the usage of GeoRaySAR has broadened. Hence, it is no longer limited to data that contain buildings only, such as 3D GIS models and LiDAR DSMs delivered without elevated vegetation. Data acquired from high-resolution multi-spectral satellite sensors can be integrated into the developed automatic chain, but would require small adjustments (changes to, e.g., the band combination for the NDVI, the metadata import, and the naming convention). Hence, future studies with GeoRaySAR may extend simulations to further sensor data and expand the study areas to other complex sites.
Figure 6: Frauenkirche in Munich. (a) displays the simulated SAR image using the preprocessed DSM; (b) shows the simulated building extent based on the building DSM as input, overlaid on the simulated image.
Figure 7: Impact of DSM quality on simulated SAR images; scene: Barbican, London. The DSMs have been generated from different sets of optical images; (a, c, e) simulated SAR images; (b, d, f) DSMs derived from 2, 3, and 5 WorldView-2 images; acquisition date of TerraSAR-X image: November 1, 2015.
## References
* [PERSON] et al. (2011) [PERSON], [PERSON], [PERSON] and [PERSON], [PERSON], 2011. Iterative approach for efficient digital terrain model production from CARTOSAT-1 stereo images. _Journal of Applied Remote Sensing_ 5, pp. 1-19.
* identification based on CityGML data. _ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences_ pp. 9-16.
* [PERSON] et al. (2013) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON] [PERSON], [PERSON] and [PERSON], [PERSON], 2013. Operational generation of high resolution digital surface models from commercial tri-stereo satellite data. _Photogrammetric Week 2013, Stuttgart, Germany_ pp. 261-269.
* [PERSON] et al. (2008) [PERSON], [PERSON], [PERSON] and [PERSON], 2008. Model-based interpretation of high-resolution SAR images of buildings. _IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing_ 1(2), pp. 107-109.
* [PERSON] and [PERSON] (2011) [PERSON] and [PERSON], 2011. SAR-simulation of large urban scenes using an extended ray tracing approach. In: _2011 Joint Urban Remote Sensing Event_, pp. 289-292.
* [PERSON] (2008) [PERSON], 2008. Stereo processing by semiglobal matching and mutual information. _IEEE Transactions on Pattern Analysis and Machine Intelligence_ 30(2), pp. 328-341.
* [PERSON] et al. (2005) [PERSON], [PERSON] and [PERSON], 2005. DSM generation from high resolution satellite imagery using additional information contained in existing DSM. _ISPRS Workshop 2005 Hannover, Germany_.
* Journal of Cartography and Geographic Information_ 64(2), pp. 74-80.
* [PERSON] et al. (2012) [PERSON], [PERSON], [PERSON] and [PERSON], 2012. Fusing stereo and multispectral data from WorldView-2 for urban modeling. In: _Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XVIII 2012, Baltimore, USA_, pp. 1-15.
* [PERSON] et al. (2014) [PERSON], [PERSON], [PERSON], [PERSON] and [PERSON], 2014. Automatic SAR simulation technique for object identification in complex urban scenarios. _IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing_ 7(3), pp. 994-1003.
* [PERSON] and [PERSON] (2006) [PERSON] and [PERSON], 2006. Multi-image matching for DSM generation from IKONOS imagery. _ISPRS Journal of Photogrammetry and Remote Sensing_ 60(3), pp. 195-211.
---

isprs | EXPLOITATION OF DIGITAL SURFACE MODELS GENERATED FROM WORLDVIEW-2 DATA FOR SAR SIMULATION TECHNIQUES | R. Ilehag, S. Auer, P. d’Angelo | https://doi.org/10.5194/isprs-archives-xlii-1-w1-55-2017 | 2017 | CC-BY

---

isprs/a4ce02d7_f211_459b_874d_629f8d1396a8.md
# Landslide Features Interpreted by Neural Network Method Using a High-Resolution Satellite Image and Digital Topographic Data

[PERSON] and [PERSON]

a Corresponding author. Assistant prof., Dept. of Civil Eng., MUST, No. 1, Hsin-Hsing Rd., Hsin-Chu 304, TAIWAN. [EMAIL_ADDRESS]
b Dept. of Civil Eng., NCTU & ERL, ITRI, Hsin-Chu 311, TAIWAN. [EMAIL_ADDRESS]
###### Abstract
Landslides are natural phenomena of the dynamic balance of the earth surface. Due to the frequent occurrence of typhoons and earthquake activity in Taiwan, mass movements are common threats to our lives. Moreover, it is a common practice for the agencies of water reservoirs in Taiwan to make a reconnaissance of the landslides of the watershed every 5 to 10 years for the purpose of conservation. The application of aerial photo-interpretation techniques for this purpose has been recognized as an effective approach since the 1970s. However, an efficient and automatic interpretation scheme has never been established. Therefore, two issues are to be resolved for creating a useful and timely landslide database, i.e. the consistency of the sub-datasets and the completeness of the coverage. When manual interpretation and automatic recognition are compared, the former is a practical and operational method, but its result is largely dependent on the professional background of the interpretation operator.
In this paper, the interpretation knowledge is quantified into recognition criteria. Multi-source data, e.g. a Quickbird satellite image, a DTM reduced from LIDAR data, and road and river vector data, are fused to construct the feature space for landslide analysis. Then, those features are used to recognize landslides with a multilayer perceptron (MLP) neural network method. The extraction result is evaluated in comparison with the manual-interpretation result. The experiments indicate that the conducted method can assist landslide investigation efficiently and automatically. Moreover, the ANN method is better than some statistical classification methods, e.g. the Maximum Likelihood method, due to its adaptability to multi-source data and the absence of predefined assumptions.
## 1 Introduction
Landslides are natural phenomena of the dynamic balance of the earth surface. The potential or intrinsic factors of landslides include geological and morphological factors, and the external or triggering factors include earthquake, climate, hydrology, and human activities. In Taiwan, the geology is highly fractured and the landforms are in high relief. In addition, frequent earthquakes and heavy rainfalls together impose further stress on the earth and break the balance of nature. Thus, mass movements such as landslides, slumping, and mudflows take place.
### Motivations
Moreover, it is a common practice for the agencies of water reservoirs in Taiwan to make a reconnaissance of the landslides of the watershed every 5 to 10 years for the purpose of conservation. It is found that the application of aerial photo-interpretation technique for this purpose has been recognized as an effective approach since 1970s. However, an efficient and automatic interpretation scheme has never been established. Therefore, two issues are to be resolved for creating a useful and timely landslide database, i.e. the consistency of the datasets and the completeness of the coverage. As the manual interpretation and automatic recognition are compared, the former is a practical and operational method, but the result it derived is largely dependent on the professional background of interpretation operator.
It usually takes a long time to make a large-scale and real-time mapping of landslides after a torrential rainfall. The first general mapping of landslides in Taiwan was conducted by the Soil and Water Conservation Bureau in 1982-1989, producing a landslide map of Taiwan at a scale of 1/50,000-1/100,000 (COA, 1991). In the 8 years of that survey, there were more than 10 torrential rainfalls and 100 earthquakes, and a new balance of nature was established time and time again. In reality, mapping all the landslides at one time is not feasible, and the difficulty of obtaining a survey with complete coverage of the whole of Taiwan is understandable. It has been a common practice to interpret aerial photographs by visual inspection of an expert geologist, which is a time-consuming task. Therefore, the purpose of this study is to implement the human rules and quantify the criteria to install an automatic system with a back-propagation neural network method.
### Overview and References to related works
Landslides cause approximately 1000 deaths a year worldwide with a property damage of about US$4 billion, and pose serious threats to settlements and structures that support transportation, natural resource management and tourism. In many cases, over-expanded development and activities, such as slope cutting and deforestation, can sometimes increase the incidence of landslide disasters. Recent development in large metropolitan areas intrudes upon unstable terrain. This has thrown many urban communities into disarray, providing grim examples of the extreme disruption caused by ground failures ([PERSON] & [PERSON], 2000).
Aerial photography has been used extensively to characterize landslides and to produce landslide inventory maps, particularly because of their stereo viewing capability and high spatial resolution ([PERSON], 1985, [PERSON], 1987). However, the conventional photo-interpretation is a time-consuming and costly approach ([PERSON] et al., 2001).
Satellite imagery can also be used to collect data on the relevant parameters involved such as soils, geology, slope, geomorphology, land use, hydrology, rainfall, faults, etc. Multispectral images are used for the classification of lithology, vegetation, and land use, Stereo SPOT imagery is used in geomorphological mapping or terrain classification ([PERSON], 1987; [PERSON] et al., 1989; [PERSON], 1997; [PERSON], 1999; [PERSON], 2002).
For landslide inventory mapping, the size of the landslide features in relation to the ground resolution of the remote sensing data is very important. A typical landslide of 40000 m\({}^{2}\), for example, corresponds to 20×20 pixels on a SPOT Pan image and 10×10 pixels on SPOT multi-spectral images. This would be sufficient to identify a landslide that has a high contrast with respect to its surroundings, e.g. bare scarps within vegetated terrain, but it is insufficient for a proper analysis of the elements pertaining to the failure to establish the characteristics and type of landslide. Imagery with sufficient spatial resolution and stereo capability such as SPOT or IRS can be used to make a general inventory of past landslides. However, they are mostly not sufficiently detailed to map out all landslides ([PERSON] et al., 2003). It is expected that in the future Very High Resolution (VHR) imagery, such as from IKONOS-2, might be used successfully for landslide inventory ([PERSON], 2000). By using the criteria for visual interpretation, artificial intelligence or expert systems and automatic procedures can be developed to improve the efficiency and accuracy of landslide mapping ([PERSON] et al., 2000, [PERSON] et al., 2001).
Artificial Neural Networks (ANNs) have been used successfully in many applications such as pattern recognition, function approximation, optimization, forecasting, data retrieval, and automatic control ([PERSON], 1990, [PERSON], 1992). ANNs have been found to be powerful and versatile computational tools for organizing and correlating information in ways that have proved useful for solving certain types of problems too complex, too poorly understood, or too resource-intensive to tackle using more traditional computational methods.
## 2 Methodology
### Traditional landslide interpretation methods
Individual landslides are generally small and located at certain locations on a slope. Landslides occur in a large variety, depending on the type of movement (slide, topple, flow, fall, spread), the speed of movement (mm/year to m/sec), the material involved (rock, debris, soil), and the triggering mechanism (earthquake, rainfall, human interaction). Survey methods usually include ground survey, aerial or space-borne survey, or a combination.
Ground surveys can be highly accurate, but slow. When hazards take place, accessibility is low. Therefore, it is impossible to conduct the survey in near real-time or with complete coverage after a torrential rainfall.
Photographic or image interpretation can be adopted and implemented manually, automatically, or semi-automatically. Manual interpretation requires a well-trained geologist to delineate the landslides in a stereoscopic environment. The advantage of this approach is that individual landslides can be defined very clearly; however, subjective judgement is the disadvantage. Automatic classification of landslides is based on certain criteria and computing algorithms, and its advantage is the objectiveness of the approach. In a real case, limitations are due to the spatial and spectral resolutions of the images. More than 50% of the rainfall-induced landslides in Taiwan are less than 50 m in length. Landslides of this scale are not readily identifiable using images with a pixel size larger than 10 m. With pixel-wise classification, landslides can occupy only individual or just a few pixels without forming the outer shape of a landslide. Moreover, commission and omission errors can further complicate the situation.
### Interpretation Signatures
Key rules for this study are summarized from literatures, case studies, and expert experiences, as shown in Table 1.
\begin{tabular}{|l|l|} \hline Key Rule & Contents \\ \hline Colour Tone Criterion & Brown, deep brown, bright brown, green brown \\ \hline Location Criterion & In the vicinity of ridge lines, road sides, and the cut-off side of a river channel \\ \hline Shape Criterion & Lenticular- or spoon-shaped, or cumulated as tree-shaped in river basins, or triangular or rectangular if located near river banks \\ \hline Direction Criterion & The longitudinal axis is in the direction of gravity or perpendicular to flow-lines \\ \hline Shadow Criterion & Shadows are applied to assist the interpreter to perceive river bottoms and ridges in 2D images \\ \hline \end{tabular}

Table 1. Rules of interpretation for landslides.
The rules of interpretation for landslides in Table 1 are to be implemented as computing algorithms for automatic identification. For example, the colour tone of a new landslide is usually an expression of bare land with a unique spectral signature. The NDVI (Normalized Difference Vegetation Index) is one of several vegetation indices useful for this purpose. The equation of the NDVI is as follows:
\[NDVI=\frac{NIR-R}{NIR+R} \tag{1}\]
This index is derived from the reflectance of the red band and the NIR band and is also an indicator of biomass. The value of the NDVI is in the range of -1 to +1; a negative value designates bare land.
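As a concrete illustration, equation (1) can be evaluated per pixel with NumPy; this is a sketch, and the small epsilon guarding against division by zero is our addition:

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """NDVI = (NIR - R) / (NIR + R), per pixel, in the range [-1, 1]."""
    nir = np.asarray(nir, dtype=np.float64)
    red = np.asarray(red, dtype=np.float64)
    return (nir - red) / (nir + red + eps)

# Negative values flag bare-land candidates for the colour criterion.
bare = ndvi([0.10, 0.60], [0.20, 0.10]) < 0.0
```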
The location criterion of a landslide can be realized by using DTM (Digital Terrain Model) for generating a ridgeline and by digitising roads from the 1:5000 orthophoto maps, which are the most common maps in Taiwan. Subsequently, a vicinity analysis can be implemented.
The direction criterion is implemented by intersection operation of the ridgelines and buffer zones generated by riverlines.
The shape criterion and shadow criterion are not implemented in this study; however, a slope criterion is added. Statistics show that landslides take place with the highest probability on slopes of 15\({}^{\circ}\)-30\({}^{\circ}\), followed by slopes of 30\({}^{\circ}\)-45\({}^{\circ}\) ([PERSON] et al., 2003).
A synergy of satellite images, DTM, existing roads, and drainage lines is better implemented in a neural network system as adopted in this study. A scoring scheme is used to transform the above-mentioned criteria into the neurons of input layer of the artificial neural network as shown in Table 2.
\begin{tabular}{|l|l|l|l|l|l|} \hline Colour Criterion & \multicolumn{2}{l|}{Direction} & \multicolumn{2}{l|}{Location Criterion} \\ & & \multicolumn{2}{l|}{Criterion} & \multicolumn{2}{l|}{(Ridge line)} \\ \hline NDVI & Score & Buffer & Score & Buffer & Score \\ Value & & size & size & size & size \\ \hline \(<\) 0.0 & 1.0 & \(<\) 50 m & 1.0 & \(<\) 50 m & 1.0 \\ \hline
0.0–0.25 & 0.8 & 50–100 & 0.8 & 50–100 & 0.8 \\ \hline
0.25\({}^{-}\)-0.5 & 0.6 & 100–150 & 0.6 & 100–150 & 0.6 \\ \hline
0.5–0.75 & 0.4 & 150–200 & 0.4 & 150–200 & 0.4 \\ \hline \end{tabular}
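The scoring scheme of Table 2 maps each measurement to a neuron activation in [0, 1]. A sketch follows; the function names are hypothetical, and the behaviour beyond the last listed bin (returning 0.0) is our assumption, since the table does not state it:

```python
# Upper bin edges and scores taken from Table 2.
NDVI_BINS, NDVI_SCORES = [0.0, 0.25, 0.5, 0.75], [1.0, 0.8, 0.6, 0.4]
DIST_BINS, DIST_SCORES = [50, 100, 150, 200], [1.0, 0.8, 0.6, 0.4]

def score_from_bins(value, bins, scores):
    """Map a measurement to a score via right-open upper bin edges."""
    for upper, s in zip(bins, scores):
        if value < upper:
            return s
    return 0.0  # beyond the last listed bin (assumption)

def input_neurons(ndvi_value, river_dist_m, ridge_dist_m):
    """Scores for the colour, direction, and location criteria."""
    return (score_from_bins(ndvi_value, NDVI_BINS, NDVI_SCORES),
            score_from_bins(river_dist_m, DIST_BINS, DIST_SCORES),
            score_from_bins(ridge_dist_m, DIST_BINS, DIST_SCORES))
```

The fourth input variable, the slope criterion, would be scored analogously.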
\begin{tabular}{|l|l|l|l|l|l|} \hline Key Rule & \multicolumn{2}{l|}{Contents} \\ \hline Colour Tone & Brown, deep brown, bright brown, green \\ Criterion & brown & & & \\ \hline Location & In the vicinity of ridge lines, road sides, and \\ Criterion & the cut-off side of a river channel & & & \\ \hline Shape & Lenticular-shaped or spoon-shaped, or cumulated as tree-shaped in river basins, or a triangular or rectangular-shape if located near river banks & & \\ \hline Direction & The longitudinal axis is in the direction of & & \\ Criterion & gravity or perpendicular to flow-lines & & \\ \hline Shadow & Shadows are applied to assist the interpreter to & & \\ Criterion & percept river bottoms and ridges in 2D images & & \\ \hline \end{tabular}
Table 1 rules of interpretation for landslides
### An Artificial Neural Network (ANN) Classifier
An Artificial Neural Network (ANN) is a simulation of the functioning of the human nervous system that produces the required response to input ([PERSON], 1990). An ANN is able to provide some of the human characteristics of problem-solving ability that are difficult to simulate using logical, analytical techniques. One of the advantages of using an ANN is that it does not need a predefined knowledge base. An ANN can learn associative patterns and approximate the functional relationship between a set of inputs and outputs. A well-trained ANN, for example, may be able to discern, with a high degree of consistency, patterns that human experts would miss. In a neural network, the fundamental variables are the set of connection weights. A network is highly interconnected and consists of many neurons that perform parallel computations. Each neuron is linked to other neurons with varying coefficients of connectivity that represent the weights (sometimes referred to as strengths in other literature) of these connections. Learning by the network is accomplished by adjusting these weights to produce appropriate output for training examples fed to the network ([PERSON], 1992).
The multilayer perceptron (MLP) is one of the most widely implemented neural network topologies. The article by [PERSON] is probably one of the best references for the computational capabilities of MLPs. Generally speaking, for static pattern classification, the MLP with two hidden layers is a universal pattern classifier. In other words, the discriminant functions can take any shape, as required by the input data clusters. Moreover, when the weights are properly normalized and the output classes are normalized to \(0/1\), the MLP achieves the performance of the maximum a posteriori receiver, which is optimal from a classification point of view. In terms of mapping abilities, the MLP is believed to be capable of approximating arbitrary functions. This has been important in the study of nonlinear dynamics and other function mapping problems. MLPs are trained with error-correction learning, which means that the desired response for the system must be known; this is well known as the backpropagation algorithm ([PERSON], 1992). The objective of learning is to minimize the error (RMS in this case) between the predicted output and the known output.
An MLP-type neural network model was built in this work using the NeuroSolutions 4.24 software (NeuroDimension, 2004) developed by NeuroDimension, Inc. The network architecture consists of (a) one input layer containing 4 input variables, (b) one hidden layer of 5 nodes, (c) one output layer containing 1 output variable, and (d) connection weights that connect all layers together.
Two parameters are important during training: the learning rate coefficient (Eta) and the momentum factor (Alpha). In general, Eta's valid range is between 0.0 and 1.0. Although a higher Eta provides faster learning, it can also lead to instability and divergence. A small Eta offers improved numerical convergence, but training time is greatly increased. When a new ANN training run is initiated, the user must provide a starting Eta value. It is advisable to start with a small, conservative number: a value in the range of 0.001 to 0.1 normally yields a smooth training process without risk of divergence.
The Alpha damps high-frequency weight changes and helps overall algorithm stability while promoting faster learning. For most networks, Alpha lies in the range of 0.8 to 0.9; higher momentum values in this range are most commonly used since the damping effect usually improves training behaviour. However, there is no definitive rule regarding Alpha, and if training problems occur with a given value, different values can be tried. In NeuroSolutions, the user can define this parameter. After several tests, the Alpha value was set to 0.7 in this study.
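Per weight, the Eta/Alpha rule is \(\Delta w(t) = -\eta\,\partial E/\partial w + \alpha\,\Delta w(t-1)\). A minimal sketch on a one-dimensional quadratic error surface (the Eta value here is an assumed demo value; Alpha matches the study's 0.7):

```python
# Weight update with momentum: dw(t) = -eta * grad + alpha * dw(t-1).
# Alpha = 0.7 as in the study; eta = 0.05 is an assumed demo value.
def momentum_step(w, grad, prev_dw, eta=0.05, alpha=0.7):
    dw = -eta * grad + alpha * prev_dw
    return w + dw, dw

w, dw = 1.0, 0.0
for _ in range(100):          # minimize E(w) = w^2, whose gradient is 2w
    w, dw = momentum_step(w, 2.0 * w, dw)
print(abs(w))                 # w has converged close to the minimum at 0
```

The momentum term reuses the previous step, so successive updates in the same direction accelerate while oscillating updates are damped.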
The transfer function of a processing element (PE) controls the strength of its output signal; its input is the dot product of the PE's input signals and weight vector. The four commonly used transfer functions are the Sigmoid, Gaussian, Hyperbolic Tangent and Hyperbolic Secant. In general, the Sigmoid function \(1/(1+e^{-x})\) produces the most accurate model, but learning is slower than with the other functions. The Sigmoid function acts as an output gate that can be either fully open at 1 or closed at 0; since the function is continuous, the gate can also be partially open (any value between 0 and 1). The Hyperbolic Tangent is selected as the transfer function in this study.
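For reference, the four transfer functions named above can be written as follows (a sketch; NeuroSolutions' exact parameterizations may differ):

```python
import numpy as np

def sigmoid(x):   return 1.0 / (1.0 + np.exp(-x))  # output gate in (0, 1)
def gaussian(x):  return np.exp(-x ** 2)           # bell-shaped response
def tanh(x):      return np.tanh(x)                # used in this study
def sech(x):      return 1.0 / np.cosh(x)          # hyperbolic secant

for f in (sigmoid, gaussian, tanh, sech):
    print(f.__name__, f(0.0))   # values at the origin: 0.5, 1.0, 0.0, 1.0
```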
Cross validation is a highly recommended method for stopping network training in NeuroSolutions. This method monitors the error on an independent set of data and stops training when this error begins to increase, which is considered the point of best generalization. The testing set is used to assess the performance of the network: once the network is trained, the weights are frozen, the testing set is fed into the network, and the network output is compared with the desired output. Twenty percent of the data is used for cross validation and testing in this work.
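The cross-validation stopping rule amounts to tracking the validation error per epoch and halting once it stops improving. A minimal sketch with a hypothetical recorded error history (the `patience` parameter is an assumption for the demo, not a NeuroSolutions setting):

```python
# Stop training at the epoch of lowest validation error -- the point of
# best generalization. val_errors is a hypothetical per-epoch history.
def best_stopping_epoch(val_errors, patience=3):
    best, best_epoch, waited = float("inf"), 0, 0
    for epoch, err in enumerate(val_errors):
        if err < best:
            best, best_epoch, waited = err, epoch, 0
        else:
            waited += 1
            if waited >= patience:   # error kept rising: stop
                break
    return best_epoch

val = [0.50, 0.40, 0.32, 0.30, 0.31, 0.33, 0.36, 0.40]
print(best_stopping_epoch(val))   # the minimum is reached at epoch 3
```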
## 3 Case Study and Discussions
### Test datasets and Pre-processing
Jiu-fen-erh Mountain is selected as the test area; it is a typical landslide area, especially after the big shock of the Chi-Chi earthquake that struck Nantou County in central Taiwan on 1999/09/21. Datasets collected for this study include Quickbird images, digital vector maps of river lines and roads obtained from 1:5000 photomaps, a DTM, and airborne LIDAR data ([PERSON], 2002).
The Quickbird images are registered to the vector datasets using the image-to-map function of ENVI 3.5, applying an affine transformation, as shown in Figure 1, where the false colour image is a composite of the NIR, G, and B bands. Roads are shown in yellow and rivers in blue.
NDVI, used for the colour tone criterion, is computed using the TRANSFORM-NDVI function of ENVI 3.5, as shown in Figure 2.
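The NDVI transform itself is the simple band ratio \((\mathrm{NIR}-\mathrm{Red})/(\mathrm{NIR}+\mathrm{Red})\); a sketch with toy reflectance values, equivalent in spirit to ENVI's TRANSFORM-NDVI (whose internals are not reproduced here):

```python
import numpy as np

# NDVI = (NIR - Red) / (NIR + Red), computed per pixel; eps avoids
# division by zero on dark pixels. Band values below are toy data.
def ndvi(nir, red, eps=1e-12):
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red + eps)

nir = np.array([[0.6, 0.5], [0.2, 0.4]])
red = np.array([[0.1, 0.1], [0.2, 0.1]])
print(ndvi(nir, red))   # high values flag vegetation, ~0 flags bare soil
```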
The Digital Elevation Model (DEM) of the study area is extracted from the airborne LIDAR data by a Fortran program developed by the authors, so as to match the satellite image. A regular DEM is then generated by inverse distance interpolation and used for generating ridge lines and slope gradients.
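Inverse distance interpolation of the scattered LIDAR points onto a DEM grid node can be sketched as follows (the power of 2 is an assumption; the authors' Fortran program is not reproduced):

```python
import numpy as np

# Inverse-distance-weighted value at a query point from scattered
# elevation samples. Power p = 2 is assumed for the demo.
def idw(xy_pts, z_pts, xy_query, power=2.0, eps=1e-12):
    d = np.linalg.norm(xy_pts - xy_query, axis=1)
    if np.any(d < eps):                  # query coincides with a sample
        return float(z_pts[np.argmin(d)])
    w = 1.0 / d ** power
    return float(np.sum(w * z_pts) / np.sum(w))

pts = np.array([[0., 0.], [10., 0.], [0., 10.], [10., 10.]])
z = np.array([100., 110., 120., 130.])
print(idw(pts, z, np.array([5., 5.])))   # equidistant corners -> 115.0
```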
\begin{table}
\begin{tabular}{|l|l|l|l|l|l|} \hline \multicolumn{2}{|c|}{Location (Roads)} & \multicolumn{2}{c|}{Slope (1)} & \multicolumn{2}{c|}{Slope (2)} \\ \hline Buffer size & Score & SLOPE value & Score & SLOPE value & Score \\ \hline \(<\) 50 m & 1.0 & \(<5^{\circ}\) & 0.0 & \(60^{\circ}\sim 75^{\circ}\) & 0.0 \\ \hline
50\(\sim\)100 & 0.8 & \(5^{\circ}\sim 15^{\circ}\) & 0.09 & \(>75^{\circ}\) & 0.0 \\ \hline
100\(\sim\)150 & 0.6 & \(15^{\circ}\sim 30^{\circ}\) & 0.52 & & \\ \hline
150\(\sim\)200 & 0.4 & \(30^{\circ}\sim 45^{\circ}\) & 0.35 & & \\ \hline
200\(\sim\)250 & 0.2 & \(45^{\circ}\sim 60^{\circ}\) & 0.04 & & \\ \hline \end{tabular}
\end{table}
Table 2: Interpretation criteria

The criteria of location and direction are fulfilled by combining these results.
### Results
The test area is 1229 × 1209 pixels, with a pixel size of 2.44 m. A correlation analysis was carried out for the four factors, namely NDVI, slope, direction, and location (Table 3). As indicated in Table 3, the correlation coefficients are very low in general, except between V3 and V4 with a coefficient of 0.19. The highest correlation with the target V5 is colour tone V1, with a value of 0.25, followed by the slope V2. Principal component analysis was also applied to extract 4 components and thus reduce the correlation between factors, but the relations between the components and the target were reduced accordingly; the components were therefore not adopted as input to the neural network. Subsequently, the information obtained by visual interpretation, as shown in Figure 4, was used to extract inputs for the neural network by sampling 5%, 10%, 15%, 20%, and 25% of the data. Under a 4-6-1 neural network structure, the various subsets of random samples were trained for 1000 cycles. The learning errors for ANN training are shown in Table 4. The MSE (Mean Square Error) remains higher than the threshold of 0.1 required for the ANN, and the correlation coefficient of 0.64 indicates that the input datasets are not highly correlated with the targets. As many as 1000 training cycles were applied while observing the learning error curve, to see whether the MSE could be reduced to 0.1; as shown in Figure 5, the network becomes stable after 100 training cycles. Classification was then conducted using the trained network, as shown in Table 5. The classification success rate is 85% for landslide and 73% for non-landslide, and the omission and commission errors are 0.27 and 0.15, respectively. The accuracy could be affected by the following factors:
When pixels with NDVI larger than 0.25 are filtered out according to the manually interpreted landslides, the correlation between colour tone and the target rises to 0.47. Under this condition, the MSE becomes acceptable, with a value smaller than 0.1 in a new ANN training cycle, and the correlation between the factor and the target becomes 0.75. However, the accuracy for non-landslide is not improved.
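For reference, the omission and commission errors quoted above are the standard complements of the producer's and user's accuracy of the landslide class. A sketch with toy counts (the paper does not report its raw confusion matrix, so the numbers below are assumed for illustration only):

```python
# Omission/commission errors from a 2x2 confusion matrix
# (rows = reference classes, columns = classified classes).
def errors(cm):
    (tp, fn), (fp, tn) = cm          # landslide = positive class
    omission = fn / (tp + fn)        # true landslides that were missed
    commission = fp / (tp + fp)      # pixels wrongly labelled landslide
    return omission, commission

# Toy counts, assumed for illustration only.
om, co = errors([[85, 15], [15, 85]])
print(om, co)
```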
## 4 Conclusions
Some of the criteria for manual interpretation, such as the shape and shadow criteria, have not been implemented in this study due to inadequate information. This may be the reason why the final success rate of landslide identification is only 85%. Further research is required to improve both the spatial analysis algorithms and the data sources. Nevertheless, several findings can be drawn from this study.
1. It is feasible to synergize the information in high-resolution images, digital terrain models, and existing road and drainage systems, and to automate it for landslide identification.
2. The correlation analysis of the four criteria for manual interpretation shows that only the direction and location criteria are correlated with each other, and only the colour tone criterion is appreciably correlated with the target.
3. Under the 4-6-1 ANN network structure, the MSE is 0.43 after the training cycles, which does not meet the threshold of 0.1. Furthermore, a correlation coefficient of 0.64 indicates that the neurons and the targets are not highly correlated. This could be due to mismatches in the dates of the various data sources.
4. The classification results show a success rate of 85% for landslide and 75% for non-landslide. The omission and commission errors are 0.27 and 0.15, respectively.
5. As shown in this study, GIS functions such as buffering, spatial intersection, overlay, and terrain analysis are employed. A system for landslide interpretation would require capabilities from both a GIS and an image analysis system.
\begin{table}
\begin{tabular}{|l|c|c|c|c|c|} \hline \multicolumn{6}{|c|}{Correlation between Vectors of Values} \\ \hline & V1 & V2 & V3 & V4 & V5 \\ \hline V1 & 1.000 & .013 & .012 & .009 & .230 \\ V2 & .013 & 1.000 & -.044 & .011 & .087 \\ V3 & .012 & -.044 & 1.000 & .161 & -.010 \\ V4 & .009 & .011 & .161 & 1.000 & .002 \\ V5 & .230 & .087 & -.010 & .002 & 1.000 \\ \hline \end{tabular}
\end{table}
Table 3: Correlation between the four signatures and the target.

\begin{table}
\begin{tabular}{|l|c|c|c|} \hline Training subset & MSE & ERROR (\%) & r \\ \hline (a) 5\% samples & 0.46 & 13.8 & 0.62 \\ \hline (b) 10\% samples & 0.44 & 13.0 & 0.64 \\ \hline (c) 15\% samples & 0.39 & 11.6 & 0.68 \\ \hline (d) 20\% samples & 0.41 & 12.3 & 0.67 \\ \hline (e) 25\% samples & 0.38 & 11.2 & 0.70 \\ \hline (f) all samples & 0.51 & 14.8 & 0.55 \\ \hline \end{tabular}
\end{table}
Table 4: Learning errors for ANN training
## References
* [ENVI2001] ENVI, 2001. _ENVI 3.5 User Guide_, Research Systems, Inc.
* [[PERSON] et al.2003] [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], and [PERSON], 2003. _Landslide Identification Using SPOT Imageries and Aerial Photographs_, Journal of Photogrammetry and Remote Sensing, 8(4), pp. 29-42. (in Chinese)
* [[PERSON] et al.2002] [PERSON], [PERSON], [PERSON], [PERSON], 2002. _Multi-annual SPOT images for monitoring large-scaled landslide_. Proceedings of symposium of annual meeting of Chinese Association of Geographic Information System. Taichung.
* [[PERSON] et al.2000] [PERSON], [PERSON], and [PERSON], 2000. _Strategy on the Landslide Type Analysis Based on The Expert Knowledge and the Quantitative Prediction Model_, International Archives of Photogrammetry and Remote Sensing, Vol. XXXIII, Part B7, pp. 701-708, Amsterdam.
* [Liang1997] [PERSON], 1997. _Satellite images as applied to the investigation of landslides and hot springs_. MSc. Thesis, Institute for Applied Geology, National Central University.
* cases from Taiwan_. Proceedings of Advanced Technology for Monitoring and Processing Global Environmental Data. Convened at the University of London by the Remote Sensing Society, UK and CERMA, USA, 10-12 September 1985. pp. 223-232.
* [[PERSON]] [PERSON] [PERSON], 1987. Automation for landslide analysis using digital images, Remote Sensing, Vol. 8, pp. 60-90. (in Chinese)
* [Liu1999] [PERSON], 1999. _A practical approach to the installation of a national landslide database on basis of a SPOT mosaic_. Proceedings of the 18\({}^{\rm{th}}\) symposium on survey and mapping. Ilan, p.561-570.
* [[PERSON] et al.2001] [PERSON], [PERSON], [PERSON], [PERSON], 2001. _Images analysis for landslides induced by torrential rainfall_. Proceedings of symposium on civil engineering technology and management for 21 century. Hsin-chu. P.C-21-C-31.
* [NeuroDimension2004] NeuroDimension, Inc., 2004. _NeuroSolutions 4.24 Getting Started Manual_, Gainesville, USA.
* [[PERSON]] [PERSON], 1990. _Neurocomputing_, Addison-Wesley Pub. Co., pp. 21-42.
* [[PERSON]] [PERSON], 2002. _A study of the deformation of earthquake hazard area employing airborne laser scanner (2/2)_. Research Report of Agriculture Council, 2002.
* [[PERSON] and Mattar2000] [PERSON] and [PERSON], 2000. _SAR Image Techniques for Mapping Areas of Landslide_, International Archives of Photogrammetry and Remote Sensing, Vol. XXXIII, Part B7, pp. 1395-1398, Amsterdam.
* [[PERSON]] [PERSON],1978. _Slope Movement Types and Processes_, In Landslides Control, eds. [PERSON] and [PERSON], National Academy of Sciences, Washington, D.C. pp.11-33.
* [[PERSON]] [PERSON], 2000. _Remote Sensing for Natural Disaster Management_, International Archives of Photogrammetry and Remote Sensing, Vol. XXXIII, Part B7, pp. 1609-1617, Amsterdam.
* [[PERSON] et al.1989] [PERSON], [PERSON], [PERSON], 1989. _Landslide investigation of the slopelands in Taiwan_. Agriculture Council, Bureau of Soil and Water Conservation.
* [[PERSON]] [PERSON], 1992. _Introduction to Artificial Neural Systems_, West Pub. Co., pp.163-248.
## Acknowledgements
The authors would like to express their sincere appreciation to the National Science Council of Taiwan for financial support. We would also like to thank Taiwan Intergraph Co. and PitoTech Co. for their technical support on GIS and ANN, and the Council of Agriculture for providing the LIDAR data for the experiments.
Figure 1: Quickbird image as registered on vector rivers and roads.
Figure 5: ANN learning curve
Figure 3: Hillsheded relief overlaid with ridge lines.
Figure 2: NDVI image of the study area.
# Applicability Assessment of UAV Mapping for Disaster Damage Investigation in Korea
[PERSON] \({}^{1}\), [PERSON] \({}^{1}\), [PERSON] \({}^{1}\)

\({}^{1}\) National Disaster Management Research Institute, 365 Jongga-ro, Jung-gu, Ulsan, 44538, Rep. of Korea - (sskim73, ddahoon, jz309)@korea.kr
###### Abstract
As natural disasters occur, the local and central governments should investigate the damaged areas promptly, quantitatively analyze the degree of damage, and establish an appropriate disaster recovery plan in accordance with the Framework Act on the Management of Disasters and Safety in Korea. The purpose of this study is to assess the applicability of UAV photogrammetry to the management of natural disasters. First, we suggest a small, easy-to-use UAV-based investigation procedure for natural-disaster-damaged areas in the disaster recovery phase in Korea. Before drone-based aerial surveying, a field survey can be performed with DGPS RTK to set up GCPs around the disaster site. In this paper, we generate three-dimensional terrain information and high-resolution ortho-imagery and then quantitatively analyse the degree of damage caused by natural disasters using commercial UAVs and drone mapping techniques. Finally, we evaluate the mapping accuracy and work efficiency of drone mapping for disaster investigation applications by comparison with the traditional investigation process, which depends on labour-intensive field surveys. Ortho-image maps with a GSD of less than 5 cm, generated from aerial photographs acquired by UAVs at altitudes of 100\(-\)250 m, enabled us to check damage information such as destroyed facilities and traces of soil erosion around the flooded river and collapsed reservoir areas. In addition, three-dimensional point cloud data of landslide-damaged areas enabled us to measure more accurately the width and depth of outflows caused by landslides, the soil runoff distance, and the landslide damage area. Photogrammetry-based drone mapping for disaster damage investigation is expected to be an alternative approach to support or replace the labour-intensive disaster site survey, which must cover the disaster site quickly and in a timely manner.
## 1 Introduction
Due to global climate change and rapid urbanization, our society has been faced with difficult and unpredictable disasters. The extent of damage from natural disasters, which are closely related to meteorological phenomena and geographic factors, can be somewhat reduced by taking preparative and preventive measures. For effective and systematic disaster management, far-reaching research and promising technical development have been carried out actively to minimize the damage from natural disasters using advanced observation platforms and sensors: satellite-based and aerial mapping platforms, high-precision mobile mapping systems (MMS), UAV-based LiDAR, etc.
Despite the dramatic growth of these technologies, socio-economic losses and casualties caused by natural disasters have increased in Korea due to rapid urbanization and global climate change. The total cost of recovery from natural disasters is estimated at about 400 million US dollars per year over the past decades. Most of the damage from natural disasters on the Korean Peninsula has been caused by floods and landslides resulting from typhoons and heavy rainfall during the rainy season. When a natural disaster occurs in Korea, the local and central governments should investigate the damaged sites promptly, quantitatively analyse the extent of damage, and establish an appropriate disaster recovery plan in accordance with the Framework Act on the Management of Disasters and Safety.
Since 2013, Korea's National Disaster Management research Institute (NDMI), responsible for implementing R&D related to national disaster and safety management, has studied for disaster scientific investigation (DSI). DSI, a highly organized framework to find the root cause of disasters, aims to implement, monitor and feedback with disaster profiling through state-of-the-art forensic technologies. As the feasible operational tools for DSI, NDMI has started to adapt and operate the various types of investigation platforms and devices: a MMS-type specialized vehicle, UAVs, ultrasonography detector, rebar detector, etc. ([PERSON] et al, 2018).
In recent years, Unmanned Aerial Vehicles (UAVs) with various on-board sensors have come to be considered a cost-effective tool for large-scale aerial mapping. The suitability of small drones for mapping applications depends on the mapping extent, geometric accuracy, endurance (flight time and control distance), and lifting capacity of the UAV. A UAV is equipped with GNSS/IMU, MEMS, gyroscopes, accelerometers, and a barometer to conduct direct sensor orientation. Precise time-tagging of the camera shutter against GNSS time enables the position and attitude data to be annotated in the metadata of the captured imagery ([PERSON] et al., 2013). As an alternative to AT (aerial triangulation), direct geo-referencing of an airborne sensor measures the position and orientation of the airborne mapping sensor so that each pixel or range can be geo-referenced in the specific map projection system without any ground information collected in the field ([PERSON] et al., 2015). Direct geo-referencing is a suitable process for creating accurate map products rapidly from UAV aerial imagery with minimal or no GCPs. This is efficient for producing up-to-date maps rapidly in areas that do not allow access, such as disaster areas requiring an emergency response.
UAV mapping enables field investigators to generate maps of damaged areas that do not allow access, such as disaster areas. The accuracy of the on-board GNSS of a UAV is, however, significantly lower than that of the GPS used in general aerial surveying, due to its specifications. Thus, further research is required to improve the accuracy of the resulting maps.
This study aims to propose a photogrammetry-based drone mapping approach for the investigation of damage caused by natural disasters, and to assess its applicability for timely natural disaster mapping and monitoring.
## 2 Study Area and Method
### Area of Study
From late June to early July 2018, there were heavy rainfalls over the southern area of the Korean Peninsula. The study sites were selected in considerably damaged areas where landslides, river floods, and reservoir collapses occurred due to the heavy rains and a typhoon at that time (Figure 1).
During this period, due to the heavy and continuous rainfall in Boseong-gun, more than 15 ha of agricultural area was flooded and landslides occurred in 17 steep mountainous areas in national and private forests. Cumulative precipitation during this period was 236 mm \(\sim\) 327.5 mm around Jeollanam-do.
### UAV System
Small rotary-wing and fixed-wing drones were adopted in this paper for natural disaster site investigation. The drones for investigating damage caused by natural disasters were selected with a focus on safety, performance, and usability.
DJI's Inspire 2 is a commercial drone with relatively stable flight performance and a battery endurance of about 25 minutes, longer than that of the Inspire 1. Its Zenmuse X5S camera is mounted on a 3-axis gimbal, which is suitable for taking aerial photographs for drone mapping.
The fixed-wing FireFly6, capable of vertical take-off and landing, is a rotor-tilting drone in flight, so it does not need much runway space for take-off and landing, which is advantageous for disaster field operation. A fixed-wing drone enables users to carry out mapping missions with relatively stable and efficient flight performance, so it has advantages for long-flight mapping over wide disaster areas.
The Sony A6000 optical camera mounted on the FireFly6 has a built-in 24.3 MP APS-C sensor that allows users to collect high-resolution aerial photographs with a 20 mm lens, at a GSD of 2.36 cm for a flight altitude of 120 m.
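The quoted GSD follows from the usual relation GSD = pixel pitch × altitude / focal length. The sketch below assumes typical Sony A6000 values (23.5 mm sensor width, 6000 px image width), which are not stated in the paper:

```python
# Ground sample distance: GSD = pixel_pitch * altitude / focal_length.
# 23.5 mm sensor width and 6000 px image width are assumed A6000 values.
def gsd_cm(sensor_width_mm, image_width_px, focal_mm, altitude_m):
    pixel_pitch_mm = sensor_width_mm / image_width_px
    return pixel_pitch_mm * (altitude_m * 1000.0) / focal_mm / 10.0

print(round(gsd_cm(23.5, 6000, 20.0, 120.0), 2))  # close to the 2.36 cm quoted
```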
Recently, it has become possible to acquire relatively precise flight information for a drone through post-processed kinematic (PPK) technology. In addition, a variety of multi-sensing data around a disaster area can be collected from multi-spectral sensors or thermal cameras depending on the purpose of the application, making such platforms suitable for disaster response ([PERSON] et al., 2019).
Drone mapping is a simplified aerial-photogrammetry-based data processing procedure using the ultra-small built-in MEMS sensors of a drone, although the accuracy of the IOPs and EOPs for the non-metric camera mounted on the drone is somewhat poor.
Through an iterative bundle adjustment process using high-capacity computing, drone mapping can nevertheless generate high-precision 3D terrain models and ortho-images with accuracy similar to that of conventional aerial photogrammetry, by optimizing the IOPs and EOPs.
In this study, the process of drone mapping is as follows: 1) flight planning, 2) GNSS-RTK surveying, 3) aerial photo capturing, 4) keypoint extraction and image matching, 5) 3D point cloud extraction, 6) DSM/DEM generation, and finally 7) ortho-image generation. Before generating the DSM and DEM, accurate 3D point clouds were generated by extracting points of interest, optimizing the camera models, and improving the accuracy of image matching. To conduct drone mapping, we used Pix4Dcapture and FireFlyPlanner for flight planning and aerial photograph acquisition, and Pix4Dmapper and Agisoft PhotoScan for processing the aerial images acquired by the drones.
## 3 Drone Mapping for Natural Disaster Investigation
### Natural disaster damage investigation in Korea
In general, disaster management consists of the following four steps: prevention, preparation, response, and recovery. When a natural disaster occurs in Korea, the disaster damage investigation led by the local government is carried out as follows: disaster damage reporting, field investigation, damage analysis, and recovery planning and implementation.
According to the Disaster Recovery Guideline under the Framework Act on the Management of Disasters and Safety, the heads of the local governments of areas damaged by a natural disaster should promptly investigate the disaster site within the legal period (about 10 days), report the damage status of facilities to the central government through the National Disaster Management System (NDMS), and then establish and implement a recovery plan.
If the damage exceeds the recovery cost support criteria of the central government, determined by the financial index of the local government, the central disaster safety headquarters will investigate directly and then judge whether or not to declare a special disaster area (MOIS, 2018).
Local government should register the damage states of facilities by natural disaster on the disaster register of the National Disaster Management System (NDMS), and all procedures in the disaster recovery phase for the disaster management such as the damage investigation of the disaster site, the damaged level analysis, the recovery cost calculation, and recovery implementation, etc. are managed through NDMS.
### Aerial photo capturing on Disaster site
In this study, rotary- and fixed-wing drones were utilized for damage investigation at five sites: 2 landslides, 2 floods, and 1 reservoir collapse. Aerial photographs of the 5 damaged areas were collected from the on-board optical cameras of the drones and post-processed by the drone mapping procedure to assess the quantitative damage level. We planned to capture aerial imagery at flight heights of 100-150 m. A total of 1110 images were acquired at the 5 damaged sites: 264 images at the 2 landslide sites, 579 images at the 2 flooded areas, and 267 images at the reservoir collapse site.
The aerial images collected from drones are as follows (Table 3).
\begin{table}
\begin{tabular}{|c|c|c|c|c|} \hline Acquisition time & Damaged site & Disaster type & Collected imagery & UAV type \\ \hline 11 July, 2018 & Mundeok & Landslide & 123 & Rotary \\ \hline & Hoichen & Reservoir collapse & 267 & Fixed wing \\ \hline 12 July, 2018 & Hwajuk & Landslide & 141 & Rotary \\ \hline & Miryek river & Flooding & 251 & Rotary \\ \hline & Ujeong river & Flooding & 328 & Rotary \\ \hline \end{tabular}
\end{table}
Table 3: Aerial imagery of damaged areas acquired from UAVs
Figure 4: Disaster investigation procedures in Korea
Figure 3: Procedure of drone mapping
### Natural disaster drone mapping
In hard-to-reach disaster areas, a damage investigation requiring time-consuming work such as field photographing and damage surveying with a tapeline or by eye is not easy to conduct within the short investigation period (about 10 days). It is therefore necessary to apply current drone mapping technology, which is capable of quickly capturing and mapping aerial photographs at inaccessible disaster sites.
Direct geo-referencing is a suitable process for rapidly creating accurate map products from UAV aerial imagery with minimal GCPs, and it is efficient for quickly producing up-to-date maps of areas that do not allow access, such as disaster areas. Direct geo-referencing in UAV photogrammetry measures the position and orientation of the on-board camera directly, so that each pixel can be geo-referenced to the Earth without GCPs. It requires precise location and attitude information for the on-board camera mounted on the gimbal of the UAV. The accuracy of the GNSS/INS mounted on the UAV, however, is not sufficient to use this information as provided, and a complementary task is necessary to improve the accuracy ([PERSON] et al., 2019).
The accuracy of mapping with small drones depends on various factors, such as the terrain of the mapping area, the precision of the drone's on-board sensors, the image overlap/sidelap rates, weather conditions, the flight speed and stability of the drone, and GNSS surveying conditions.
For mapping without GCPs, using the geo-tagged image information obtained in autonomous flight mode from the barometer and GNSS/INS mounted on the drone, the related literature indicates that the accuracy of mapping with small drones is expected to be about 1-2 times the GSD horizontally and 1-3 times the GSD vertically for a correct estimation model (Pix4D, 2019; [PERSON] et al., 2011). In addition, drone mapping with GCPs can improve the geo-referencing accuracy to cm level, although there are some differences depending on the distribution and number of GCPs and the accuracy of the GNSS surveying.
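That rule of thumb can be turned into a quick error-budget check for a given GSD (the 1-2 and 1-3 multipliers are taken from the cited sources; the 5 cm example value is an assumption):

```python
# Expected GCP-free accuracy envelope: about 1-2 x GSD horizontally
# and 1-3 x GSD vertically (per the cited literature). The 5 cm GSD
# below is a demo value, not a figure from this study.
def expected_error_cm(gsd_cm):
    horizontal = (1.0 * gsd_cm, 2.0 * gsd_cm)
    vertical = (1.0 * gsd_cm, 3.0 * gsd_cm)
    return horizontal, vertical

h, v = expected_error_cm(5.0)
print(h, v)   # (5.0, 10.0) cm horizontally, (5.0, 15.0) cm vertically
```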
In this study, the horizontal accuracy of the results was evaluated for drone mapping performed without GCPs in dangerous and difficult-to-reach disaster sites. For drone mapping of a mountainous region with irregular altitude variation at a flight altitude of 200 m, the maximum horizontal position error was about 9 m; in flat areas, the maximum error was \(1-2.3\) m.
The damage analysis for natural disasters is carried out using 3D map data (point clouds, DSM/DEM) and ortho-imagery, i.e. location-based GIS data derived using Pix4Dmapper and Agisoft PhotoScan.
### Natural disaster damage analysis
#### 3.3.1 Damage analysis by landslide
In the Mundeok area, where road facilities have been lost due to landslides and soil erosion, 89 drone photographs were taken and an ortho-image map with a spatial resolution of about 3 cm was generated through drone mapping. The total drone aerial mapping area was about 89,497.29 m\({}^{2}\), the landslide damage area on the image map was about 3,333.5 m\({}^{2}\), and the whole runoff length of the landslide was about 332.9 m. A mountainous area where a landslide has occurred can impose restrictions on drone mapping, because it is not only difficult to operate drones within line of sight, but also difficult to generate an accurate 3D terrain model due to occlusion under the dense, thick forest. It is also expected that landslide investigations in mountainous terrain, where the quality of drone mapping is difficult to guarantee due to irregular altitudes, can still qualitatively assess the overall damage situation and the environmental landslide-inducing factors of inaccessible damaged areas by using video sequences captured by drones.
#### 3.3.2 Damage analysis by reservoir collapse
A total of 267 drone aerial photographs were taken of the area where debris outflow occurred as a result of the collapse of a reservoir during the heavy rain, and an ortho-image map with a spatial resolution of about 4 cm was generated. The total drone aerial mapping area was about 1.5 km\({}^{2}\); the soil leakage along the river bank on the generated 3D terrain model and ortho-image map extends about 1.3 km, and the damaged area estimated from the runoff traces on the ortho-image map is about 24,248.8 m\({}^{2}\).
The decay length between the top of the reservoir was about 23.9m and the whole bank distance was about 129m on the 3D terrain model. As the damaged area due to disasters is wide and mapping coverage increases, the fixed wing with the longer
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline \multirow{2}{*}{Damaged sites} & Type of & RMS X & RMS Y \\ & disaster & [m] & [m] \\ \hline \multirow{2}{*}{Mundeok} & Landslide & 2.3 & 8.8 \\ \hline \multirow{2}{*}{Hoichen} & Reservoir & \multirow{2}{*}{1.1} & \multirow{2}{*}{3.2} \\ & collapse & & \\ \hline \multirow{2}{*}{Miryek} & Flooding & 2.2 & 3.5 \\ \hline \end{tabular}
\end{table}
Table 4: Geo-correction accuracy of small drone mapping
Figure 5: Small UAVs operation in disaster field
Figure 6: Landslide damage analysis using drone mapping
#### 3.3.3 Damage analysis by flood
Many river facilities were lost during prolonged heavy rainfall and floods, and the debris generated at this time ran off into agricultural areas. To investigate these areas, a total of 249 aerial photos were taken, from which a 3D DSM/DEM and an ortho-image map with a spatial resolution of about 2 cm were produced.
The drone mapping area was about 90,778.1 m\({}^{2}\), and the debris runoff area into agricultural land within the mapped area was analysed to be about 4,390.6 m\({}^{2}\).
Inundation and loss of facilities occurred mainly in agricultural areas and around river facilities such as bridges, river levees, bank-protection facilities, and access roads along the river bank. The quantitative flood damage analysis results (flooded area in farmland, levee leakage, collapse distance, etc.) can be used as objective investigation data for establishing natural disaster recovery plans.
## 4 Conclusions
Natural disasters, which are closely related to global weather, climate, and environmental factors, can be mitigated effectively by proactive preparation and prevention, in contrast to social hazards that are difficult to predict. Therefore, the development of promising technologies and far-reaching research on UAVs for disaster management are actively in progress, aiming to minimize the extent of damage through timely and effective preparation, prevention, response, and recovery.
In this paper, we suggest an investigation approach for disaster damage using commercial small drones at five disaster sites, including in-situ landslide, river flooding, and reservoir collapse areas, through photogrammetry-based drone mapping at flight altitudes of 100-200 m, and we perform a quantitative analysis of the extent of natural disaster damage. We draw the following conclusions:
First, we generated 3D terrain information and high-resolution ortho-imagery and then quantitatively analysed the degree of damage caused by natural disasters using commercial UAVs and drone mapping techniques. We also evaluated the mapping accuracy and work efficiency of drone mapping for disaster investigation by comparing it with the traditional investigation workflow, which depends on labour-intensive field survey. The maximum horizontal position error was about 9 m in mountainous areas of irregular height, whereas in flat areas the maximum error was 1-2.3 m.
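The roughly 9 m maximum horizontal error quoted above is consistent with combining the per-axis RMS values of Table 4 into a single horizontal error, RMS_XY = sqrt(RMS_X^2 + RMS_Y^2); a short sketch:

```python
import math

# Combined horizontal geo-correction error per site, from the per-axis
# RMS values of Table 4: RMS_XY = sqrt(RMS_X^2 + RMS_Y^2).
sites = {
    "Mundeok (landslide)":          (2.3, 8.8),
    "Hoichen (reservoir collapse)": (1.1, 3.2),
    "Miryek (flooding)":            (2.2, 3.5),
}

for name, (rms_x, rms_y) in sites.items():
    print(f"{name}: {math.hypot(rms_x, rms_y):.1f} m")
# Mundeok yields about 9.1 m, matching the ~9 m maximum quoted in the text.
```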
Second, ortho-image maps with a GSD of less than 5 cm, generated from aerial photographs acquired by UAVs at altitudes of 100-250 m, enabled us to check damage information such as destroyed facilities or traces of soil erosion around the flooded river and collapsed reservoir areas.
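The GSD at a given flight altitude follows from the camera geometry as GSD = pixel pitch × altitude / focal length. A sketch with hypothetical camera parameters (the paper does not specify the sensor used for this relation):

```python
def gsd_cm(altitude_m, focal_mm, pixel_um):
    """Ground sampling distance in centimetres for a nadir frame camera:
    GSD = pixel_pitch * altitude / focal_length (small-angle geometry)."""
    return (pixel_um * 1e-6) * altitude_m / (focal_mm * 1e-3) * 100.0

# Hypothetical camera: 8.8 mm focal length, 2.4 um pixel pitch.
for h in (100, 150, 200, 250):
    print(h, "m ->", round(gsd_cm(h, 8.8, 2.4), 1), "cm")
```

GSD scales linearly with altitude, which is why the altitude band flown (100-250 m) bounds the achievable map resolution.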
Finally, three-dimensional point cloud data of landslide-damaged areas enabled us to measure more accurately the width and depth of outflows caused by landslides, the soil runoff distance, and the landslide damage area.
Photogrammetry-based drone mapping for disaster damage investigation is expected to be an alternative approach that supports or replaces labour-intensive disaster-site surveying where the site must be investigated quickly.
## Acknowledgements
These research outputs are part of the project "Development of Forensic Investigation for Disaster Scene", supported by the NDMI (National Disaster Management Research Institute) under project number NDMI-MA-2019-05-01. The authors would like to acknowledge the financial support of the NDMI.
## References
* [1] [PERSON]. [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], 2011. The Accuracy of Automatic Photogrammetric Techniques on Ultra-Light Uav Imagery, International Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences, Vol. XXXVIII-1 / C22, pp. 125-130.
* [2] [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], 2019, Applicability of Drone Mapping for Natural Disaster Damage Investigation, _Journal of Korean Society for Geographical Information Science_, Vol.27 No.2 March 2019 pp.13-21
* [3] [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], 2018, Rapid Disaster Mapping through Data Integration from UAVs and Multi-sensors Mounted on Investigation Platforms of NDMI, Korea, _The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Volume XLII-3/W4, 2018, Geo-Information For Disaster Management (Gi4DM)_, 18-21 March 2018, Istanbul, Turkey.
Figure 8: Flood damage analysis using drone mapping
Figure 7: Flood damage analysis by reservoir collapse
[PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], s., 2015. Direct Geo-referencing on Small Unmanned Aerial Platforms for Improved Reliability and Accuracy of Mapping Without the Need for Ground Control Points, _International Archives of the Photogrammetry, Remote Sensing and Spatial Information Science, Vol. XL-1/W4, UAV-g 2015_, York University, Toronto, Canada.
Ministry of the Interior and Safety, 2018, Guidelines for Planning Natural Disaster Investigation and Recovery Plans 2018.
[PERSON], [PERSON] and [PERSON], 2017, Disaster Damage Detection Using Drone Aerial Images, In 2017 Joint Fall Conference proceedings of the Korean Society for Geo-spatial Information Science, pp. 213-214.
Pix4D, 2019, Accuracy of Pix4D Outputs, https://support.pix4d.com/hc/en-us/articles/202558889-Accuracy-of-Pix4D-outputs (last date accessed 05 March
[PERSON] [PERSON], [PERSON] [PERSON], [PERSON] [PERSON] 2013. A Micro-UAV with the Capability of Direct Georeferencing, _International Archives of the Photogrammetry, Remote Sensing and Spatial Information Science, Vol. XL-1/W2, UAV-g 2013_, Rostock, Germany.
[PERSON], [PERSON] [PERSON], [PERSON] [PERSON] 2013. Direct Georeferencing of Ultrahigh-Resolution UAV Imagery, _IEEE Transaction on Geoscience and Remote Sensing,_ 52(5): 2738-2745.
[PERSON] 2011, Taking Computer Vision Aloft - Archaeological Three-dimensional Reconstructions from Aerial Photographs with Photoscan, _Archaeological Prospection_, 18, 67-73.
Kim, S. S., Kim, T. H., Sim, J. S., 2019. Applicability Assessment of UAV Mapping for Disaster Damage Investigation in Korea. https://doi.org/10.5194/isprs-archives-xlii-3-w8-209-2019 (ISPRS, CC-BY).
# Leaf Area Index Estimation in Vineyards from UAV Hyperspectral Data, 2D Image Mosaics and 3D Canopy Surface Models

[PERSON]\({}^{a}\), [PERSON]\({}^{a}\), [PERSON]\({}^{b}\), [PERSON]\({}^{c}\)

\({}^{a}\) up2metric, Athens, Greece, ilias@up2metric.com, christos@up2metric.com

\({}^{b}\) Laboratory of Photogrammetry, Technological Educational Institute of Athens, Greece, [EMAIL_ADDRESS]

\({}^{c}\) Remote Sensing Lab., National Technical University of Athens, [EMAIL_ADDRESS]
###### Abstract
The indirect estimation of leaf area index (LAI) at large spatial scales is crucial for several environmental and agricultural applications. To this end, in this paper we compare and evaluate LAI estimation in vineyards from different UAV imaging datasets. In particular, canopy levels were estimated from (i) hyperspectral data, (ii) 2D RGB orthophotomosaics and (iii) 3D crop surface models. The computed canopy levels have been used to establish relationships with the measured LAI (ground truth) from several vines in Nemea, Greece. The overall evaluation indicated that the estimated canopy levels were correlated (\(r^{2}>\)73%) with the in-situ, ground truth LAI measurements. As expected, the lowest correlations were derived from the calculated greenness levels of the 2D RGB orthomosaics. The highest correlation rates were established with the hyperspectral canopy greenness and the 3D canopy surface models. For the latter, the accurate detection of canopy, soil and other materials in between the vine rows is required. All approaches tend to overestimate LAI in cases of sparse, weak, unhealthy plants and canopy.
This contribution has been peer-reviewed. doi:10.5194/isprsarchives-XL-1-W4-299-2015
## 1 Introduction
Biomass and leaf area index (LAI) are important variables in many ecological, environmental and agricultural applications. Accurate estimation of biomass is required for carbon stock accounting and monitoring, while LAI, which is defined as the one half of the total leaf area per unit ground surface area, controls many biological and physical processes in the water, nutrient and carbon cycle. These key crop parameters are frequently used to assess crop health status, nutrient supply and effects of agricultural management practices [[PERSON] et al., 2013], [[PERSON] et al., 2014].
In particular, for precision agriculture applications LAI is associated with agronomic, biological, environmental, and physiologic processes, which are related to growth analysis, photosynthesis, transpiration, interception of radiation, and energy balance [[PERSON] et al., 2000], [[PERSON] et al., 2004], [[PERSON] et al., 2012], [[PERSON] et al., 2013], [[PERSON] et al., 2015]. It is also one of the most relevant indices applied to experimentation, even for crop yield prediction and water balance modelling in the soil-water-atmosphere system [[PERSON] et al., 2014].
Direct methods are the most precise, but they have the disadvantage of being extremely time-consuming, making large-scale implementation only marginally feasible. Precision problems may in this case result from the definition of LAI, the scaling-up method, or from error accumulation due to frequently repeated measurements. Because direct LAI estimation is the most precise, it is often used to calibrate indirect measurement techniques. Indirect optical observations of LAI can be well correlated with vegetation indices like NDVI for single plant species grown under uniform conditions. However, for mixed, dense and multi-layered canopies, these indices have non-linear relationships and can only be employed as proxies for crop-dependent vegetation parameters such as fractional vegetation cover, LAI, albedo and emissivity.
Recent advances in remote sensing and photogrammetry have combined 3D measurements with rich spectral information, yielding unprecedented capabilities for observing crops, biodiversity and ecosystem functioning. Manned and unmanned aerial systems continue to attract important research and development efforts, as well as market share, for several geospatial applications due to their decreasing cost and increasing reliability. In particular, for precision agriculture applications, many studies go beyond estimating a standard NDVI map and aim to build consistently calibrated models and validate them against accurate estimates of crop LAI. The estimation of canopy volume through the calculation of 3D models and other metrics of vertical structure has already been employed in several studies for estimating aboveground biomass and carbon density, biomass change and LAI [[PERSON] and [PERSON], 2013], [[PERSON] et al., 2015]. While conventional airborne LiDAR acquisitions have become less expensive over time, they remain very costly for researchers and other end-users, especially if required at high spatial resolution over a few small areas or at high temporal frequencies [[PERSON] and [PERSON], 2013].
In this paper, the estimation of crop leaf area index is performed based on three different imaging datasets acquired from an unmanned aerial vehicle (UAV). Hyperspectral data, 2D RGB image mosaics and 3D crop surface models have been used to establish relationships with the LAI measured on the ground for several vine crops in Nemea, Greece. The overall evaluation indicated that the joint use of both the hyperspectral data and the crop surface model resulted in the highest correlation with the ground truth. The quite promising experimental results indicate that hyperspectral sensors along with low-cost RGB cameras can provide high spatial and rich spectral information for estimating key crop parameters accurately.
## 2 Materials and Method
**Nemea study area:** Our experiments were performed in the study area of Nemea, located in the north-east of the Peloponnese, where the Agiorgitiko variety dominates red winemaking. In particular, Nemea Agiorgitiko is the grape allowed to use the Nemea appellation (PDO Nemea). During this study we focused on vineyards near the semi-mountainous village of Asprokambos at an altitude of about 700 m above sea level. Aerial and concurrent field campaigns were conducted with a low-cost standard RGB camera, a push-broom hyperspectral sensor and a portable spectroradiometer.
**Aerial campaign:** An aerial campaign with an unmanned aerial vehicle (Figure 1) was conducted on the \(3^{rd}\) of August 2014 at the Nemea study area. A multicopter (OnyxStar BAT-F8, Altigator, Belgium) with electronic controllers and navigation systems (BL-Ctrl V2.0, Navi-Ctrl v2.0, Mikrokopter, Germany) equipped with:
* a push-broom hyperspectral VNIR imaging sensor (Micro-Hyperspec A-Series 380 nm-1000 nm, Headwall Photonics, USA)
* a low-cost standard RGB camera (_i.e.,_ GoPro Hero3)
was employed. The sensors were mounted and stabilized through a camera gimbal (AV200, PhotoHigher, New Zealand). The hyperspectral sensor was connected through a frame grabber to a custom-made lightweight mini-ITX computer with low power consumption (Figure 1). A GoPro Hero3+ Black Edition was also concurrently on board the UAV, delivering video and images at certain time intervals.
**Field campaign:** Along with the aerial campaign, an intensive field campaign was conducted in order to collect reference/ground truth data including the precise location and variety of each parcel, vineyard or vine row [10]. Existing maps with geographic information and varietal plantation were verified or updated during field surveys. In-situ reflectance measurements were performed using the GER 1500 (Spectra Vista Corporation, USA) portable spectroradiometer, which provides spectra with 512 spectral bands distributed in the spectral region from 350 nm to 1050 nm with 3.2 nm FWHM. The position of each measurement was recorded using a portable GPS. Moreover, at certain locations with vigorous and non-vigorous plants, LAI was assessed directly by a non-destructive precise counting of all leaves per vine. In particular, after collecting the aerial and ground reflectance data, the mean leaf area was estimated along with the number of leaves per sampling location.

Figure 2: The Green-Red Vegetation Index (GRVI) was calculated on the detected canopy from the aerial RGB orthomosaic.
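By definition, the directly assessed LAI per sampling location is the total one-sided leaf area (mean leaf area × leaf count) divided by the ground surface area per vine. A sketch with hypothetical numbers, not the paper's measurements:

```python
def direct_lai(mean_leaf_area_cm2, n_leaves, ground_area_m2):
    """LAI = total one-sided leaf area per unit ground surface area."""
    total_leaf_m2 = mean_leaf_area_cm2 * 1e-4 * n_leaves  # cm^2 -> m^2
    return total_leaf_m2 / ground_area_m2

# Hypothetical vine: 1200 leaves of 120 cm^2 mean area over 6 m^2 of ground.
print(round(direct_lai(120, 1200, 6.0), 2))  # 2.4
```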
**Automatic aerial image orientation:** An indispensable step for the generation of both the 2D RGB orthomosaic and the 3D canopy model is the estimation of image orientations and camera calibration. This is performed via an automatic image-based framework. In a first step, all GoPro views are corrected for severe radial distortion effects due to the fish-eye lens. Following a hierarchical image orientation process (structure from motion, SfM), all available images are relatively oriented and optimally calibrated. This procedure incorporates 2D feature extraction and matching among images, outlier detection for the elimination of false point correspondences, orientation initialization through closed-form algorithms, and a final self-calibrating bundle adjustment. It should be noted that all these steps are applied at successive image scales in order to handle the large number of high-resolution images effectively. The resulting orthophotomosaic from the collected aerial RGB images is shown in Figure 2a.
**3D canopy model:** Once all aerial images are oriented, a dense point cloud is generated by employing dense stereo and multi-image matching algorithms. The 3D point cloud is then converted to a 3D model (3D mesh) through 3D triangulation and finally to a DSM by keeping the highest elevation for every planimetric ground position. Appropriate texture is also computed for each 3D triangle via a multi-view algorithm, using a weighted blending scheme. The resulting DSM from the collected aerial RGB images is shown in Figure 3a. In order to estimate the volume of the canopy more precisely, the soil between the vine rows was detected. Having detected both the canopy and the soil in 2D, the DTM was estimated based on a morphological reconstruction approach. The 3D model of the detected canopy was then calculated after projecting the canopy height estimated from the DSM onto the DTM (Figure 3b).
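The "highest elevation for every planimetric ground position" rasterization step can be sketched as follows (a naive gridding for illustration, not the authors' implementation):

```python
import numpy as np

def points_to_dsm(xyz, cell=0.25):
    """Rasterize a point cloud to a DSM, keeping the highest Z per grid cell.
    xyz: (N, 3) array of x, y, z. Empty cells are NaN."""
    x, y, z = xyz[:, 0], xyz[:, 1], xyz[:, 2]
    col = ((x - x.min()) / cell).astype(int)
    row = ((y - y.min()) / cell).astype(int)
    dsm = np.full((row.max() + 1, col.max() + 1), np.nan)
    for r, c, h in zip(row, col, z):
        if np.isnan(dsm[r, c]) or h > dsm[r, c]:
            dsm[r, c] = h
    return dsm

pts = np.array([[0.1, 0.1, 2.0], [0.2, 0.1, 5.0], [0.6, 0.6, 1.0]])
dsm = points_to_dsm(pts, cell=0.5)
# Cell (0, 0) receives two returns and keeps the highest (5.0).
```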
**2D RGB imaging mosaics:** Combining the oriented image set with the reconstructed DSM of the vineyard, a 2D orthomosaic is produced by a multi-image algorithm based on automatic visibility checking and texture blending that can compensate for different orientations, scales and resolutions of the images involved. The resulting orthomosaic from the collected aerial RGB images is shown in Figure 2a.
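The GRVI shown in Figure 2 is the normalized green-red difference, (G − R)/(G + R), computed per pixel from the RGB orthomosaic. A minimal sketch on toy band values:

```python
import numpy as np

def grvi(green, red):
    """Green-Red Vegetation Index: (G - R) / (G + R), per pixel.
    A small epsilon avoids division by zero on dark pixels."""
    g = green.astype(float)
    r = red.astype(float)
    return (g - r) / (g + r + 1e-12)

# Toy 2x2 bands: vegetation pixels have G > R, bare soil tends to R >= G.
G = np.array([[120, 40], [100, 60]])
R = np.array([[60, 80], [50, 60]])
print(np.round(grvi(G, R), 2))
```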
**2D hyperspectral canopy greenness:** Due to the movement and vibrations of the UAV platform, the raw hyperspectral data acquired from the push-broom sensor were highly distorted. In order to perform a rough geometric correction, every single scan line was aligned with the preceding one via a 1D transformation that minimized their intensity differences. In particular, each scanline is shifted (upwards and downwards) relative to the preceding one over a range of different displacements in steps of one pixel (e.g. from -20 to 20 pixels). At each discrete displacement, a cost is computed as the sum of the absolute intensity differences between the current scanline and the previous one. Following a winner-takes-all (WTA) scheme, the displacement with the minimum cost is chosen and applied to the selected scanline. The same procedure is repeated for every consecutive scanline, and a final, roughly undistorted 2D hyperspectral mosaic is generated. All the above computations for estimating the required displacements are performed on a (narrow) colour composite which resembles a standard RGB image, and the final estimated shifts are then applied to the entire hypercube.
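The scanline-by-scanline WTA alignment described above can be sketched as follows; `np.roll` is used here as a circular stand-in for the unspecified border handling, and the toy strip is hypothetical:

```python
import numpy as np

def align_scanlines(img, max_shift=20):
    """Roughly undistort a push-broom strip: shift each scanline (row) so
    that the sum of absolute intensity differences to the previous row is
    minimal (winner-takes-all over integer shifts in [-max_shift, max_shift])."""
    out = img.astype(float).copy()
    for i in range(1, out.shape[0]):
        prev, cur = out[i - 1], out[i]
        best_shift, best_cost = 0, np.inf
        for s in range(-max_shift, max_shift + 1):
            cost = np.abs(np.roll(cur, s) - prev).sum()
            if cost < best_cost:
                best_cost, best_shift = cost, s
        out[i] = np.roll(cur, best_shift)
    return out

# Toy strip: the second scanline is the first one displaced by 3 pixels.
row = np.array([0, 0, 0, 10, 10, 10, 0, 0, 0, 0], dtype=float)
img = np.vstack([row, np.roll(row, 3)])
aligned = align_scanlines(img, max_shift=5)
```

In the paper the shifts are estimated on a narrow RGB-like composite and then applied to all bands of the hypercube; the same loop applies unchanged per band.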
Moreover, at the same locations as the in-situ reflectance data, the relationship with the aerial hyperspectral data was estimated.
Figure 4: The estimated canopy greenness map based on the calculation of a narrow NDVI from the UAV hyperspectral data
Figure 3: The estimated DSM and 3D model of the canopy derived from the aerial imagery and the low-cost standard RGB lightweight camera.
The high correlation rate (\(r^{2}>\)94%) indicated the consistency of the acquired dataset [10]. The narrow NDVI was calculated from the hyperspectral data, and through a further classification the different canopy greenness levels were estimated, which can be associated with vegetative canopy vigour, biomass, leaf chlorophyll content, canopy cover and structure. The resulting canopy greenness map is shown in Figure 4.
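The narrow NDVI is the usual (NIR − Red)/(NIR + Red) ratio computed from the two hypercube bands closest to chosen red and NIR wavelengths; the 670/800 nm defaults and the toy cube below are illustrative assumptions, not the paper's exact band choice:

```python
import numpy as np

def narrow_ndvi(cube, wavelengths, red_nm=670.0, nir_nm=800.0):
    """Narrow-band NDVI from a hypercube (rows, cols, bands): pick the
    bands closest to the requested red/NIR wavelengths, then (NIR-R)/(NIR+R)."""
    wl = np.asarray(wavelengths, dtype=float)
    r = cube[..., np.argmin(np.abs(wl - red_nm))].astype(float)
    n = cube[..., np.argmin(np.abs(wl - nir_nm))].astype(float)
    return (n - r) / (n + r + 1e-12)

# Toy 1x2 pixel cube with bands at 550, 670 and 800 nm.
cube = np.array([[[30, 20, 80], [30, 50, 55]]])
ndvi = np.round(narrow_ndvi(cube, [550, 670, 800]), 2)
print(ndvi)
```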
## 3 Experimental Results and Evaluation
Aerial and in-situ data were collected during the veraison period at the Nemea study area. Aerial data were collected from a multicopter with a low-cost standard RGB camera (i.e., GoPro Hero3), a push-broom hyperspectral sensor, a lightweight single-board computer and a frame grabber (Figure 1). The goal was to benchmark the estimation of LAI from the acquired hyperspectral data, the RGB orthomosaic and the 3D canopy model against the in-situ LAI measurements.
The resulting orthomosaic from the collected aerial RGB images and the calculated GRVI index on the detected canopy are shown in Figure 2. The resulting DSM from the aerial dataset, with texture from the RGB orthomosaic, and the calculated 3D canopy model after estimation of the soil in between the vine rows are shown in Figure 3. The estimated canopy greenness map based on the calculation of a narrow NDVI from the UAV hyperspectral data is shown in Figure 4.
For the quantitative evaluation, firstly the relations between the estimated canopy levels of the 2D RGB mosaic and (a) the 3D canopy, (b) the hyperspectral map were examined (Figure 5). The correlation between the calculated GRVI (from the 2D RGB orthomosaic) and the canopy greenness from the hyperspectral data was relatively high, above 84%, while that between the calculated GRVI and the 3D canopy was lower, at approximately 79%. The highest relations (\(r^{2}>90\%\)) were established between the estimations from the hyperspectral data and the 3D canopy model.

Figure 5: The relation between the estimated canopy levels between the 2D RGB mosaic and (a) the 3D canopy (b) the hyperspectral map.

Figure 6: The relation between the calculated LAI (ground truth, GT) and the estimated canopy (a) from the 2D GRVI map, (b) from the 3D model and (c) from the hyperspectral map.
Regarding the relations against the ground truth (direct, in-situ LAI measurements), the experimental results followed a similar pattern (Figure 6). The LAI estimation from the hyperspectral data and the 3D canopy model resulted in higher correlation rates (\(r^{2}>80\%\)), while those from the 2D RGB orthomosaic were relatively lower (\(r^{2}<73\%\)).
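The reported r² values are coefficients of determination of least-squares fits between an estimated canopy level and the ground-truth LAI; a sketch with hypothetical data:

```python
import numpy as np

def r_squared(x, y):
    """Coefficient of determination of the least-squares line y ~ a*x + b."""
    a, b = np.polyfit(x, y, 1)
    resid = y - (a * x + b)
    ss_res = np.sum(resid ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

# Hypothetical samples: estimated canopy level vs. in-situ LAI.
canopy = np.array([0.20, 0.35, 0.50, 0.62, 0.80])
lai = np.array([0.90, 1.40, 2.10, 2.50, 3.30])
print(round(r_squared(canopy, lai), 3))
```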
The aforementioned results indicate that LAI was estimated more accurately from the hyperspectral data and the 3D canopy model. It should be noted that for the hyperspectral data just a standard narrow NDVI was employed, while more sophisticated indices might have correlated better in terms of chlorophyll concentrations, etc. Both datasets seem to fail more in cases with lower LAI values over sparse, weak, unhealthy plants and canopy.
## 4 Conclusion and Future Perspectives
In this paper, LAI estimation from three different UAV-based imaging sources was validated against direct, in-situ LAI measurements. In particular, canopy levels were estimated from (i) hyperspectral data, (ii) 2D RGB orthomosaics and (iii) 3D crop surface models. The computed canopy levels have been used to establish relationships with the measured LAI (ground truth) from several vines in Nemea, Greece. The overall evaluation indicated that the estimated canopy levels were correlated (\(r^{2}>\)73%) with the ground truth. Among the different observations, the hyperspectral data and the 3D model established the highest relations, while, as expected, the lowest correlations against the ground truth data were derived from the calculated greenness levels of the 2D RGB orthomosaics. The experimental results and the evaluation indicated that the leaf area index in vineyards can be approximated from both hyperspectral sensors and 3D canopy models. For the latter, the accurate detection of canopy, soil and other materials in between the vine rows is required. Further validation in several vineyards, vine varieties and other crop types is required in order to conclude on the optimal, efficient and cost-effective manner for LAI estimation from UAVs.
## References
* [PERSON] et al. (2015) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON] and [PERSON] [PERSON], 2015. Comparative analysis of different retrieval methods for mapping grassland leaf area index using airborne imaging spectroscopy. International Journal of Applied Earth Observation and Geoinformation.
* [PERSON] et al. (2004) [PERSON], [PERSON], [PERSON], [PERSON] and [PERSON], 2004. Hyperspectral vegetation indices and novel algorithms for predicting green LAI of crop canopies: Modeling and validation in the context of precision agriculture. Remote Sensing of Environment 90(3), pp. 337-352.
* [PERSON] et al. (2013) [PERSON], [PERSON] [PERSON], [PERSON] [PERSON], [PERSON] and [PERSON], 2013. Evaluating Spectral Indices from WorldView-2 Satellite Data for Selective Harvesting in Vineyards. In: 9th European Conference on Precision Agriculture.
* [PERSON] et al. (2015) [PERSON] [PERSON], [PERSON] and [PERSON], 2015. Spectral Discrimination and Reflectance Properties of Various Vine Varieties from Satellite, UAV and Proximate Sensors. In: The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Volume XL-7/W3, 36th International Symposium on Remote Sensing of Environment.
* [PERSON] et al. (2012) [PERSON], [PERSON] and [PERSON], [PERSON], 2012. Assessment of vegetation indices for regional crop green LAI estimation from landsat images over multiple growing seasons. Remote Sensing of Environment 123(0), pp. 347-358.
Kalisperakis, I., Stentoumis, Ch., Grammatikopoulos, L., Karantzalos, K., 2015. Leaf Area Index Estimation in Vineyards from UAV Hyperspectral Data, 2D Image Mosaics and 3D Canopy Surface Models. https://doi.org/10.5194/isprsarchives-xl-1-w4-299-2015 (ISPRS, CC-BY).
|
# Accuracy Assessment of Crown Delineation Methods for Individual Trees Using LiDAR Data
[PERSON]\({}^{a}\), [PERSON]\({}^{b}\), [PERSON]\({}^{c}\), [PERSON]\({}^{d}\)

\({}^{a}\) Corresponding author. Associate Professor, Dept. of Civil Eng. and Environmental Informatics, Ming Hsin University of Science and Technology, Hsinchu County 30401, Taiwan, [EMAIL_ADDRESS]

\({}^{b}\) Professor, Dept. of Forestry and Natural Resources, National Chiayi University, Chiayi, Taiwan, [EMAIL_ADDRESS]

\({}^{c}\) Assistant Professor, Dept. of Environmental Information and Engineering, Chung Cheng Institute of Technology, National Defense University, Taoyuan County, Taiwan, [EMAIL_ADDRESS]

\({}^{d}\) CEO, LiDAR Technology Co., Ltd., Hsinchu County 30274, Taiwan, [EMAIL_ADDRESS]
###### Abstract
Forest canopy density and height are used as variables in a number of environmental applications, including the estimation of biomass, forest extent and condition, and biodiversity. Airborne Light Detection and Ranging (LiDAR) is very useful for estimating forest canopy parameters from the generated canopy height models (CHMs). The purpose of this work is to introduce an algorithm to delineate crown parameters, e.g. tree height and crown radii, based on the generated rasterized CHMs. An accuracy assessment of the extracted volumetric parameters of single trees is also performed via manual measurement using corresponding aerial photo pairs. A LiDAR dataset of a golf course acquired by a Leica ALS70-HP is used in this study. Two algorithms, i.e. a traditional one based on the subtraction of a digital elevation model (DEM) from a digital surface model (DSM), and a pit-free approach, are first used to generate the CHMs. Then two algorithms, a multilevel morphological active-contour (MMAC) and a variable window filter (VWF), are implemented and used in this study for individual tree delineation. Finally, the experimental results of the two automatic estimation methods for individual trees are evaluated against manually measured stand-level parameters, i.e. tree height and crown diameter. The CHM generated by a simple subtraction is full of empty pixels (called "pits") that have a critical impact on the subsequent analysis for individual tree delineation. The experimental results indicated that more individual trees can be extracted and tree crown shapes become more complete in the CHM data after the pit-free process.
Keywords: Forest, Canopy, LiDAR, Pit-free, Stand-level

This contribution has been peer-reviewed.
## 1 Introduction
A LiDAR system comprises multiple subsystems, namely a global positioning system (GPS), an inertial measurement unit, and a laser scanner. Currently, the large volumes of spatial data collected in a short period using LiDAR systems are typically employed for quantitative analyses and modeling in studies on geology, coastal erosion, and geomorphology. Airborne LiDAR technology can be used to collect multiple laser returns at pulse repetition rates of up to 500 kHz. The positional accuracy of the resulting laser pulse return is typically at the decimeter level. The standard products of an airborne LiDAR survey thus include all points, ground points, digital surface models (DSMs), and digital elevation models (DEMs). Two types of airborne LiDAR systems are currently available: full-waveform (FW) and discrete-echo LiDAR. In discrete-echo LiDAR systems, the return signal is filtered to export multiple echoes. For each transmitted laser pulse, only three to seven echoes are typically used to record the intensity and three-dimensional coordinates. FW LiDAR systems, however, can record the entire waveform for each transmitted laser pulse. A waveform typically comprises responses per nanosecond, so a maximum of 255 echoes can be obtained and an unprecedented level of information can be preserved. Additional land information and high-density point cloud data can be acquired, improving the accuracy of digital terrain models (DTMs). Because the reflected waveforms are affected by the type of material and the properties of the detected land objects, the waveform features can also be employed to analyze the characteristics associated with land cover change. LiDAR systems have been applied in close-range and aerial topographical surveys, to classify vegetation in forested areas, and to map disasters ([PERSON], 1999).
LiDAR technology is well suited to estimating forest canopy structures or individual crown structure ([PERSON] et al., 2011; [PERSON] & [PERSON], 2013). In the single-tree-based approach, canopy parameters of individual trees can be extracted from canopy height models (CHMs). Apart from tree height, parameters such as crown width (CW), crown base height, and crown projected area (CPA) can also be estimated in the process ([PERSON] & [PERSON], 2005; [PERSON], 2007; [PERSON] et al., 2010; [PERSON] et al., 2011).
The purpose of this work is to perform a comparison of different CHM generation methods for the subsequent extraction of volumetric parameters of a single tree, either directly or indirectly. Two algorithms, i.e. the subtraction of a digital elevation model (DEM) from a digital surface model (DSM), and a pit-free approach, are first used to generate the CHMs. Then a multilevel morphological active-contour (MMAC) algorithm is used for individual tree delineation. Finally, the rasterized CHMs derived from the two algorithms are compared with measured stand-level parameters, i.e. tree height and crown width (CW).
## 2 Methodology
### CHM Generation
The simplest CHM generation method is the subtraction of a digital elevation model (DEM) from a digital surface model (DSM). However, there are several improvements on this, e.g. searching for the highest LiDAR return in each grid cell, replacing each LiDAR return with a small disk, using TIN interpolation, and the pit-free method developed by [PERSON] et al. (2014).
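The simple subtraction method amounts to an element-wise CHM = DSM − DEM on co-registered rasters (with small negative residuals clamped to zero); a toy sketch with hypothetical 3x3 grids:

```python
import numpy as np

# Simplest CHM: per-cell subtraction of the DEM (bare ground) from the DSM
# (first-return surface); toy 3x3 grids in metres, co-registered.
dsm = np.array([[12.0, 15.5, 11.0],
                [13.2, 18.0, 12.4],
                [10.1, 10.2, 10.0]])
dem = np.array([[10.0, 10.5, 10.0],
                [10.2, 10.5, 10.4],
                [10.1, 10.2, 10.0]])
chm = np.clip(dsm - dem, 0.0, None)  # clamp negative artefacts to 0
print(chm)
```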
A simple improvement can be obtained by replacing each LiDAR return with a small disk. After all, depending on the flying height, the laser beam has a diameter of 10 to 50 centimeters, and approximating this with a single point of zero area seems overly conservative. However, this produces increasingly smooth CHMs with widening tree crowns: it "splats" the LiDAR returns into circles that grow larger than the laser beam diameter and thus have less and less in common with reality.
Another popular approach avoids this by interpolating all first returns with a triangulated irregular network (TIN) and then rasterizing it onto a grid to create the CHM. The result has no more empty pixels but is full of pits, because many laser pulses manage to penetrate deep into the canopy before producing the first return. When combining multiple flight lines, some laser pulses may even have an unobstructed view of the ground under the canopy without hitting any branches. These "pits" and how to avoid them are discussed at length in the September 2014 edition of the ASPRS PE&RS journal in a paper by [PERSON] et al. LAStools, developed by rapidlasso GmbH, implements these ideas as follows: first, the ".highest" gridding and TIN interpolation are combined; next, the points are expanded into circles with a diameter of 5 centimeters to account for the laser beam diameter by adding the option "-subcircle 0.025" to the "lasthin" tool in LAStools ([PERSON], 2014).
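The two building blocks of the simple method, ".highest" gridding and DSM minus DEM subtraction, can be sketched in a few lines. This is an illustrative NumPy toy (the function names and the tiny grid are ours, not part of LAStools):

```python
import numpy as np

def highest_return_dsm(points, cell=1.0, shape=(4, 4)):
    """Grid a point cloud by keeping the highest return (z) per cell.

    points: array of (x, y, z); cell: grid resolution in metres.
    Empty cells stay NaN, mimicking the '.highest' gridding.
    """
    dsm = np.full(shape, np.nan)
    for x, y, z in points:
        r, c = int(y // cell), int(x // cell)
        if np.isnan(dsm[r, c]) or z > dsm[r, c]:
            dsm[r, c] = z
    return dsm

def simple_chm(dsm, dem):
    """Simple-subtraction CHM: canopy height = DSM - DEM."""
    return dsm - dem
```

Cells that no return falls into remain NaN, which is exactly the "empty pixel" problem the TIN interpolation and pit-free refinements address.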
### MMAC Algorithm
The MMAC algorithm combines a multi-level morphological approach with the active contour model described previously. The algorithm proposed in this study is based on mathematical morphology (MM) concepts and uses an adapted watershed segmentation technique. The MMAC algorithm comprises three steps. The first step uses bottom-up erosion (BUE) to process CHM data and locate the stand candidates within a forested area. In the MMAC algorithm, stand candidates are identified by an iterative process in which the image data is successively eroded from the bottom up until the highest points are finally determined. The second step uses a top-down dilation (TDD) technique to estimate tree crown periphery points by growing outwards from the stand candidates' central points. The third step uses an active contour model (ACM) to modify the periphery points and delineate the contours of the tree crown boundary ([PERSON] et al., 2011; [PERSON] & [PERSON], 2013).
## 3 Case Study
### Study area
The test area is located at a golf course in Hsinchu County. A Leica ALS 70-HP scanner was used in the experiment. The scanning date is Oct. 17, 2013, the flight altitude 1295 m, the pulse rate 210 kHz, and the point density 2.483 p/m\({}^{2}\). The acquired point cloud covering the study area is shown in Figure 1. Two test areas (300 m x 300 m) were clipped from the original point cloud for the subsequent analysis. The clipped point clouds for test areas #1 and #2 are shown in Figure 2.
To test the accuracy of different individual crown delineation (ICD) methods, stereo measurement in the aerial image pairs was used to derive the volumetric parameters of 27 trees as reference data in test area #1, as shown in Figure 3. In the figure, red points represent the horizontal positions of the trees.
Figure 1: The point cloud covering the study area
Figure 2: Two test areas in the golf course area bounded by the black line.
### Test results
#### 3.2.1 Results for the early study
The TreeTop application developed in the Web-LiDAR forest inventory project was used to perform individual tree delineation for the CHM comparison ([PERSON] et al., 2015). Web-LiDAR was developed to support LiDAR-based forest inventory and management at Eglin Air Force Base (AFB), Florida, USA ([PERSON] et al., 2014). The ICD algorithm, called VWF, uses a threshold to find treetops and a variable window in which the tree crown width (TCW) is a function of tree height ([PERSON], 2004). The individual tree detection results for the simple subtraction CHM and for the pit-free CHM are shown in Figures 4 and 5, respectively. Both show too many duplicated treetops.
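The VWF idea, namely that a treetop is a local maximum within a window whose size grows with the cell's height, can be sketched as follows. The linear height-to-window relation (coefficients a, b) is a hypothetical placeholder, not the fitted crown-width equation of [PERSON] (2004):

```python
import numpy as np

def vwf_treetops(chm, min_height=2.0, a=0.1, b=1.0):
    """Detect treetops with a variable window filter (VWF).

    A cell is a treetop if it is the maximum within a square window
    whose radius grows with the cell's height: radius = a*h + b (cells).
    The linear coefficients a and b are illustrative placeholders.
    """
    rows, cols = chm.shape
    tops = []
    for r in range(rows):
        for c in range(cols):
            h = chm[r, c]
            if h < min_height:      # threshold to skip non-tree cells
                continue
            rad = int(round(a * h + b))
            win = chm[max(0, r - rad):r + rad + 1,
                      max(0, c - rad):c + rad + 1]
            if h >= win.max():      # local maximum within its own window
                tops.append((r, c))
    return tops
```

Because tall cells search a wider neighborhood, a tall tree suppresses nearby lower maxima; the duplicated treetops reported above arise when pits or noise break this suppression.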
#### 3.2.2 Results for the MMAC method
Based on mathematical morphology (MM) concepts and an adapted watershed segmentation, the MMAC method was also implemented in this study. The best results, shown in Figure 6, were derived using pit-free CHMs (points replaced by 10 cm circles) in the experiments. In Figure 6, the duplicated treetops that appeared in the VWF results have been significantly eliminated by the MMAC method. However, there are 9 trees that could not be found using the MMAC method, shown as brown circles in the figure. The accuracy of the method is about 67%, though it still needs to be improved. Moreover, a poor result, shown in Figure 7, is obtained when the simple subtraction CHM is used for the MMAC method. This indicates that "holes" in unsmoothed CHM data produce artifacts, which cause fewer treetops to be found ([PERSON] et al., 2011). More quantitative evaluations of the methods will be performed in future research.
## 4 Conclusions
Two individual tree delineation algorithms, namely a multilevel morphological active-contour (MMAC) and a variable window filter (VWF), were evaluated in this work using reference data derived from stereo measurement in aerial image pairs. The extraction results indicate that too many duplicated treetops are found by the VWF method, even when a smoothing procedure is applied using pit-free CHM data. However, the duplicated treetops appearing in the VWF results are significantly eliminated by the MMAC method. The accuracy of the MMAC method is 67%, showing that "holes" in unsmoothed CHM data cause more crown fragmentation and thus reduce the accuracy of individual crown delineation. More detailed comparisons and field verification will be given in future research.
## 5 Acknowledgments
The authors thank the Ministry of Science and Technology, Taiwan, for financial support (Grant no. MOST 104-2119-M-159-001).
## References
* [PERSON] and [PERSON] (2005) [PERSON] and [PERSON], 2005. Estimating forest biomass using small footprint LiDAR data: An individual tree-based approach that incorporates training data, _ISPRS J. Photogramm. Remote Sens._, vol. 59, no. 6, pp. 342-360.
* [PERSON] et al. (2011) [PERSON], [PERSON] and [PERSON], 2011. A system for the estimation of single-tree stem diameter and volume using multireturn LiDAR data, _IEEE Trans. Geosci. Remote Sens._, vol. 49, no. 7, pp. 2479-2490, Jul. 2011.
* [PERSON] (2014) [PERSON], 2014. Rasterizing Perfect Canopy Height Models from LiDAR, Access via URL:rapidlasso.com/2014/11/04/rasterizing-perfect-canopy-height-models-from-lidar/
* [PERSON] et al. (2014) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], 2014. Generating pit-free Canopy Height Models from Airborne LiDAR, _Photogrammetric Engineering & Remote Sensing_, Vol. 80, No. 9, pp. 863-872.
* [PERSON] et al. (2010) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], and [PERSON], 2010. Estimating stem volume and biomass of Pinus koraiensis using LiDAR data, _J. Plant Res._, vol. 123, no. 4, pp. 421-432.
* [PERSON] et al. (2011) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON] and [PERSON], Ming-Shein, 2011. A Multi-level Morphological Active Contour Algorithm for Delineating Tree Crowns in Mountainous Forest, _Photogrammetric Engineering & Remote Sensing_, Vol. 77, No. 3, pp. 241-249.
* [PERSON] and [PERSON] (2004) [PERSON] and [PERSON], 2004. Seeing the trees in the forest: using lidar and multispectral data fusion with local filtering and variable window size for estimating tree height. Photogrammetric Engineering & Remote Sensing, 70(5):589-604.
* [PERSON] (2007) [PERSON], [PERSON] 2007. Estimating biomass of individual pine trees using airborne LiDAR, _Biomass Bioenergy_, vol. 31, no. 9, pp. 646-655.
* [PERSON] et al. (2014) [PERSON], [PERSON], [PERSON], 2014. A Tutorial for \" Web-LiDAR forest inventory: TreeTop application\", Department of Defense Strategic Environmental Research and Development Program: Patterns and processes: monitoring and understanding plant diversity in frequently burned longleaf pine landscapes, PI: [PERSON]; Co-PIs: [PERSON], [PERSON], [PERSON].
* an introduction and overview,\" _ISPRS J. Photogramm. Remote Sens._, International Society for Photogrammetry and Remote Sensing (ISPRS), 54, pp. 68-82.
Figure 7: Extraction result by the MMAC method using the simple subtraction CHMs.
ACCURACY ASSESSMENT OF CROWN DELINEATION METHODS FOR THE INDIVIDUAL TREES USING LIDAR DATA
K. T. Chang, C. Lin, Y. C. Lin, J. K. Liu
https://doi.org/10.5194/isprs-archives-xli-b8-585-2016 | 2016 | CC-BY
# Diwata-2 Targeting Assessment and Attitude Error Determination Using a Quaternion-Based Transformation System
[PERSON]\({}^{1}\)
[PERSON]\({}^{1}\)
[PERSON]\({}^{1}\)
[PERSON]\({}^{1}\)
\({}^{1}\)[PERSON], STAMINA4Space Program, University of the Philippines Diliman, Quezon City, Philippines - (krmonay, fmolivar)@stamina4space.upd.edu.ph, [EMAIL_ADDRESS], [EMAIL_ADDRESS]
###### Abstract
Target pointing assessment of a space-borne satellite is vital to its operations, especially for microsatellites with limited camera field of view and attitude control components, as in the case of Diwata-2. In this study, two scientific payloads of the satellite were used: the Enhanced Resolution Camera (ERC), with a field of view (FoV) of 89.8 x 67.5 km and a resolution of 54.6 m; and the High Precision Telescope (HPT), with a FoV of 3.1 x 2.3 km and a resolution of 4.7 m. Errors in pointing, especially on a payload with a small field of view like the HPT, could mean the satellite missing its target. The target pointing of Diwata-2 is assessed by first computing the differences in the coordinates of the planned target, the center of the actual image taken by the satellite, and the target projected from the satellite's attitude logs. To this end, a quaternion-based transformation system was created to simulate the satellite's local vertical, local horizontal system from a given Earth-centered inertial system. Second, the differences were tabulated and their averages computed to derive pointing corrections. Applying the algorithm to the satellite's images shows average errors in pitch and roll of 0.590\({}^{\circ}\) and 0.004\({}^{\circ}\), 6.436\({}^{\circ}\) and 6.503\({}^{\circ}\), and -5.846\({}^{\circ}\) and -6.499\({}^{\circ}\) between the set target and the actual image acquired, between the actual image and the attitude logs, and between the set target and the attitude logs, respectively.
The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Volume XLII-4/W19, 2019
## 1 Introduction
The PHL-Microsat Program launched its second microsatellite, Diwata-2, on October 29, 2018, to an altitude of 600 km. The microsatellite aims to monitor vegetation, assess damage, and provide a means of communication during and after disasters. Diwata-2 is equipped with four payloads similar to those of the preceding satellite, Diwata-1: the Space-borne Multispectral Imager with Liquid Crystal Tunable Filter (SMI-LCTF), with a ground sampling distance (GSD) of 126.9 m and a field of view (FoV) of 83.7 x 62.7 km; the High Precision Telescope (HPT), with a GSD of 4.7 m and a FoV of 3.1 x 2.3 km; the Wide Field Camera (WFC); and the Middle Field Camera (MFC), an engineering payload with a GSD of 287.2 m and a FoV of 489.3 x 141.9 km. Two additional payloads were installed to help fulfill its objectives and provide better service to users. To improve the spatial resolution of the SMI-LCTF payload, the microsatellite is equipped with the Enhanced Resolution Camera (ERC), a panchromatic camera with a GSD of 54.6 m and a FoV of 89.8 x 67.5 km. To provide communication during disasters, an amateur radio unit is also installed, with two modes: FM voice repeater mode and APRS digital voice repeater mode.
Since the microsatellites are mainly used for remote sensing purposes, the data they provide must be handled and calibrated well. Thus, there is a need for the satellite to capture its target. Currently, there are three general pointing modes for Diwata-2: nadir, off-nadir, and target mode. Diwata-2 enters its nadir mode when it takes images directly below the satellite as it moves along its orbit. Off-nadir pointing happens when the satellite tilts and captures images at an angle from the nadir as the satellite moves. Lastly, target pointing happens when the satellite fixes its target on a set point location on the ground as it moves along the orbit, in order to take an image containing the target. [PERSON], et al (2018) define the pointing modes for Diwata-1, which are the same for Diwata-2.
Currently, Diwata-2 records its roll, pitch, and yaw values, which help determine the image centers of its observations. [PERSON], et al (2018) explain the Attitude Determination and Control System (ADCS) of Diwata-1, which applies as well to Diwata-2. The ADCS determines and controls the satellite's attitude and pointing mode. A GPS module determines the satellite's position and velocity vectors. Attitude is determined through coarse and fine attitude determination. The former uses the relative illumination of the sun on the satellite (Sun Aspect Sensor) and the magnetic field intensity around the satellite (Geomagnetic Aspect Sensor) to determine the satellite's attitude. The latter uses a photo of the stars in its field of view and a star map as reference for the captured photo ([PERSON], et al, 2018). These attitude values are then projected onto the Earth, thus determining the image centers.
After determining the satellite's attitude, the effect of its errors on the pointing of the satellite should be known. Pitch errors generally introduce errors along the trajectory. Roll errors, on the other hand, introduce errors perpendicular to the trajectory. Yaw errors introduce image orientation error, which causes only small changes with respect to the center of the image recorded by the satellite. Thus, this study was conducted to determine the current attitude errors of the sensors of Diwata-2, specifically pitch and roll, in order to perform the necessary calibration and obtain better data quality. Since the microsatellites are not subject to maintenance, knowing the attitude errors from the target to the actual image is necessary so future targets can be adjusted depending on the acquired errors.
[PERSON], et al (2018) determined the attitude errors of the Diwata-1 images. These errors were attributed to the sun azimuth, sun elevation, and the Earth's magnetic field due to their effects on the Sun and Geomagnetic Aspect Sensors. The average pointing error of Diwata-1 is 20 kilometers. For the purposes of this paper, the error along the trajectory will be defined as the timing error, as this can theoretically be fixed by applying corrections to the timing of acquisition. The error along the line perpendicular to the trajectory will be defined as the pointing error.
This study uses three coordinate systems to determine the pitch and roll angles of the satellite to a point on the Earth. The Earth-Centered Inertial (ECI) system considers the Earth's center of mass, with its primary vectors fixed to the celestial sphere ([PERSON], [PERSON], 2001). This system is primarily used to determine the position of satellites with respect to the celestial sphere; as such, Diwata-2's position and velocity vectors use the ECI system. The paper also uses the Earth-Centered, Earth-Fixed (ECEF) system, which considers the Earth's center of mass but with its primary vectors fixed to the Earth ([PERSON], [PERSON], 2001). The two systems differ in that the ECI system accounts for the motion of the Earth relative to the celestial sphere, so the rotation and position of the planet on a given date and time must be considered for the position of an object in an ECI system; ECEF positions, on the other hand, are independent of time and date. As such, geodetic coordinates of points such as targets and image centers have to be converted from the ECEF system to ECI. Lastly, the Local Vertical, Local Horizontal (LVLH) system is the satellite's local coordinate system, with one primary vector parallel to the satellite's orbit path and another to its nadir. This system is used to determine the satellite's pitch and roll values in this study.
Conversion between different coordinate systems requires rotations of vectors. The most common parameterizations of vector rotation are the direction cosine matrix, Euler angles, and quaternions.
The direction cosine matrix (DCM) describes a rotation through direction cosine values from the initial coordinate system to the target coordinate system; these values are derived from the cosines of the angles between the vector and the three coordinate axes. Euler angles describe a 3D rotation through a sequence of 2D rotations about each of the three coordinate axes ([PERSON], 2015).
Quaternions describe a rotation by a rotation angle about a rotation axis that is not necessarily one of the three coordinate axes, as with Euler angles; the rotation is applied to a vector expressed in an inertial coordinate system ([PERSON], 2011; [PERSON], 2011). A quaternion is composed of four components: 1 real component and 3 imaginary components. It can be expressed as:
\[Q=q_{0}+\vec{q} \tag{1}\]
where Q = quaternion
q\({}_{0}\) = real (scalar) component
\(\vec{q}\) = imaginary (vector) component
The imaginary component \(\vec{q}\) can be further expressed through the 3 imaginary axes (i, j, k):
\[\vec{q}=q_{1}i+q_{2}j+q_{3}k \tag{2}\]
where q\({}_{1}\), q\({}_{2}\), q\({}_{3}\) are scalar values.
Rotation described through quaternions is expressed in a single equation:
\[T=Q\times t\times Q^{*} \tag{3}\]
where T = rotated vector
Q = quaternion
t = vector in the inertial coordinate system
Q* = conjugate quaternion
The conjugate quaternion is defined as:
\[Q^{*}=q_{0}-\vec{q} \tag{4}\]
where q\({}_{0}\) = real (scalar) component
\(\vec{q}\) = imaginary (vector) component
Quaternion-based methods have been studied for their application in rotation ([PERSON], 2011) and in spacecraft attitude determination and control ([PERSON], 2011). Several advantages of quaternions over other representations were also stated in these studies. Quaternions are more compact and require less computation than the DCM ([PERSON], 2011). With quaternions, controllers can globally stabilize a nonlinear spacecraft system, whereas Euler angles may fail to do so because they rely on a linearized model ([PERSON], 2011).
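Equations (1)–(4) translate directly into code. The following is a minimal sketch, assuming the scalar-first Hamilton product convention (the convention itself is our assumption, not stated in the paper):

```python
import numpy as np

def qmul(p, q):
    """Hamilton product of quaternions [q0, q1, q2, q3] (scalar first)."""
    p0, p1, p2, p3 = p
    q0, q1, q2, q3 = q
    return np.array([
        p0*q0 - p1*q1 - p2*q2 - p3*q3,
        p0*q1 + p1*q0 + p2*q3 - p3*q2,
        p0*q2 - p1*q3 + p2*q0 + p3*q1,
        p0*q3 + p1*q2 - p2*q1 + p3*q0,
    ])

def qconj(q):
    """Conjugate quaternion Q* = q0 - q_vec (equation 4)."""
    return np.array([q[0], -q[1], -q[2], -q[3]])

def qrotate(q, t):
    """Rotate vector t by unit quaternion q: T = Q x t x Q* (equation 3)."""
    tq = np.array([0.0, *t])  # embed the vector as a pure quaternion
    return qmul(qmul(q, tq), qconj(q))[1:]
```

For example, a rotation of 90° about the z-axis is encoded as q = [cos 45°, 0, 0, sin 45°], and applying `qrotate` to (1, 0, 0) yields (0, 1, 0).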
With Diwata-1 nearing the end of its lifespan, it is imperative to assess the targeting errors of Diwata-2 so that more images capture the intended targets. Thus, the main objective of this study is to create a working transformation system that determines attitude values from the satellite's position and velocity vectors, the coordinates of the targets, and the date and time of observation, together with the calibration values for target setting.
## 2 Methodology and Results
### Methodology
Several missions were conducted to assess the current attitude of Diwata-2 images. Images were captured in different places along the path of the microsatellite with different pointing methods: nadir, off-nadir, and target pointing. These images were then used to determine the pointing and timing errors by measuring the distances between the image center, attitude log, and target location. The timing error is the distance between the image center and the target along the path of the microsatellite. The pointing error is the distance between the image center and the target along the line perpendicular to the path of the microsatellite.
To determine the attitude errors, it is necessary to convert the different coordinate systems to the satellite's LVLH system. The satellite's position and velocity vectors come from its installed GPS module and are in the ECI coordinate system, with reference vectors (1, 0, 0), (0, 1, 0) and (0, 0, 1). These vectors were then used to create the satellite's LVLH coordinate system by making the position vector the Z-axis. Taking the cross product of the Z-axis and the satellite's velocity vector creates a vector (the X-axis, or cross-track axis) perpendicular to the velocity vector. Finally, the cross product of the Z-axis and the X-axis gives the Y-axis, or along-track axis, of the satellite.
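The LVLH construction described above can be sketched as follows. The function name is ours, and the resulting axis signs may differ from the actual flight-software convention, since the text does not specify the handedness:

```python
import numpy as np

def lvlh_axes(r_eci, v_eci):
    """Build LVLH unit axes from ECI position r and velocity v.

    Follows the construction in the text: Z along the position vector,
    X = Z x v (cross-track), Y = Z x X (along-track, up to sign).
    """
    z = r_eci / np.linalg.norm(r_eci)
    x = np.cross(z, v_eci)
    x /= np.linalg.norm(x)
    y = np.cross(z, x)
    return x, y, z
```

The three returned vectors are mutually orthogonal unit vectors, with Y lying along the (signed) velocity direction for a near-circular orbit.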
The images used were data products of Diwata-2's Enhanced Resolution Cameras (ERC) and the red sensor of its HPT (HPT-R). The image centers were converted from latitude and longitude values (ECEF system) into the ECI system, using its capture date.
In this study, three targets are to be determined. The programmed target refers to the target coordinates uploaded to the satellite. The projected target refers to the target coordinates projected by the satellite's pitch, roll and yaw values at the time of image capture. Lastly, the actual target is the center of the image captured by the satellite. The programmed target is important as this is the location the user wants to capture. As such, the errors in targeting will be referred to the programmed target. Defining the errors in the projected target is also helpful as this determines the status and capability of the satellite to record accurate values of its attitude.
The differences in coordinates between the targets (actual, projected, and programmed) and the satellite position vectors were then transformed into the satellite's LVLH system by applying a quaternion. Since the reference axes of the ECI and LVLH systems are already defined, it is necessary to determine the quaternion converting the ECI coordinate system to LVLH. [PERSON], [PERSON] (1992) define a method of extracting transformation parameters between two sets of points by minimizing a mean-square objective function during registration. These transformation parameters contain an axis about which the points are to be rotated and the angle of rotation, and can be expressed as a quaternion. After extraction, the vectors from the satellite position to the targets are converted by applying equation 3.
Finally, given the difference in position in the new coordinate system, the pitch and roll values were determined by taking the angle forward or backward (pitch), or sideward (roll), from the satellite position to the target.
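One possible reading of this last step is sketched below. The sign conventions (nadir along −Z, positive pitch forward along-track) are our own assumptions, not taken from the paper:

```python
import numpy as np

def pitch_roll(d_lvlh):
    """Pitch/roll (degrees) from satellite to target.

    d_lvlh is the satellite-to-target difference vector in LVLH axes
    (x = cross-track, y = along-track, z = radial), so the nadir
    direction is -z.  Sign conventions here are assumptions.
    """
    dx, dy, dz = d_lvlh
    pitch = np.degrees(np.arctan2(dy, -dz))  # forward/backward tilt
    roll = np.degrees(np.arctan2(dx, -dz))   # sideward tilt
    return pitch, roll
```

A target directly at nadir gives pitch = roll = 0, while a target as far ahead along-track as the satellite is high gives a 45° pitch.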
To check the values produced by the code, the position and path of Diwata-2 are determined through the Satellite Tool Kit (STK) software, and the measured timing and pointing errors are compared to the projection of the attitude errors obtained by multiplying their tangents by the altitude of Diwata-2.
### Results and Discussion
Tables 1 and 2 list the pitch and roll values (in decimal degrees) for the targets and the image centers, while tables 3 and 4 show the pitch and roll error values (in decimal degrees) for the ERC and HPT-R, respectively. In tables 3 and 4, the first column shows the errors captured and the pointing system used. The second column shows the error between the programmed target and the actual target. The third column shows the error between the projected target and the actual image center. The last column shows the difference between the programmed target and the target projected by the recorded attitude values of the satellite. Positive pitch values denote forward tilt, while positive roll values denote a counterclockwise tilt. For missions with targeting mode, the pitch and roll values are the average of the entire mission.
Using the program that we developed, we were able to obtain the attitude of the satellite sensor on the different missions. It is noticeable that the error between the programmed target and the resulting image center is much smaller than the error between the projected target and the resulting image center. It is also worth noting that the error between the projected target and the actual image center is quite large. The large difference is due to measurement errors from the satellite's attitude sensors. With timing and pointing errors of 8.09 s and 62.05 km, respectively, for the ERC images and 9.49 s and 74.75 km for the HPT-R, there seem to be disturbances in the ADCS. Future studies may determine the cause of the large discrepancy.
Figures 1 to 5 show the position of the programmed and projected targets and the actual image centers of the ERC per area, together with the Diwata-2 position at the time of image capture. ERC images are used as the difference between the HPT-R and ERC image centers are relatively small. Figures 6 to 10 show the timing and pointing errors (from image center to programmed target) for the ERC images for each mission. The tabulated timing and pointing errors are in tables 6 and 7 for the ERC and the HPT-R, respectively. The direction of the Diwata-2 in the captured images is downward along the path. Thus, the positive timing error values correspond to the error pointing to the direction of the Diwata-2 trajectory (thus downward). On the other hand, the positive pointing error values correspond to the error pointing to the right side of the trajectory (which on the image is on the left side).
Figure 4: Manila, Philippines. Relative positions.
Figure 5: Damascus, Syria. Relative positions.
Figure 6: Tripoli, Lebanon. Distance from programmed target to actual image center.
Figure 7: Dubai, UAE. Distance from programmed target to actual image center.
Figure 8: Muscat, Oman. Distance from programmed target to actual image center.
Figure 9: Manila, Philippines. Distance from programmed target to actual image center.
For the ERC images, the average pitch and roll errors from the programmed target to the actual image center introduce a 1.05 s timing error and a 2.0 km pointing error, respectively. For the HPT-R images, the average timing and pointing errors are 0.27 s and 1.92 km, respectively. These are almost acceptable errors, as they are close to the swath of the HPT images (3 km x 2 km). However, this applies only to the average errors: most of the data used have errors exceeding the HPT swath, but these cancel out due to the direction of the error. Therefore, the attitude errors would cause the satellite to miss its intended target, and the non-uniformity of error directions makes it hard to determine a single correction value for the said errors.
It is worth noting that the pitch and roll values computed by the program for the targets are not the same as the pitch and roll values apparently shown in the images, where Diwata-2's position is determined by STK through TLE propagation. As an example, the computed pitch value for the programmed target in Tripoli, Lebanon is positive, suggesting that the target is found when the satellite tilts forward. This is not the case in the image, where the target is found when tilting backwards. This suggests a discrepancy between the STK positions and the position and velocity vectors recorded by the satellite's GPS module. Further investigation shows that the pitch and roll error values (i.e. the distances between targets) are still correct; thus the discrepancy lies in Diwata-2's position due to relative positioning. Future studies could expand on this.
Despite the discrepancy, the error values extracted through measurements are almost the same as the error values extracted through the transformation system. Therefore, a working transformation system for attitude determination has been created and can be used to determine attitude values for targets. Furthermore, pointing errors computed from the method are angle based and are not affected by the varying altitude of the satellite. Figure 11 shows the effects of varying altitude on the pointing angle required to compensate for errors from 2 km to 20 km. As an example, an altitude variation in the range of 10 km changes the pointing angle required to compensate for a pointing error of 20 km by 0.07\({}^{\circ}\), which is around 700 m if translated to ground distance.
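The quoted sensitivity can be reproduced with a quick calculation; the 590–610 km range around the nominal 600 km altitude is our assumption about how the ±10 km variation is applied:

```python
import math

alt, err = 600.0, 20.0  # nominal altitude and ground error, in km

# Pointing angle needed to reach a 20 km ground offset at each altitude
ang_low = math.degrees(math.atan(err / (alt - 10.0)))   # at 590 km
ang_high = math.degrees(math.atan(err / (alt + 10.0)))  # at 610 km

delta_deg = ang_low - ang_high  # change in required pointing angle, ~0.06-0.07 deg

# The same angular change translated back to ground distance at 600 km
ground_km = math.tan(math.radians(delta_deg)) * alt     # ~0.7 km, i.e. ~700 m
```

This supports reading the ground-distance figure as roughly 700 m, three orders of magnitude below the 20 km error itself.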
## 3 Conclusions
Since a microsatellite does not undergo maintenance after launch, it is necessary to know its current health, as future use depends on the current status of the microsatellite. In the case of Diwata-2, it is necessary to know the targeting errors so that the necessary adjustments can be made. Knowing that there is a disturbance in the ADCS also leads to a better understanding of the satellite's current health and capabilities.
This study has created a system of transformation through the use of quaternions and thus determined the pitch and roll errors between the programmed target, predicted target and the actual center of the payloads. The use of quaternions in the transformation system simplified the equations used. As compared to using Euler angles and direction cosine matrices, quaternions can easily define coordinate transformation between multiple coordinate systems. Corrections derived from the results of the quaternion-based methods are not altitude dependent compared to corrections derived from computing the ground distance differences in latitude and longitude used on previous target assessments. The method fits Diwata-2 as its altitude varies by around \(\pm\) 10 km.
Although the computed values are small, these can still be adjusted when uploading targeting commands to provide a more exact image. However, careful calibration is recommended as the attitude errors do not seem to follow a definite pattern to be corrected. For attitude determination to improve, the position and velocity vectors of the satellite should be exactly known. Also, since these values are now known, future use of Diwata-2's targeting capabilities can be put to better use for other projects such as lunar calibration and change detection, which leads to better output of the satellite.
The transformation system created has discovered problems that can be studied in the future. Issues such as the Diwata-2's position and velocity vector discrepancies, and the issues in the satellite's recorded attitude values (thus, making the projected targets far from the actual image centers) need to be studied further in the future for possible causes and solutions. Also, since this study only considers the actual image centers of the camera products, other issues that change the position of the image center should be considered for future studies, such as relief displacement.
Figure 11: Pointing angle variation on different errors
\begin{table}
\begin{tabular}{|c|c|c|} \hline Muscat, Oman & -3.446 & -2.447 \\ \hline Damascus, Syria & -22.597 & -19.485 \\ \hline Average & 2.081 & -2.583 \\ \hline \end{tabular}
\end{table}
Table 7: Timing/Pointing Errors for HPT-R
## References
* [PERSON] (2015) [PERSON] ([PERSON]). \"A Comparison of Rotation Parametrisations for Bundle Adjustment.\" 2015.
* [PERSON] et al. (1992) [PERSON], [PERSON], and [PERSON]. \"Method for Registration of 3-D Shapes.\" _Sensor Fusion IV: Control Paradigms and Data Structures_, 1992, doi:10.1117/12.57955.
* [PERSON] et al. (2012) [PERSON], and [PERSON]. "Introduction into Quaternions for Spacecraft Attitude Representation." 31 May 2012, www.tu-berlin.de/fileadmin/fg169/miscellaneous/Quaternions.pdf.
* [PERSON] et al. (2018) [PERSON], et al. "Diwata-1 Target Pointing Error Assessment Using Orbit and Space Environment Prediction Model." _2018 IEEE International Conference on Aerospace Electronics and Remote Sensing Technology (ICARES)_, 2018, doi:10.1109/icares.2018.8547062.
* [PERSON] (2011) [PERSON] \"Quaternion and Its Application in Rotation Using Sets of Regions.\" 2011.
* [PERSON] (2001) [PERSON], and [PERSON]. \"Reference Systems in Satellite Geodesy.\" 2001.
* [PERSON] (2011) [PERSON]. \"Spacecraft attitude determination and control.\" 2011.
DIWATA-2 TARGETING ASSESSMENT AND ATTITUDE ERROR DETERMINATION USING A QUATERNION-BASED TRANSFORMATION SYSTEM
K. Monay, F. R. Olivar, B. J. Magallon, M. E. A. Tupas
https://doi.org/10.5194/isprs-archives-xlii-4-w19-305-2019 | 2019 | CC-BY
|
# Results of using Spectroradiometers for in Soil Moisture of Mongolian Steppe Ecosystem
[PERSON]\({}^{1,3}\)
[PERSON]\({}^{2,3}\)
[PERSON]\({}^{3}\)
[PERSON]\({}^{1}\)
[PERSON]\({}^{2}\)
\({}^{1}\)Mongolian University of Life Sciences, Darkhan-Uul-45051, Mongolia - [EMAIL_ADDRESS]
\({}^{2}\)Institute of Geography and Geoecology, Mongolian Academy of Sciences, Ulaanbaatar 15170, Mongolia - [EMAIL_ADDRESS]
\({}^{3}\)Mongolian Geo-spatial Association, Ulaanbaatar-15141, Mongolia, - [EMAIL_ADDRESS]
###### Abstract
Determining soil moisture remains an important issue in the environmental, agricultural, and crop sciences. The use of remote sensing data in the study of soil surface moisture has many benefits, such as saving time, resources, and money, as well as enabling soil moisture monitoring. Point data from field measurements can be used to obtain soil moisture results. The importance of this study lies in combining spectral measurements and laboratory-determined moisture content of a variety of soils, under different geographical conditions spread over three natural zones, with satellite data. The purpose of this research is to determine and analyse surface soil moisture using Sentinel-2 satellite data and spectroradiometric measurements. Four areas representative of the forest-steppe, steppe, and Gobi eco-regions of Mongolia were selected as the research areas; field research was conducted, data were collected, and Sentinel-2 satellite data were processed and analysed. The spectral reflectance measured during the field research showed low reflectance for the moist forest-steppe soils and high reflectance for the soils representing the Gobi region, which creates the possibility of studying soil moisture by spectral measurement and analysis. The relationship between the spectral reflectance of the red (665 nm) and near-infrared (842 nm) regions shows that the soil of the forest-steppe region is relatively moist compared to the soil of the Gobi region, the Gobi soil being the driest and the steppe soil drier than the forest-steppe soil; these findings are confirmed by laboratory analysis. A model developed from a linear relationship between field-measured spectrometer reflectance and laboratory-determined moisture provides a formula that can be used for regular monitoring of moisture content using the Normalized Difference Water Index (NDWI) estimated from satellite data:
Surface soil moisture (%) = -44.133*NDWI - 11.553. The NDWI calculated from the spectroradiometer, compared to the moisture determined in the laboratory, showed a linear correlation of R\({}^{2}\) = 0.74 (74%), and the moisture calculated from Sentinel-2 showed high agreement. Spatially, the results calculated from satellite data show that the forest and steppe areas have relatively high moisture compared to the Gobi region, and the total area representing the Gobi region has a lower moisture content. Based on these results, it is considered possible to continuously determine surface soil moisture from satellite data using NDWI over a certain space-time range and to use it for soil control and monitoring.
S 2023, 2-7 September 2023, Cairo, Egypt 2023
their spatial variability, and soil quality and stability. Conventional methods are very expensive, labour-intensive, and time-consuming. Although they are suitable for determining information by sampling in a small area, they do not provide any information on the spatial variation of soil properties. Correct estimation of soil properties considering spatial variation has become an important issue for soil mapping, soil fertility management, sustainable land management, and precision agriculture. The use of multispectral satellite data is ideal for predicting various soil fertility characteristics, such as surface soil moisture. Spectral reflectance properties are greatly influenced by biogeochemical (mineral and organic) components and surface moisture content ([PERSON] et al. 1999). Spectral analysis methods have been used in soil science research since the 1950s and 1960s ([PERSON], 1952; [PERSON] and [PERSON], 1965). The benefits of combining spectral analysis with satellite data are widely exploited nowadays. In addition to the use of electromagnetic waves in the visible and near-infrared (VIS-NIR, 400-2500 nm) regions ([PERSON] et al. 2001; [PERSON] et al. 2002), spectral analysis methods are increasingly being used in the mid-infrared (2500-25000 nm) ([PERSON] et al. 2006). Visible and near-infrared spectra are most commonly used to determine soil moisture. As soil moisture increases, the reflectance measured in the visible and near-infrared ranges tends to decrease ([PERSON] et al. 2007). Results of visible and near-infrared measurements for soil moisture modelling have been detailed in several studies, such as the vegetation index approach ([PERSON], 1996) and the soil moisture reflectance method, with [PERSON] et al. (2007) publishing results on identifying dry and wet soils using the red and near-infrared ranges, respectively.
In Mongolia, there are studies that determine soil moisture using satellite data ([PERSON] et al. 2021; [PERSON] et al. 2021; [PERSON] et al. 2009), but few published works combine it with spectroradiometric measurements. In this study, a model was created to determine surface soil moisture using optical satellite data and spectral measurements collected in the field.
### Research goals and objectives
**Purpose**: To determine and analyse surface soil moisture using Sentinel-2 satellite data and spectroradiometric measurements.
**Objective**:
* Considering different landscape and ecological conditions, compare spectroradiometric measurements with soil moisture determined under laboratory conditions at the selected sites.
* Develop a model for determining surface soil moisture by remote sensing.
* Download, archive, and process Sentinel-2 satellite data corresponding to the selected points' positions, and produce a map showing the spatial distribution of surface soil moisture.
## 2 Material and Methods
### Study area
Four areas representative of the forest-steppe, steppe, and Gobi regions of Mongolia were selected for the research area. Field research was conducted, data were collected, and Sentinel-2 satellite data were processed and analysed (Figure 1). The research was carried out in the following geographical areas:
* Longitude 109.84\({}^{\circ}\)E, latitude 46.84\({}^{\circ}\)N - 47.81\({}^{\circ}\)N, 250 km southeast of Ulaanbaatar city and 125 km west of Khentii province.
* Longitude 107.69\({}^{\circ}\)E, latitude 45.94\({}^{\circ}\)N - 46.94\({}^{\circ}\)N, 210 km south of Ulaanbaatar city, 70 km northeast of the centre of Dundgovi province.
* Longitude 106.41\({}^{\circ}\)E, latitude 45.03\({}^{\circ}\)N - 46.03\({}^{\circ}\)N, 320 km from Ulaanbaatar, 55 km from Dundgovi province, in the territory of Khanhongor sum, Umnugovi province.
* Longitude 103.76\({}^{\circ}\)E - 105.12\({}^{\circ}\)E, latitude 43.26\({}^{\circ}\)N - 44.23\({}^{\circ}\)N, located 532 km from Ulaanbaatar city and 25 km from the centre of Umnugovi province.
### Methodology
The research was conducted through the stages of preparatory work, field research and data collection, and data processing and analysis. Within these stages, data and information were processed and results were obtained, as follows:
* Preparation stage
* _Research and study the topic, using academic articles and books_
* _Download and archive Sentinel-2 satellite data_
* _Set up the research software_
* _Develop the field research methodology and select representative sites_
* Field research and data collection phase
* _Conduct field research in July and August 2021, using the data collected during the production internship_
* _Compile the soil moisture record_
* _Collect general soil information_
* _Conduct spectral measurements of the soil surface and of a 10 cm section_
* Data processing and analysis stage
* _Compare spectral measurements and soil moisture_
* _Develop a model for determining surface soil moisture by remote sensing_
Figure 1: Field study area.
* _Download Sentinel-2 satellite data corresponding to the locations of the selected points, then archive and process them, e.g. by creating a spatially expressed soil surface moisture image._
**Satellite data:** _Sentinel-2 Multispectral Instrument (MSI)_ satellite data with high spatial and temporal resolution were used in the research. The instrument maps at 10-60 m spatial resolution in multiple spectral channels covering the visible to shortwave-infrared range. In terms of time, the twin-satellite platform collects data at a frequency of 5 days (ESA).
Sentinel-2 consists of satellites 2A and 2B, which collect high spatial resolution, multispectral data; it is one of the Sentinel series of satellites launched by the European Space Agency (ESA).
Archived Sentinel-2 Level-2A (L2A) data from August 5-15, 2021, a low-cloud, fire-free period close to the time of the field study, were downloaded for free from ESA and archived for use (the time and location of the Sentinel-2 satellite data for the study area are shown in Figure 2).
**Software:** SNAP software was used for digital satellite data and image processing, and Excel for statistical processing. The satellite data processing software (SNAP) is open source and available for free download.
**Field data collection:** For the field study, Delgerkhaan and Jargalthaan sums of Khentii province represent the forest-steppe zone; Delgertsogt, Deren, and Luus sums of Dundgovi province represent the steppe zone; and Khanhongor sum of Umnugovi province represents the Gobi zone (locations are shown in Figures 1 and 2).
The field survey was conducted between August 13-22, 2021, and a total of 15 points were sampled at a depth of 0-10 cm. The field survey samples' locations, spectral measurements, and records are shown in Table 1.
**Spectroradiometric measurement:** Remotely sensed data were collected in the field using a hand-held spectroradiometer, and the 0-10 cm section was selected to measure the soil spectrum in the exposed area. The instrument measures the 350-2500 nanometer wave spectrum, covering visible light, near-infrared, and short- and mid-wave infrared, with automatic levelling and accurate measurement of electromagnetic waves at air temperatures between -10 \({}^{\circ}\)C and +40 \({}^{\circ}\)C. The spectroradiometer measures at a fixed viewing angle (Figure 3). Measurements were made over the selected area, and the reflectance corresponding to 350-2500 nanometer waves was recorded.
Calibration using a white reference plate was carried out before each spectrum measurement, at times when the sun was as high as possible (between 11:00 and 14:00), without clouds or rain and with low wind.
\begin{table}
\begin{tabular}{|p{42.7 pt}|p{42.7 pt}|p{42.7 pt}|p{42.7 pt}|} \hline No & **Natural belt** & **Point number** & **Point coordinates** \\ \hline
1 & Forest steppe zone & 5181 & 109\({}^{\circ}\)2344.331\({}^{\circ}\)E 47\({}^{\circ}\)3515.956\({}^{\circ}\)N \\ \hline
2 & & 5182 & 109\({}^{\circ}\)846.750\({}^{\circ}\)E 47\({}^{\circ}\)2119.684\({}^{\circ}\)N \\ \hline
3 & & 5183 & 109\({}^{\circ}\)842.240\({}^{\circ}\)E 47\({}^{\circ}\)2117.395\({}^{\circ}\)N \\ \hline
4 & & 5184 & 109\({}^{\circ}\)837.409\({}^{\circ}\)E 47\({}^{\circ}\)2115.608\({}^{\circ}\)N \\ \hline
5 & & 5185 & 109\({}^{\circ}\)829.944\({}^{\circ}\)E 47\({}^{\circ}\)2112.445\({}^{\circ}\)N \\ \hline
6 & Steppe zone & 5261 & 106\({}^{\circ}\)273.887\({}^{\circ}\)E 46\({}^{\circ}\)1427.247\({}^{\circ}\)N \\ \hline
7 & & 5262 & 106\({}^{\circ}\)275.060\({}^{\circ}\)E 46\({}^{\circ}\)1432.914\({}^{\circ}\)N \\ \hline
8 & & 5263 & 106\({}^{\circ}\)277.75\({}^{\circ}\)E 46\({}^{\circ}\)1439.278\({}^{\circ}\)N \\ \hline
9 & & 5264 & 106\({}^{\circ}\)2778.430\({}^{\circ}\)E 46\({}^{\circ}\)144.234\({}^{\circ}\)N \\ \hline
10 & & 5265 & 106\({}^{\circ}\)27710.863\({}^{\circ}\)E 46\({}^{\circ}\)144.997\({}^{\circ}\)N \\ \hline
11 & Govic zone & S240 & 104\({}^{\circ}\)4526.737\({}^{\circ}\)E 43\({}^{\circ}\)360.583\({}^{\circ}\)N \\ \hline
12 & & 5241 & 104\({}^{\circ}\)4522.467\({}^{\circ}\)E 43\({}^{\circ}\)3558.847\({}^{\circ}\)N \\ \hline
13 & & 5242 & 104\({}^{\circ}\)4346.404\({}^{\
**Sampling of the soil:** At a representative point of the area, dig to 0-10 cm, use a 100 cm\({}^{3}\) cylinder to take the sample, put it in a moisture-proof sample bag, record the sample number, store it in a cool environment, and obtain the results from analysis in the laboratory (Figure 4).
The results of the samples taken at each soil research point, their spectral measurements, and the field records are presented in the combined data section. The field measurement results include spectroradiometer measurements, field recording results, spectrum measurement diagrams, and photographs showing the field and environmental conditions where the measurements were taken.
**Satellite data (Sentinel-2) processing and analysis:** The digital raw data measured in the red, green, and blue regions of visible light were assigned to the red, green, and blue channels of the image processing software by the colour combination method, extracting the actual surface colour of the research area and its general condition.
Using the colour composite (RGB creation) command of the SNAP image processing software, channels 12, 8, and 3 of Sentinel-2 were assigned red, green, and blue colours, producing a colour-combination image indicating the condition of the soil. Areas with bare soil appear light pink in the image.
**Index:** The Normalized Difference Water Index (NDWI) is the primary and basic water index and has been used extensively in a wide variety of applications ([PERSON], 1996). It is often used to detect water bodies and lakes from satellite data and to calculate water areas. In this study, we used field-measured spectroradiometric measurements and the spectral reflectance of Sentinel-2 satellite data to determine surface soil moisture.
Normalized Difference Water Index:
\[NDWI\ =\ \frac{(Green-NIR)}{(Green+NIR)} \tag{1}\]
\[Spectroradiometer\ NDWI\ =\ \frac{(560\ nm-842\ nm)}{(560\ nm+842\ nm)} \tag{2}\]
\[Sentinel-2\ NDWI\ =\ \frac{(Band3-Band8)}{(Band3+Band8)} \tag{3}\]
The Normalized Moisture Index (NDMI) is the primary and base index for moisture indices and has been used extensively in a wide variety of applications ([PERSON], 1996). It is an index that is often used to determine the moisture content of plant leaves. In this study, we used a combination of field spectroradiometric measurements and spectral reflectance data from the Sentinel-2 satellite.
Normalized Difference Moisture Index:
\[NDMI\ =\ \frac{(NIR-SWIR)}{(NIR+SWIR)} \tag{4}\]
\[Spectroradiometer\ NDMI\ =\ \frac{(842\ nm-1610\ nm)}{(842\ nm+1610\ nm)} \tag{5}\]
\[Sentinel-2\ NDMI\ =\ \frac{(Band8-Band\ 11)}{(Band\ 8+Band\ 11)} \tag{6}\]
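The index computations in Equations (1)-(6) are simple band arithmetic. As a minimal sketch (the reflectance values below are hypothetical examples, not taken from the field data), they can be written as:

```python
import numpy as np

def ndwi(green, nir):
    # Eq. (1): Normalized Difference Water Index from green and NIR reflectance
    green, nir = np.asarray(green, dtype=float), np.asarray(nir, dtype=float)
    return (green - nir) / (green + nir)

def ndmi(nir, swir):
    # Eq. (4): Normalized Difference Moisture Index from NIR and SWIR reflectance
    nir, swir = np.asarray(nir, dtype=float), np.asarray(swir, dtype=float)
    return (nir - swir) / (nir + swir)

# Spectroradiometer variant (Eqs. 2 and 5): reflectance sampled at 560 nm,
# 842 nm, and 1610 nm; the values here are hypothetical
r560, r842, r1610 = 0.110, 0.172, 0.150
print(round(float(ndwi(r560, r842)), 4))   # (0.110-0.172)/(0.110+0.172) = -0.2199
print(round(float(ndmi(r842, r1610)), 4))  # (0.172-0.150)/(0.172+0.150) = 0.0683
```

For the Sentinel-2 variants (Eqs. 3 and 6), `green`, `nir`, and `swir` would be the Band 3, Band 8, and Band 11 reflectance arrays, so the same functions apply per pixel.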
In the image processing software SNAP, the normalized difference water index (NDWI) shown in Equation (3) was calculated using the Sentinel-2 green channel 3 and near-infrared channel 8, and the normalized difference moisture index (NDMI) shown in Equation (6) was calculated using the near-infrared channel 8 and the shortwave-infrared channel 11. The resulting maps show the state of the soil surface. The L2A data from Sentinel-2 have already undergone atmospheric correction and primary processing, so they can be processed and used directly.
Soil moisture can be deciphered using spectral reflectance in the red and near-infrared regions; a study in China showed that low reflectance values in the red region indicate moist soil and high reflectance values indicate dry soil ([PERSON] et al. 2007). A comparison of the spectral reflectance at 665 nm from our field study with natural moisture determined in the laboratory also confirmed this result. In that study ([PERSON] et al. 2007), the spectral reflectance of dry soil was relatively higher than that of moist soil; similar findings have been reported in hyperspectral analyses of soil properties for soil management ([PERSON], 2018).
**Model development for estimating soil moisture:** When developing the moisture calculation model using soil spectroradiometer measurement data and Sentinel-2 satellite data (Figure 5), the natural soil moisture determined by laboratory analysis was taken as the basic information, and for each measurement point the NDWI and NDMI were calculated from the spectral measurement (taken at the time of field measurement) and from the satellite data, respectively. Statistical processing of the laboratory-determined natural soil moisture against the NDWI and NDMI values was performed in Excel, using linear regression analysis to derive a formula for calculating surface soil moisture, which was then applied to the satellite data.
\[Soil\ Surface\ Moisture\ (\%)=\ -44.133\ *\ NDWI\ -\ 11.553 \tag{7}\]
Soil moisture was calculated using the Band Maths command of the SNAP software with the methodology shown in Equations (3) and (6) (this model has a 75% confidence level, making it suitable for use in estimating surface moisture).
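The SNAP Band Maths step is equivalent to applying Equation (7) pixel-wise to the NDWI raster; a minimal sketch (with a hypothetical 2x2 NDWI array) is:

```python
import numpy as np

def surface_soil_moisture(ndwi):
    # Eq. (7): surface soil moisture (%) from NDWI, using the regression
    # coefficients derived from the laboratory moisture data
    return -44.133 * np.asarray(ndwi, dtype=float) - 11.553

ndwi_raster = np.array([[-0.54, -0.35],
                        [-0.30, -0.22]])  # hypothetical NDWI pixels
print(np.round(surface_soil_moisture(ndwi_raster), 2))
# [[12.28  3.89]
#  [ 1.69 -1.84]]
```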
**Interpretation of soil laboratory results:** The results of the soil laboratory analysis are described in the results section according to the following categories.
**Classification of soil humus content** (nomenclature changes depending on the natural zone): more than 5%: black, meadow-swamp, and alluvial soils; 3-5%: dark brown; 1-3%: brown; 0.5-1.0%: light brown; 0.1-0.5%: desert brown-gray soil ([PERSON], 2003).
**Soil Reaction Environment** (NRCS Soil Survey Handbook): Strong acidity 5.1-5.5, Moderate acidity 5.6-6.0, Weak acidity 6.1-6.5, Neutral 6.6-7.3, Weakly alkaline 7.4-7.8, Moderately alkaline 7.9-8.4, Strongly alkaline 8.5-9.0, Very strongly alkaline \(>9.0\).
Figure 4: Photo of soil sampling

**Soil salinity:** Soil salinity (EC, dS/m) is described according to the category it belongs to: 0-2 dS/m no salt, 2-4 dS/m very weak salinity, 4-8 dS/m low salinity, 8-16 dS/m moderate salinity, \(\geq\) 16 dS/m high salinity (NRCS Soil Survey Handbook). High salinity is likely to occur only in the Gobi, dry deserts, and saline valleys.
**Phosphorus and potassium content of the soil:** comparing the classification shown in Table 2 with the results of the laboratory analysis, high, medium, and low levels were determined, and the results are presented according to the Machigin classification.
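As an illustration only (not part of the original methodology), the pH and salinity classes listed above can be encoded directly; the garbled "4 < 4 dS/m" threshold in the source is assumed here to mean the 4-8 dS/m band:

```python
def soil_reaction_class(ph):
    # Soil reaction classes from the NRCS Soil Survey Handbook ranges above
    bands = [(5.1, 5.5, "Strong acidity"), (5.6, 6.0, "Moderate acidity"),
             (6.1, 6.5, "Weak acidity"), (6.6, 7.3, "Neutral"),
             (7.4, 7.8, "Weakly alkaline"), (7.9, 8.4, "Moderately alkaline"),
             (8.5, 9.0, "Strongly alkaline")]
    for low, high, label in bands:
        if low <= ph <= high:
            return label
    # values above 9.0 are very strongly alkaline; gaps between bands fall through
    return "Very strongly alkaline" if ph > 9.0 else "Out of listed range"

def soil_salinity_class(ec_ds_m):
    # EC (dS/m) classes; the 4-8 dS/m band for "Low salinity" is an assumption
    if ec_ds_m < 2:
        return "No salt"
    if ec_ds_m < 4:
        return "Very weak salinity"
    if ec_ds_m < 8:
        return "Low salinity"
    if ec_ds_m < 16:
        return "Moderate salinity"
    return "High salinity"

print(soil_reaction_class(7.75), "/", soil_salinity_class(1.2))
# Weakly alkaline / No salt
```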
## 3 Result and Discussion
The results of soil field measurement samples analyzed in the laboratory are summarized in the following tables according to the physical and chemical properties of the soil (Table 3).
Figure 7 shows the spectral measurement results of points representing the forest-steppe zone. As seen from the figure, points S182, S184, and S185 showed similar reflectance values, while S181 showed the highest reflectance, i.e. relatively low natural moisture (Table 4). For spectral measurements representing the steppe region, the reflectance of soil points S261, S262, S263, and S264 is close, while the spectral reflectance of soil point S265 is relatively high. Most laboratory parameters of the points in the steppe zone showed
\begin{table}
\end{table}
Table 4: Laboratory analysis results of the soil sampling points.

similar values, and the content of mobile phosphorus in S265 was higher than at other points in the steppe zone, at 2.3 mg/100 g (Table 4). However, different spectral reflectance was observed for the soil points representing the Gobi region. The spectral reflectance of soil points S240 and S241 was higher than at the other points, and their salinity and carbon content showed slightly different values (Table 4). Soil point S265 showed relatively low reflectance compared to other sites and the lowest calcium content. Judging from these comparison diagrams, the spectral reflectance represents the characteristics of the soil at each point. Short-wave and near-infrared rays (800-2500 nm) carry soil fertility information, while visible light and near-infrared rays (490-800 nm) indicate moisture content.
### Comparison analysis of spectroradiometric spectral measurements with laboratory analysis
Table 5 shows the results of the laboratory analysis (moisture, humus, pH) and the spectral measurement values in the red (665 nm) and near-infrared (842 nm) ranges, together with the NDWI and NDMI calculated from the spectroradiometer; the correlation diagrams are shown in Figures 8 and 9, respectively.
The NDWI and NDMI values calculated from the spectroradiometer are low in the forest-steppe region and relatively high in the steppe and Gobi regions (Figure 8). In
Figure 8: Correlation between NDWI and NDMI calculated from spectroradiometric measurements of moisture and humus as a result of laboratory analysis.
Figure 7: Comparison of spectral measurement results of points representing the forest-steppe, steppe, and Gobi region.
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|} \hline \multirow{2}{*}{**Point number**} & \multicolumn{4}{c|}{**Spectroradiometer**} & \multicolumn{3}{c|}{**Laboratory analysis**} \\ \cline{2-8} & **NDWI** & **NDMI** & **Red 665 nm** & **NIR 842 nm** & **Moisture \%** & **pH** & **Humus \%** \\ \hline
**S181** & -0.377 & -0.411 & 0.110 & 0.172 & 8.41 & 7.690 & 9.380 \\ \hline
**S182** & -0.541 & -0.240 & 0.077 & 0.141 & 16.12 & 7.810 & 6.980 \\ \hline
**S183** & -0.539 & -0.273 & 0.105 & 0.188 & 10.87 & 7.750 & 2.620 \\ \hline
**S184** & -0.524 & 0.179 & 0.090 & 0.163 & 8.30 & 7.790 & 2.330 \\ \hline
**S185** & -0.524 & -0.207 & 0.081 & 0.148 & 15.72 & 7.800 & 3.350 \\ \hline
**S261** & -0.332 & -0.176 & 0.145 & 0.211 & 1.50 & 8.610 & 2.460 \\ \hline
**S262** & -0.387 & 0.178 & 0.150 & 0.222 & 1.74 & 8.540 & 2.440 \\ \hline
**S263** & -0.350 & -0.171 & 0.163 & 0.233 & 2.85 & 8.770 & 2.630 \\ \hline
**S264** & -0.346 & -0.137 & 0.163 & 0.227 & 1.34 & 8.720 & 2.140 \\ \hline
**S265** & -0.335 & -0.172 & 0.200 & 0.275 & 1.13 & 8.830 & 2.530 \\ \hline
**S240** & -0.220 & 0.175 & 0.330 & 0.385 & 1.34 & 9.750 & 0.810 \\ \hline
**S241** & -0.265 & -0.138 & 0.328 & 0.405 & 1.84 & 9.780 & 0.340 \\ \hline
**S242** & -0.344 & -0.179 & 0.204 & 0.260 & 1.90 & 8.790 & 0.910 \\ \hline
**S243** & -0.276 & 0.131 & 0.228 & 0.270 & 1.85 & 9.180 & 1.040 \\ \hline
**S245** & -0.306 & -0.138 & 0.156 & 0.195 & 1.87 & 9.180 & 1.240 \\ \hline \end{tabular}
\end{table}
Table 5: Comparison of spectral measurements and laboratory analysis.

In other words, low values of NDWI and NDMI can indicate high levels of moisture and humus. However, the soil reaction (pH) in the Gobi area is relatively high compared to the forest area. A high reflectance value in the red (665 nm) region indicates a high-pH medium. The red (665 nm) and near-infrared (842 nm) light regions are the main spectral regions that represent soil moisture: low spectral reflectance in these regions indicates moist soil, while high reflectance identifies dry soil (Figure 9). It can be seen that the soil of the forest-steppe region is relatively moist compared to the soil of the Gobi region. The comparison of spectral measurements and laboratory analysis is shown in Table 5.
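The regression behind Equation (7) can be reproduced from the Table 5 values. The sketch below refits NDWI against laboratory moisture with ordinary least squares (small rounding differences from the published coefficients may remain):

```python
import numpy as np

# Spectroradiometer NDWI and laboratory moisture (%) for the 15 points in Table 5
ndwi = np.array([-0.377, -0.541, -0.539, -0.524, -0.524,   # S181-S185
                 -0.332, -0.387, -0.350, -0.346, -0.335,   # S261-S265
                 -0.220, -0.265, -0.344, -0.276, -0.306])  # S240-S245
moisture = np.array([8.41, 16.12, 10.87, 8.30, 15.72,
                     1.50, 1.74, 2.85, 1.34, 1.13,
                     1.34, 1.84, 1.90, 1.85, 1.87])

slope, intercept = np.polyfit(ndwi, moisture, 1)   # degree-1 least squares fit
r2 = np.corrcoef(ndwi, moisture)[0, 1] ** 2        # squared Pearson correlation
print(f"moisture = {slope:.3f}*NDWI + {intercept:.3f}, R^2 = {r2:.2f}")
# close to the published model: moisture = -44.133*NDWI - 11.553, R^2 = 0.74
```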
### Results of determination of surface soil moisture distribution from Sentinel-2 MSI satellite data
The atmospherically corrected Sentinel-2 L2A product was used to create a colour combination with channel 12 as red, channel 8 as green, and channel 3 as blue. The normalized difference water index (NDWI) was then calculated from green (560 nm, channel 3) and near-infrared (842 nm, channel 8), and the normalized difference moisture index (NDMI) from near-infrared (842 nm, channel 8) and shortwave-infrared (1610 nm, channel 11); the results are shown in Figure 10.
The results calculated from the Sentinel-2 satellite data, and the spatial distribution of surface soil moisture determined using the model shown in Equation (7), are presented in Figure 11; the graph comparing the moisture calculated from Sentinel-2 with the laboratory results is shown in Figure 12. Areas with high moisture are highlighted in blue in the spatial distribution.
## 4 Conclusion
The following conclusions are drawn from the research on surface soil moisture determination using Sentinel-2 satellite data and spectroradiometric measurements:
* The spectral reflectance measured during the field research showed the low reflectance of the moist forest-steppe soil spectrum and the high reflectance of the dry Gobi soil spectrum. This indicates that studying soil moisture by spectral measurement and analysis is feasible.
* The relationship between the spectral reflectance of the red (665 nm) and near-infrared (842 nm) regions shows that the soil of the forest-steppe region is relatively moist compared to the Gobi soil surface, the Gobi soil being the driest and the steppe soil drier than the forest-steppe soil.
* A model developed from a linear relationship between field-measured spectroradiometric reflectance and laboratory-determined moisture provides a formula for routine monitoring of moisture content using NDWI estimated from satellite data:
Soil moisture (%) = -44.133*NDWI - 11.553
* The NDWI calculated from the spectroradiometer, compared to the moisture determined in the laboratory, showed a linear correlation of R\({}^{2}\) = 0.74 (74%), and the moisture calculated from Sentinel-2 showed high agreement.
* Spatially, the results calculated from satellite data show that the forest-steppe and steppe areas have relatively high moisture compared to the Gobi region, and the total area representing the Gobi region has a lower moisture content. Based on these results, it is considered possible to continuously determine surface soil moisture from satellite data using NDWI over a certain space-time range and to use it for soil control and monitoring.
## Acknowledgements
Field data acquisition was made possible through funds provided by the project (grant: IIlyTx/BHXAY-2019/33) at the Institute of Geography and Geoecology, Mongolian Academy of Sciences, supported by the Mongolian Foundation for Science and Technology (MFST), Ministry of Education and Science (Mongolia).
## References
* [PERSON] et al. (1999) [PERSON], [PERSON] [PERSON], [PERSON] [PERSON], 1999. Soil reflectance. In: [PERSON] (Ed.). _Remote Sensing for the Earth Sciences, Manual of Remote Sensing 3_, John Wiley & Sons, New York :111-188.
* [PERSON] and Hanks (1965) [PERSON], [PERSON] [PERSON], 1965. Reflection of radiant energy from soils. Soil Sci. 100:130-138.
* [PERSON] (1952) [PERSON], 1952. Atmospheric radiation and its reflection from the ground. J. Meteorol. 9: 41-51.
* [PERSON] et al. (2001) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], 2001. Near-infrared reflectance spectroscopy-principal components regression analysis of soil properties. _Soil Science Society of America Journal_ 65: 480-490. doi.org/10.2136/sssaj2001.652480x.
* [PERSON] (2003) [PERSON], 2003. Classification of soil humus content.
* Meteorology Agency, Department of Climate and Environmental Analysis of Dundgovi Province. dundgovi.sagagar.gov.mn (13 July 2023).
* [PERSON] et al. (2002) [PERSON], [PERSON], [PERSON], & [PERSON], 2002. The potential of near-infrared reflectance spectroscopy for soil analysis - a case study from the Riverine Plain of south-eastern Australia. _Australian Journal of Experimental Agriculture_, _42_(5), 607-614. doi.org/10.1071/EA01172.
* ESA (2009) ESA. European Space Agency website: sentinels.copernicus.eu (13 July 2023).
* [PERSON] (1996) [PERSON], 1996. NDWI--A normalized difference water index for remote sensing of vegetation liquid water from space. _Remote sensing of environment_, _58_(3), 257-266. doi.org/10.1016/S0034-4257(96)00067-3.
* [PERSON] et al. (2009) [PERSON] [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], & [PERSON], 2009. Validation of soil moisture estimation by AMSR-E in the Mongolian Plateau. _Journal of the Remote Sensing Society of Japan_, 29(1), 271-281. doi.org/10.11440/rssj.29.271.
* Khentii-TUOSHG (2023) Khentii-TUOSHG. Department of Meteorological and Environmental Analysis of Khentii province. kheniti.tsagagar.gov.mn (13 July 2023)
* [PERSON] et al. (2021) [PERSON], [PERSON], [PERSON], Meng, [PERSON], [PERSON], [PERSON], [PERSON], & [PERSON], 2021. Assessing remotely sensed and reanalysis products in characterizing surface soil moisture in the Mongolian Plateau. _International Journal of Digital Earth_, 14(10), 1255-1272. doi.org/10.1080/17538947.2020.1820590.
* [PERSON] et al. (2006) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], & [PERSON], 2006. Mid- and near-infrared spectroscopic assessment of soil compositional parameters and structural indices in two Ferralsols. _Geoderma, 136_(1-2), 245-259. doi.org/10.1016/j.geoderma.2006.03.026.
* [PERSON] (1996) [PERSON], 1996. The use of the Normalized Difference Water Index (NDWI) in the delineation of open water features. _International journal of remote sensing_, _17_(7), 1425-1432. doi.org/10.1080/01431169608948714.
* [PERSON] et al. (2021) [PERSON], [PERSON], [PERSON], & [PERSON], 2021. Spatial distribution of soil moisture in Mongolia using SMAP and MODIS satellite data: a time series model (2010-205). _Remote Sensing_, 13(3), 347. doi.org/10.3390/rs13030347.
* NRCS Soil Survey Handbook (2017) NRCS Soil Survey Handbook. Soil Science Division Staff., 2017. Soil survey manual. [PERSON], [PERSON], and [PERSON] (eds.). _USDA Handbook_ 18. Government Printing Office, Washington, DC
* SNAP program available at: http://step.esa.int (13 July 2023).
* [PERSON] et al. (2007) [PERSON], [PERSON], [PERSON], & [PERSON], 2007. NIR-red spectral space based new method for soil moisture monitoring. _Science in China Series D: Earth Sciences_, _50_(2), 283-289. doi.org/10.1007/s11430-007-2004-6.
ISPRS | RESULTS OF USING SPECTRORADIOMETERS FOR IN SOIL MOISTURE OF MONGOLIAN STEPPE ECOSYSTEM
N. Batsaikhan, O. Lkhamjav, M. Chimid-Ochir, M. Togtokh, G. Ulgiichimeg
https://doi.org/10.5194/isprs-archives-xlviii-1-w2-2023-391-2023 | 2023 | CC-BY
Integrated bundle adjustment with variance component estimation - fusion of terrestrial laser scanner data, panoramic and central perspective image data
[PERSON]
Corresponding author: [PERSON]
[PERSON] Maas
Institute of Photogrammetry and Remote Sensing, Dresden University of Technology,
Helmholtzstrasse 10, 01069 Dresden, Germany - (danilo.schneider, hans-gerd.maas)@tu-dresden.de
###### Abstract
Terrestrial laser scanners and digital cameras can be considered largely complementary in their properties. Several instruments combine a laser scanner and a camera, with the laser scanner providing geometry information and the camera supplying surface colour information. These approaches to data fusion make sub-optimal use of the complementary properties of the two devices, as they assign a master-and-slave role to laser scanner and camera. A thorough exploitation of the complementary characteristics of both types of sensors should start in 3D object coordinate determination, with both devices mutually strengthening each other. For this purpose, a bundle adjustment for the combined processing of terrestrial laser scanner data and central perspective or panoramic image data, based on an appropriate geometric model for each sensor, was developed. Since different types of observations have to be adjusted simultaneously, adequate weights have to be assigned to the measurements in a suitable stochastic model. For this purpose, a variance component estimation procedure was implemented, which allows the appropriate characteristics of the measurement data (e.g. lateral precision of image data, reliability of laser scanner range measurement) to be exploited in determining 3D coordinates of object points. Finding optimal weights for the different groups of measurements leads to an improvement in the accuracy of 3D coordinate determination. In addition, the integrated scanner and camera data processing scheme allows for the optimal calibration of the involved measurement devices (scanner + camera self-calibration). Moreover, it is possible to assess the accuracy potential of the involved measurements. The presented paper describes the basic geometric models as well as the combined bundle adjustment with variance component estimation. First results, based on data from a 360\({}^{\circ}\) test field, are presented and analysed.
Bundle adjustment, variance component estimation, laser scanner, camera, panorama, calibration
## 1 Introduction
Several software packages nowadays provide the possibility of combined processing of terrestrial laser scanner data and photogrammetric image data, since the combination of three-dimensional point clouds and images presents promising prospects due to their complementary characteristics. For this reason manufacturers of terrestrial laser scanners also integrate digital cameras in their scanning hardware ([PERSON] et al., 2003; [PERSON] et al., 2004). In these integrated systems, the laser scanner usually represents the dominant device, while the image information is only used secondarily for the colouring of point clouds, texturing of surfaces or to support the interpretation in interactive laser scanner data handling. Beyond this, the use of images for the automatic registration of laser scanner datasets was suggested in previous approaches ([PERSON] & [PERSON], 2006; [PERSON] & [PERSON], 2006), as well as the automatic generation of orthophotos on the basis of image and range data ([PERSON], 2006).
The integrated analysis of terrestrial laser scanner data and photogrammetric image data provides a much larger potential ([PERSON] et. al., 2004; [PERSON] & [PERSON], 2006). Using the complementary characteristics of both sensor types consistently in a combined adjustment, laser scanner and camera may mutually benefit from each other in the determination of object geometry and in calibration ([PERSON] et. al., 2003).
In particular, high resolution cameras may be rather beneficial in a combined system, since the high angular accuracy of sub-pixel accuracy image measurements may help to improve the lateral accuracy of laser scanners. Adapting to the operating mode of most laser scanners, which cover a 360\({}^{\circ}\) field of view, the use of panoramic cameras may be an interesting alternative to conventional central perspective cameras. Panoramic cameras often have a very high resolution and a large accuracy potential for the determination of 3D object coordinates ([PERSON] & [PERSON], 2004; [PERSON] & [PERSON], 2005).
Based on the geometric models of laser scanner and camera, as well as a geometric model of panoramic cameras, which was developed at the Institute of Photogrammetry and Remote Sensing of the TU Dresden ([PERSON] & [PERSON], 2006), a combined bundle adjustment tool for the integrated processing of terrestrial laser scanner data, central perspective and panoramic image data was developed.
Since the procedure requires the simultaneous adjustment of different types of observations, it is necessary to assign adequate weights to the groups of measurements in the combined adjustment. These weights may be specified by the user, based on manufacturer specifications or practical experience. More rigorously, the weights can be determined automatically in the adjustment procedure by variance component estimation. Thus, the respective characteristics of the involved measurement devices will be optimally utilised, and an improvement of the adjustment results can be achieved ([PERSON], 2001; [PERSON], 2000). Results of variance component estimation in a combined adjustment of laser scanner and image data are also presented in ([PERSON] et al., 2003).
In this paper the implementation of a combined bundle adjustment with variance component estimation is described and analysed on the basis of multiple laser scans, central perspective and panoramic images in a 360\({}^{\circ}\) test field at TU Dresden.
## 2 Geometric Models
One precondition for the combined analysis of measurements from different devices (laser scanner, camera, panoramic camera) is the knowledge about the basic geometric models as well as their mathematical description. This allows for the calculation of object information (e.g. coordinates of object points) using different observations (range, angles, image coordinates) on the one hand and for the calibration of the involved measurement devices on the other hand, if the geometric models are extended by an appropriate set of additional parameters.
### Central perspective and panoramic images
Cameras with area sensors comply with the well-known central perspective model (Figure 1). Mathematically this is described by the collinearity equations. Usually these equations are extended by correction terms, which contain additional parameters ([PERSON], 1971; [PERSON], 1986) to compensate for errors caused by lens distortion and other effects.
Panoramic cameras are able to record a 360\({}^{\circ}\) horizontal field of view in one image, which is particularly beneficial for the recording of large interiors. Technically this is mostly realised by the rotation of a linear sensor. Panoramic cameras provide a very high resolution and accordingly a high accuracy potential. The panoramic camera model can be described by central perspective geometry in one coordinate direction only. The mapping process (Figure 2) can be represented by the projection onto a cylinder ([PERSON] & [PERSON], 2006; [PERSON], 2007).
The mathematical descriptions of the geometric models of central perspective and panoramic cameras (see [PERSON] & [PERSON], 2006 for the derivation) are:
\[\begin{split}& x^{\prime}=x^{\prime}_{0}-\frac{c\cdot x}{z}+dx^{\prime}\\ & y^{\prime}=y^{\prime}_{0}-\frac{c\cdot y}{z}+dy^{\prime}\end{split} \tag{1}\]
\[\begin{split}& x^{\prime}_{pano}=x^{\prime}_{0,pano}-c\cdot\arctan\!\left(\frac{-y}{x}\right)+dx^{\prime}_{pano}\\ & y^{\prime}_{pano}=y^{\prime}_{0,pano}-\frac{c\cdot z}{\sqrt{x^{2}+y^{2}}}+dy^{\prime}_{pano}\end{split} \tag{2}\]
The transformation into a uniform coordinate system occurs by:
\[\begin{split}& x=r_{11}(X-X_{0})+r_{21}(Y-Y_{0})+r_{31}(Z-Z_{0})\\ & y=r_{12}(X-X_{0})+r_{22}(Y-Y_{0})+r_{32}(Z-Z_{0})\\ & z=r_{13}(X-X_{0})+r_{23}(Y-Y_{0})+r_{33}(Z-Z_{0})\end{split} \tag{3}\]
where \(c\) = principal distance; \(x^{\prime}\), \(y^{\prime}\) = image coordinates; \(x^{\prime}_{0}\), \(y^{\prime}_{0}\) = principal point coordinates; \(x^{\prime}_{pano}\), \(y^{\prime}_{pano}\) = panoramic image coordinates (with principal point \(x^{\prime}_{0,pano}\), \(y^{\prime}_{0,pano}\)); \(X_{0}\), \(Y_{0}\), \(Z_{0}\) = coordinates of the projection centre; \(X\), \(Y\), \(Z\) = coordinates of object points; \(r_{ij}\) = elements of the rotation matrix; \(x\), \(y\), \(z\) = coordinates of object points in the local camera coordinate system
The correction terms \(dx^{\prime}\), \(dy^{\prime}\) as well as \(dx^{\prime}{}_{pano}\) and \(dy^{\prime}{}_{pano}\) contain additional parameters for the compensation of systematic errors, which are caused by the physical characteristics of the cameras.
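Equations (1)–(3) translate directly into code. The sketch below (function names are ours, and all correction terms \(dx^{\prime}\), \(dy^{\prime}\) are omitted) projects an object point into a central perspective and a panoramic image:

```python
import numpy as np

def local_coords(X, X0, R):
    """Eq. (3): transform object point X into the local device coordinate system."""
    return R @ (np.asarray(X, dtype=float) - np.asarray(X0, dtype=float))

def project_central(X, X0, R, c, x0=0.0, y0=0.0):
    """Eq. (1): central perspective projection; distortion terms dx', dy' omitted."""
    x, y, z = local_coords(X, X0, R)
    return x0 - c * x / z, y0 - c * y / z

def project_panoramic(X, X0, R, c, x0=0.0, y0=0.0):
    """Eq. (2): cylindrical panoramic projection; correction terms omitted."""
    x, y, z = local_coords(X, X0, R)
    # arctan2 resolves the full 360-degree azimuth range (an implementation
    # choice; the printed model only states an arctangent)
    return x0 - c * np.arctan2(-y, x), y0 - c * z / np.hypot(x, y)
```

Note that `arctan2` is used instead of a plain arctangent so that the azimuth quadrant is unambiguous over the whole panorama.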
### Laser scanner
Original measurement data of terrestrial laser scanners are spherical coordinates, i.e. range (\(D\)), horizontal angle (\(\alpha\)) and vertical angle (\(\beta\)). Therefore the geometric model can be described simply by the conversion of Cartesian into spherical coordinates (eq. 4). Applying equation (3), the local laser scanner coordinate system can be integrated into the uniform object coordinate system.
Figure 1: Central perspective camera model
Figure 3: Laser scanner basic model
Figure 2: Panoramic camera model
\[D =\sqrt{x^{2}+y^{2}+z^{2}}+dD \tag{4}\] \[\alpha =\arctan(\frac{y}{x})+d\alpha\] \[\beta =\arctan(\frac{z}{\sqrt{x^{2}+y^{2}}})+d\beta\]
Analogous to the camera model, additional parameters can be considered within the correction terms \(dD\), \(d\alpha\) and \(d\beta\) as an extension of the geometric model of terrestrial laser scanners. This allows for the compensation of systematic deviations from the basic model and thus for the calibration of laser scanners.
However, the calibration of terrestrial laser scanners is complicated by the fact that the manufacturers already implement geometric corrections inside the scanner, whose underlying model equations are mostly not known. Consequently, significant systematic effects can often not be detected in the residuals of the observations. Therefore only a distance offset (\(k_{0}\)) and scale (\(k_{S}\)) parameter were used in the geometric model (eq. 5) so far, and no corrections of the horizontal and vertical angle were considered.
\[dD=k_{S}\cdot D+k_{0} \tag{5}\]
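Equations (4) and (5) likewise translate into a few lines of code; in this sketch (our naming) the only calibration terms are the distance offset \(k_0\) and scale \(k_S\) of eq. (5):

```python
import numpy as np

def scanner_observations(X, X0, R, k0=0.0, kS=0.0):
    """Eq. (4) with the correction model of eq. (5): predicted range and
    horizontal/vertical angles for an object point. k0 = distance offset,
    kS = scale; angle corrections are not applied, as in the text."""
    x, y, z = R @ (np.asarray(X, dtype=float) - np.asarray(X0, dtype=float))
    D = np.sqrt(x * x + y * y + z * z)
    D = D + (kS * D + k0)                  # dD = kS * D + k0
    alpha = np.arctan2(y, x)               # horizontal angle
    beta = np.arctan2(z, np.hypot(x, y))   # vertical angle
    return D, alpha, beta
```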
## 3 Integrated bundle adjustment
Bundle adjustment allows for the orientation of an arbitrary number of images, using the image coordinates of object points as observations. The results of the calculation are the orientation parameters of the images, the 3D coordinates of object points and possibly camera self-calibration parameters. Extending this approach to the combined bundle adjustment means the integration of all laser scans, central perspective and panoramic images of each involved measurement device (scanner, camera, panoramic camera). The calculation follows the geometric constraint that all corresponding rays between object point and the instrument should intersect in their corresponding object point.
The spherical coordinates of object points measured with a laser scanner, as well as the image coordinates of a central perspective or panoramic camera, are introduced as observations in one combined coefficient matrix. Figure 4 shows a synthetic example of the structure of a design matrix.
The calculation is performed as a least squares adjustment. The results are the coordinates of object points, the position and orientation of each involved scan and image, the calibration parameters of the measurement devices as well as statistical values for the assessment of accuracies and correlations.
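The adjustment step itself is ordinary weighted least squares iterated to convergence. A minimal Gauss-Newton sketch (generic residual function, numerical Jacobian, and a single weight matrix P assembled from all observation groups; an illustration, not the authors' implementation) shows the computation:

```python
import numpy as np

def gauss_newton(residuals, x0, P, iters=10, eps=1e-6):
    """Weighted least squares: iteratively minimise v(x)^T P v(x), where v
    stacks the misclosures of all observation groups (scanner ranges and
    angles, image coordinates) and P encodes the stochastic model."""
    x = np.array(x0, dtype=float)
    for _ in range(iters):
        v = residuals(x)
        # numerical Jacobian = design matrix A (one column per unknown)
        A = np.empty((v.size, x.size))
        for j in range(x.size):
            xj = x.copy()
            xj[j] += eps
            A[:, j] = (residuals(xj) - v) / eps
        # normal equations: (A^T P A) dx = -A^T P v
        dx = np.linalg.solve(A.T @ P @ A, -(A.T @ P @ v))
        x += dx
        if np.linalg.norm(dx) < 1e-10:
            break
    return x
```

In a real bundle adjustment the Jacobian would of course be formed analytically and the normal equations exploited for sparsity; the sketch only illustrates the iteration structure.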
For the calculation of the bundle adjustment, software was developed at the Institute of Photogrammetry and Remote Sensing of TU Dresden, which also allows exporting a protocol and a visualisation file. All settings are displayed in a graphical user interface (Figure 5) and can be changed if necessary. In order to detect and eliminate outliers, a data-snooping procedure following ([PERSON], 1968) is applied.
Within a courtyard at TU Dresden, a 360\({}^{\circ}\) test field with ca. 100 retroreflective targets (circles with 5 cm diameter) was installed to practically verify the combined bundle adjustment. The dimensions of this courtyard are 45 m \(\times\) 45 m; the surrounding facades are 20 m high. The scanner used in the practical tests was a Riegl LMS-Z420i, whose operating software allows for the automatic determination of the centre of retroreflective targets by applying a centroid operator to the intensity image. Furthermore, multiple panoramas were captured with the KST EyeScan M3 metric panoramic camera (Schneider & Maas, 2006), as well as a large number of images from the digital SLR cameras Kodak DCS 14n and Nikon D100. The target image coordinates were determined using centroid and ellipse operators. In the following, the results of processing the data of several different sensor combinations in the test field will be shown.
### Example 1
This example shows the calculation of the 3D coordinates of 10 object points on a facade of the test field. Two laser scanner positions and two panoramic camera positions were stepwise introduced into the combined bundle adjustment in different constellations, and the standard deviations of the estimated object coordinates were analysed. Figure 6 shows the used configuration schematically.
Figure 4: Structure of design matrix (example)
Figure 5: User interface of combined adjustment
Using only 2 panoramic images for the bundle adjustment, the precision of the resulting object coordinates (mainly in imaging direction Y) is worse than the precision obtained from one laser scan (see Table 1). This can be explained by poor intersection geometry of the used panorama positions. Furthermore the potential of the high-resolution panoramic camera could not be exploited, since the retroreflective targets could not be illuminated properly and the subpixel potential of the image analysis operators could not be used to full extent. Nevertheless, the combination of both devices (at least one scan and one panoramic image) leads to a significant precision improvement.
While the laser scanner measurements improve the accuracy in depth direction, the image observations of the panoramic camera ensure a better precision in lateral coordinate direction. If further scans or images are added, the RMS of the standard deviations of object point coordinates can be minimized accordingly, as long as good intersection angles are maintained.
### Example 2
The next example analyses the precision improvement achieved by the use of additional central perspective images. For this purpose 4 laser scans, 5 panoramic images and a total of 62 images with the Kodak DCS 14n were recorded (Figure 7 shows a reduced number of camera positions). The recording configuration was chosen with regard to good intersection geometry. Furthermore, additional images taken with a Nikon D100 camera mounted on top of the laser scanner were captured and included in the calculation. Figure 8 shows the devices involved in this calculation example.
The results of this example show that the integration of additional panoramic or central perspective images has the potential to improve the accuracy of the calculated results in general. This can be realized in practice if the user takes additional images while the laser scan runs automatically, subsequently feeding the images into the calculation process. Similarly, the images of a camera mounted on top of a laser scanner, or of a camera integrated within the laser scanner hardware, can contribute to increasing the accuracy. The large number of additional images in the last calculation example in Table 2 may be unrealistic for practical use, but shows the accuracy potential of the combined bundle adjustment.
## 4 Variance component estimation
The combined bundle adjustment uses different types of observations simultaneously in order to estimate the unknown parameters. For this reason it is necessary to assign suitable weights to the different groups of observations (image coordinates in central perspective and panoramic images, range and angle measurements of the laser scanner). The weights can be defined as fixed values if a-priori standard deviations of the measurements are known (e.g. from manufacturer specifications) or if empirical values are available. However, the information content of the observations is not fully exploited in this case.
| Number of scans | Number of panoramas | \(\hat{\sigma}_{D}\) (mm) | \(\hat{\sigma}_{\alpha,\beta}\) (mm) | \(\hat{\sigma}_{x',y'\,pano}\) (pixel) | RMS\(_X\) (mm) | RMS\(_Y\) (mm) | RMS\(_Z\) (mm) |
| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| 1 | – | 7.45 | 4.92 | – | 2.82 | 6.83 | 3.36 |
| 1 | – | 5.56 | 4.91 | – | 2.55 | 5.22 | 2.93 |
| – | 2 | – | – | 0.55 | 4.18 | 14.15 | 4.99 |
| 1 | 1 | 5.56 | 4.85 | 0.59 | 2.25 | 5.07 | 2.61 |
| 1 | 2 | 5.84 | 4.88 | 0.62 | 1.96 | 4.81 | 2.51 |
| 2 | – | 6.87 | 6.30 | – | 2.53 | 4.83 | 2.95 |
| 2 | 1 | 6.48 | 5.65 | 0.68 | 2.12 | 4.47 | 2.51 |
| 2 | 2 | 6.21 | 5.42 | 0.65 | 1.91 | 4.33 | 1.88 |

Table 1: Example 1: Calculation results of different configurations (calculated with variance component estimation – see chapter 4)
Figure 8: Combined devices (Riegl laser scanner LMS-Z420i, panoramic camera EyeScan M3, Kodak DCS 14n, Nikon D100)
Figure 6: Imaging configuration 1 (schema)
Figure 7: Imaging configuration 2 (schema)
Using the variance component estimation procedure (VCE) it is possible to estimate optimal weights for each group of observations, as well as standard deviations of the observations, in the course of the bundle adjustment. This allows for the qualification of each group of measurements on the one hand and for an improvement of the adjustment results on the other hand, since the individual characteristics of the involved measurement devices can be optimally utilised ([PERSON], 2001; [PERSON], 2000). By separating the horizontal and vertical angle measurements of the laser scanner, as well as the horizontal and vertical image coordinates of the panoramic camera, into different groups of observations, it becomes possible to draw conclusions on the characteristics of each instrument. Furthermore, cameras or laser scanners with different accuracies can also be considered simultaneously.
The weights \(p_{i}\) of the observations are determined by the ratio of the variance of the unit weight \(\sigma_{0}^{2}\) and the variance of the observations \(\sigma_{i}^{2}\), i.e. \(p_{i}=\sigma_{0}^{2}/\sigma_{i}^{2}\), where \(\sigma_{i}\) can be derived from manufacturer's data or from empirical values. A constant value is set for \(\sigma_{0}\) (e.g. 0.01 in the presented examples). Subsequently, the estimated standard deviation of unit weight \(\hat{\sigma}_{0}\) shows whether the a-priori standard deviations of the observations were defined too pessimistically (\(\hat{\sigma}_{0}<\sigma_{0}\)) or too optimistically (\(\hat{\sigma}_{0}>\sigma_{0}\)).
| Observation group | | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 |
| :--- | :--- | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| Range (mm) | a-priori | (10.0) | (3.0) | (5.3) | **(2.0)** | (10.0) | (10.0) | (10.0) | (7.5) | (7.5) | (7.5) | (7.5) | (7.5) |
| | estimated | 5.73 | 6.89 | 5.54 | **2.0** | 14.8 | 13.2 | 10.6 | 5.23 | 5.21 | 5.24 | 5.22 | 5.23 |
| Horizontal/vertical angle (mm) | a-priori | (10.0) | (2.0) | (5.6) | (10.0) | **(2.0)** | (10.0) | (10.0) | (10.0) | (10.0)/(20.0) | (10.0) | (10.0)/(20.0) | (10.0)/(20.0) |
| | estimated | 5.73 | 4.59 | 5.86 | 13.1 | **3.0** | 13.2 | 10.6 | 5.57 | 4.23/6.64 | 5.58 | –/6.64 | –/6.64 |
| Panoramic \(x'\), \(y'\) (pixel) | a-priori | (0.5) | (0.5) | (0.5) | (0.5) | (1.00) | **(0.25)** | (1.00) | (0.5) | (0.5) | (0.5) | (0.5) | (0.5) |
| | estimated | 0.57 | 0.66 | 0.63 | 1.31 | 1.48 | **0.38** | 1.06 | 0.60 | 0.60 | 0.66 | 0.66 | 0.66 |
| Central perspective \(x'\), \(y'\) (pixel) | a-priori | (0.5) | (0.12) | (0.24) | (0.5) | (0.5) | (0.5) | **(0.12)** | (0.2) | (0.2) | (0.2) | (0.2) | (0.2) |
| | estimated | 0.29 | 0.29 | 0.25 | 0.65 | 0.74 | 0.66 | **0.13** | 0.24 | 0.24 | 0.24 | 0.24 | 0.23 |

Table 4: Combined bundle adjustment with different stochastic models (in brackets: standard deviations for the a-priori definition of observation weights; below: estimated a-priori standard deviations of the observations; in examples 9, 11 and 12 the horizontal and vertical angles form separate groups, given as horizontal/vertical)
| Example | Weighting | \(\hat{\sigma}_{0}\) | RMS\(_X\) (mm) | RMS\(_Y\) (mm) | RMS\(_Z\) (mm) | RMS\(_{XYZ}\) (mm) |
| :---: | :--- | :---: | :---: | :---: | :---: | :---: |
| 1 | Balanced (but too pessimistic overall) | 0.00573 | 1.58 | 1.71 | 1.16 | 2.60 |
| 2 | Balanced (but too optimistic overall) | 0.02296 | 1.63 | 1.83 | 1.15 | 2.71 |
| 3 | Balanced and realistic constant weights | 0.01046 | 1.59 | 1.74 | 1.19 | 2.64 |
| 4 | Unbalanced (range too optimistic) | 0.01309 | 1.78 | 1.78 | 2.03 | 3.23 |
| 5 | Unbalanced (angles too optimistic) | 0.01479 | 2.03 | 2.62 | 1.33 | 3.57 |
| 6 | Unbalanced (panoramic coordinates too optimistic) | 0.01320 | 2.12 | 2.31 | 1.32 | 3.40 |
| 7 | Unbalanced (central perspective coordinates too optimistic) | 0.0161 | 2.68 | 3.05 | 1.94 | 4.50 |
| 8 | VCE, 4 groups: \(D\); \(\alpha,\beta\); \(x',y'_{pano}\); \(x',y'\) | 0.01000 | 1.51 | 1.64 | 1.13 | 2.50 |
| 9 | VCE, 5 groups: \(D\); \(\alpha\); \(\beta\); \(x',y'_{pano}\); \(x',y'\) | 0.01000 | 1.56 | 1.68 | 1.04 | 2.52 |
| 10 | VCE, 5 groups: \(D\); \(\alpha,\beta\); \(x'_{pano}\); \(y'_{pano}\); \(x',y'\) | 0.01000 | 1.47 | 1.60 | 1.16 | 2.46 |
| 11 | VCE, 6 groups: \(D\); \(\alpha\); \(\beta\); \(x'_{pano}\); \(y'_{pano}\); \(x',y'\) | 0.01000 | 1.52 | 1.63 | 1.05 | 2.46 |
| 12 | VCE, 7 groups: \(D\); \(\alpha\); \(\beta\); \(x'_{pano}\); \(y'_{pano}\); \(x'\); \(y'\) | 0.01000 | 1.52 | 1.63 | 1.05 | 2.46 |
Table 3: Combined bundle adjustment results of 12 practical examples with different stochastic models (standard deviation of unit weight and RMS of the estimated object point coordinates)

If observations of the same type have to be processed, the variance-covariance matrix \(\Sigma\) is calculated as the product of \(\sigma_{0}^{2}\) and the cofactor matrix \(\mathsf{Q}\). In the case of a combined adjustment of different observation groups, the matrix \(\Sigma\) is split into components \(\Sigma_{i}=\sigma_{i}^{2}\,\mathsf{Q}_{i}\). The factors \(\sigma_{i}^{2}\) are the variance components to be estimated, which represent the a-priori measurement inaccuracies of each observation group. The calculation is carried out as described in ([PERSON], 1997; [PERSON], 2000).
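The iteration can be sketched for the simple diagonal case (\(\mathsf{Q}_{i}=\mathsf{I}\), \(\sigma_{0}=1\)): solve the adjustment with the current group weights, re-estimate each variance component from the group's weighted residual square sum divided by its redundancy, and repeat until the factors stabilise. This is an illustrative simplification, not the implementation used in the paper:

```python
import numpy as np

def vce(A, l, groups, sigma2, iters=50):
    """Iterative variance component estimation for the linear model l = A x + v
    with independent observations (Q_i = I) and sigma_0 = 1.
    groups: list of index arrays, one per observation group.
    sigma2: start values for the group variances (a-priori weights)."""
    sigma2 = np.array(sigma2, dtype=float)
    n = len(l)
    for _ in range(iters):
        w = np.ones(n)
        for g, s2 in zip(groups, sigma2):
            w[g] = 1.0 / s2                  # p_i = sigma_0^2 / sigma_i^2
        P = np.diag(w)
        N = A.T @ P @ A
        x = np.linalg.solve(N, A.T @ P @ l)  # weighted LS solution
        v = A @ x - l                        # residuals
        # redundancy numbers r_j = diag(I - A N^{-1} A^T P)
        r = np.diag(np.eye(n) - A @ np.linalg.solve(N, A.T @ P))
        # re-estimate each variance component: v_i^T P_i v_i / r_i
        # (converged when every correction factor is close to 1)
        for k, g in enumerate(groups):
            sigma2[k] *= ((w[g] * v[g]) @ v[g]) / r[g].sum()
    return sigma2, x
```

Run on two observation groups with different (unknown) noise levels, the routine recovers variances close to the true ones, which is exactly the behaviour the estimated values in Table 4 illustrate.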
Tables 3 and 4 show the results of 12 different practical examples. The weighting of examples 1 and 2 was balanced but too pessimistic and too optimistic, respectively. For example 3, well-balanced and realistic weights were used as constant values. Examples 4-7 started with unfavourable, unbalanced observation weights; examples 8-12 were calculated with integrated variance component estimation, each with a different constellation of observation groups (compare Table 4).
Generally it is noticeable that variance component estimation has the potential to contribute to an improvement of the accuracy, in particular if the precision of the involved instruments is not sufficiently well known a-priori (see Table 3). Table 4 demonstrates the capability of the calculation with variance component estimation to estimate the precision of the involved groups of measurements, largely independently of the definition of a-priori approximate weights.
The values in brackets served for the definition of weights for each observation group. The values below are the estimated a-priori standard deviations resulting from the bundle adjustment. This value is better than the value in brackets if the weighting was too pessimistic and worse if the weighting was too optimistic, which is particularly noticeable in examples 1 and 2. In examples 4-7 only one group of observations started with too optimistic standard deviations, which leads to overemphasized weights for this group of observations (highlighted in boldface in Table 4). The adjustment results deteriorate in these cases (see RMS of object coordinates in Table 3). The variance component estimation (examples 8-12) results in balanced weights and therefore in optimal adjustment outcomes. In these cases the values in brackets in Table 4 serve only for the definition of a-priori approximate weights; the values below are the variance components estimated within the adjustment with VCE. These variance components give realistic information about the precision of each observation group.
Furthermore, it is even possible to draw conclusions on differences between the horizontal and vertical angle precision of the laser scanner, as well as on differences in the horizontal and vertical image coordinate accuracy, in particular for panoramic cameras. In the future, the separation into more observation groups will be analysed (e.g. by use of different cameras or scans with different resolution, or separation into constant and distance-dependent variance components). In addition, the implementation and assessment of a free net adjustment (without datum points) with variance component estimation is planned. In order to assess the accuracy more realistically, independent test measurements and a comparison of the estimated object point coordinates with known object coordinates measured at a higher accuracy will be performed.
## References
* [PERSON] & [PERSON] (2006) [PERSON] & [PERSON], 2006: Automatic registration of terrestrial laserscanner data via imagery. _ISPRS Archives_. Vol. XXXVI, Part 5.
* [PERSON] (2007) [PERSON], 2007: Sensor modelling, terrestrial panoramic camera calibration and close-range photogrammetric network analysis. _Dissertation, ETH Zurich_.
* [PERSON] (1968) [PERSON], 1968: A testing procedure for use in geodetic networks. _Netherlands Geodetic Commission_, Vol. 2 (5), Delft.
* [PERSON] (1971) [PERSON], 1971: Close-Range Camera Calibration. Photogrammetric Engineering, Vol. 37, No. 8.
* [PERSON] (2006) [PERSON] & [PERSON], 2006: Registration of terrestrial laser scanning data using planar patches and image data. _ISPRS Archives_. Vol. XXXVI, Part 5.
* [PERSON] (1986) [PERSON], 1986: Real-Time Image Metrology with CCD cameras. _Photogrammetric Engineering and Remote Sensing_, Vol. 52, No. 11, pp. 1757-1766.
* [PERSON] et al. (2003) [PERSON]; [PERSON]; [PERSON], 2003: Modellierung terrestrischer Laserscanner-Daten am Beispiel der Marc-Anton-Plastik. _Österreichische Zeitschrift für Vermessung und Geoinformation (VGI)_, 91. Jahrgang (2003), 4: 288-296.
* acquisition techniques complementing one another. _ISPRS Archives_. Vol. XXXV, Part B5.
* [PERSON] (2001) [PERSON], 2001: Untersuchungen zur Feldprüfung geodätischer Instrumente mit Varianzkomponentenschätzung. _Diplomarbeit, Technische Universität Darmstadt_, unpublished.
* [PERSON] (1997) [PERSON], 1997: _Parameterschätzung und Hypothesentests in linearen Modellen_. Dümmler Verlag, Bonn, 3. Auflage.
* [PERSON] & [PERSON] (2004) [PERSON] & [PERSON], 2004: 3-D object reconstruction from multiple-station panorama imagery. _ISPRS Archives_. Vol. XXXIV, V/16.
* [PERSON] et al. (2004) [PERSON]; [PERSON]; [PERSON]; [PERSON], 2004: Untersuchungen zur Genauigkeit eines integrierten terrestrischen Laserscanner-Kamerasystems. _Beiträge der Oldenburger 3D-Tage 2004_: 108-113, Herbert Wichmann Verlag, Heidelberg.
* [PERSON] (2006) [PERSON], 2006: Combination of distance data with high resolution images. _Image Engineering and Vision Metrology, ISPRS Archives_. Vol. XXXVI, Part 5.
* [PERSON] & [PERSON] (2005) [PERSON] & [PERSON], 2005: Combined bundle adjustment of panoramic and central perspective images. _ISPRS Archives_. Vol. XXXVI, Part 5/W8.
* [PERSON] & [PERSON] (2006) [PERSON] & [PERSON], 2006: A geometric model for linear-array-based terrestrial panoramic cameras. _The Photogrammetric Record_, 21(115): 198-210, Blackwell Publishing Ltd., Oxford, UK.
* [PERSON] & [PERSON] (2000) [PERSON] & [PERSON], 2000: Varianzkomponentenschätzung in ingenieurgeodätischen Netzen; Teil 1: Theorie. _Allgemeine Vermessungsnachrichten_, 3/2000: 82-90.
* [PERSON] et al. (2003) [PERSON]; [PERSON]; [PERSON] [PERSON], 2003: Using hybrid multi-station adjustment for an integrated camera laser-scanner system. _Optical 3-D Measurement Techniques VI_, Vol. 1 (2003), 298-305.
* [PERSON] & [PERSON] (2006) [PERSON] & [PERSON] [PERSON], 2006: Simultaneous orientation of brightness, range and intensity images. _ISPRS Archives_. Vol. XXXVI, Part 5.
# Geospatial approach to wetland vulnerability assessment for Northwest Bangladesh
A.B.M. [PERSON]
[PERSON]
[PERSON]
Department of Geoinformation, Faculty of Built Environment and Surveying, Universiti Teknologi Malaysia, Johor Bahru, Malaysia. [EMAIL_ADDRESS], (imzan, abdwahid)@utm.my.
###### Abstract
Despite their inestimable environmental significance, wetlands around the world have been disappearing at an alarming rate. In a floodplain region like Northwest Bangladesh (NWBD), wetlands are exposed to numerous pressure factors which have made them susceptible. Assessment of these threats is essential to understand the state of the wetland ecosystem and to develop a suitable management strategy. Using Landsat images from 1989 to 2020, the present study assesses the physical vulnerability of the wetlands of NWBD. The remote sensing approach of the Modified Normalized Difference Water Index (MNDWI) has been employed to delineate the areal extent and to evaluate the temporal changes of these wetlands. The retrieved results have been used to develop a vulnerability index based on different statistical measures. Results reveal that between 1989 and 2020 NWBD lost 57.89% of its total wetland area, around 970.34 km². The retrieved results provide a clear trail of the spatial and temporal fluctuations of individual wetland area coverage in NWBD. Moreover, the decreasing trends of the indicators imply a very unsustainable hydrological condition in NWBD, which could be a serious threat to the existence of its wetlands in the near future. With some area-specific adjustments, this simple method of vulnerability assessment would help policymakers to conserve, manage, and restore the wetlands in NWBD and other areas as well.
Wetland Vulnerability Assessment, Northwest Bangladesh.
## 1 Introduction
Wetlands are considered amongst the earth's most productive ecosystems, providing a very diverse array of decisive ecological functions and values, whilst covering only 6% of the earth's land surface. However, they are also the most fragile and adaptive systems and are susceptible to numerous external stresses ([PERSON] et al., 2000). About half of the global wetland losses can be attributed to human activities and natural disasters ([PERSON], [PERSON], [PERSON], [PERSON], & [PERSON], 2021). With the incessant development and indiscriminate exploitation of natural resources, wetland systems have been losing their existence. Wetlands situated in developing countries, where population growth is very high and wetland protection is often overlooked by citizens and the responsible authorities, have been found to be in a particularly precarious state ([PERSON] & [PERSON], 2017). NWBD, with its vast floodplain lands, was once far-famed for its numerous wetlands. However, different human-induced local stresses, including the encroachment of agricultural and built-up land onto wetlands, the closure of tie channels linking wetlands to rivers, the closure of outlet channels, the excavation of new drainage routes, the construction of roads, bridges, culverts and dams, and groundwater extraction, have all contributed to the degradation of wetlands in NWBD ([PERSON] & [PERSON], 2019). Moreover, climate-induced phenomena such as changes in precipitation and temperature have aggravated this situation in the region. [PERSON] et al. (2021) found that during their study period from 1988 to 2018, around 2058.59 km² of wetland area disappeared from parts of NWBD. In their study, [PERSON], [PERSON], [PERSON], and [PERSON] (2013) indicated a partial loss of 341.54 km² of wetland between 1989 and 2010. Therefore, to minimize future degradation, it is imperative to quantify the rate of reduction of wetland areas situated in NWBD.
Remote sensing data at multiple scales, combined with geographical information system techniques, have brought about a revolution in fostering more proactive and effective approaches to resource management, particularly of wetlands ([PERSON] et al., 2010). Using synoptic, moderate- to high-resolution satellite imagery, scientists nowadays tend to explore micro-level wetland mapping.
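The MNDWI named in the abstract contrasts the green and shortwave-infrared bands, exploiting the strong SWIR absorption of open water. A minimal sketch (the band arrays are placeholders for Landsat green and SWIR reflectance; the zero threshold is a common but scene-dependent assumption, not a value stated by the study):

```python
import numpy as np

def mndwi(green, swir, eps=1e-12):
    """Modified Normalized Difference Water Index:
    MNDWI = (Green - SWIR) / (Green + SWIR)."""
    green = np.asarray(green, dtype=float)
    swir = np.asarray(swir, dtype=float)
    # eps guards against division by zero on no-data pixels
    return (green - swir) / (green + swir + eps)

def water_mask(green, swir, threshold=0.0):
    """Binary water map; water pixels tend to have MNDWI above zero,
    but the threshold should be tuned per scene."""
    return mndwi(green, swir) > threshold
```

Summing the mask over each annual image and multiplying by the pixel area then yields the wetland extent time series from which change rates are derived.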
Vulnerability maps can be employed to formulate priority-based policies for wetlands. The vulnerability of a wetland is determined by its exposure to and the risk of a particular event, the impact of the event on the wetland, and the wetland's ability to cope with or minimize that impact. The magnitude and nature of vulnerability vary widely across spatio-temporal scales ([PERSON], [PERSON], & [PERSON], 2011; [PERSON] & [PERSON], 2019). The components therefore diverge and require minor to significant adjustments as the geographical area changes. A landscape-scale geospatial assessment of wetlands in Wyoming was carried out by [PERSON] et al. (2010). The physical vulnerability of the wetlands of the Atreyee river basin was explored by [PERSON] and [PERSON] (2019) using logistic regression and a fuzzy logic approach for both pre- and post-dam periods. [PERSON], [PERSON], and [PERSON] (2011) developed a methodology using three wetland sites in Nepal that structures data collection and analysis and guides users in producing a vulnerability assessment of wetlands. A model to evaluate the ecological environment vulnerability of the Jixi National Wetland Park using remote sensing imagery, a digital elevation model, and an environmental quality interpolation process was established by [PERSON] et al. (2021). [PERSON] and [PERSON] (2017) developed a systematic methodology for assessing wetland vulnerability in a social-ecological approach, applying broad-scale ecosystem services and vulnerability functions and combining the hydro-geomorphic approach with DPSIR analysis. Most of these works address wetland vulnerability in terms of ecosystem health, water quality, exotic species invasion, wastewater disposal, etc.
([PERSON], 2009; [PERSON] and [PERSON], 2019). However, physical vulnerability assessments of wetlands carried out in a simple but effective manner are scarce. When landscape indicators are used for change analysis, the inter-annual variation of the surface water extent of wetlands indicates hydrological variability. Such vulnerability assessments provide a better understanding of wetland condition where the spatial extent of wetlands is highly dynamic and numerous pressure factors prevail. The human- and climate-induced physical vulnerability of the wetlands in NWBD has been found to be very pervasive. Therefore, considering these needs and challenges, the present study focuses on assessing the physical vulnerability of wetlands in NWBD with a simple but effective geospatial approach.
## 2 Study Area
Northwest Bangladesh comprises two divisions, namely Rajshahi and Rangpur. It has an area of 34,513 km\({}^{2}\) and a total population of 29,992,955 (2011 census). The region is part of the Ganges-Brahmaputra river system, a flat alluvial basin that is completely separated from the rest of the country by these rivers. More than a hundred rivers, including tributaries and distributaries, flow through the northwest region, constituting a network of interlinked waterways. The region extends between 23°48′ and 26°37′ N latitude and between 88°01′ and 89°48′ E longitude ([PERSON], 1991). It has a vast area of flat land with several depressions that have created a large number of natural freshwater wetlands in this region. The region has a tropical monsoon climate characterized by wide seasonal variation in rainfall (1200 mm-5000 mm), high temperature (ranging from 7°C to 40°C), and humidity (45%-92%). This diversified physiography, along with climate extremes, makes this region distinctive compared to the other regions of the country. Figure 1 illustrates the study area.
## 3 Materials and Methods
### Materials
The multispectral bands of Landsat (TM, ETM+, and OLI) imagery provide enhanced capability to derive Land Cover (LC) information ([PERSON] et al., 2020). Moreover, its broad swath (185 km), its spatial and temporal resolution (30 m and 16 days, respectively), and the free availability of long-term historical data have made Landsat data popular among researchers ([PERSON] and [PERSON], 2019). Several studies have successfully used Landsat satellite data for wetland change detection ([PERSON] and [PERSON], 2021; [PERSON] et al., 2020) and wetland vulnerability assessment ([PERSON] et al., 2018; [PERSON], [PERSON], [PERSON], & [PERSON], 2018; [PERSON] and [PERSON], 2019; [PERSON] et al., 2022; [PERSON] et al., 2021). Therefore, based on this literature review, Landsat data from different sensors (TM, ETM+, and OLI) have been used in this study. All images were obtained from the United States Geological Survey (USGS) website (https://glovis.usgs.gov). A limitation of these data is that the images were not acquired on the same date each year. However, near dates in February (or March as an alternative) were used for each year, considering minimum cloud cover and maximum availability of long-term historical data. The detailed specification of the employed images is given in Table 1.
### Data pre-processing and processing
The selected data were reprojected to the Universal Transverse Mercator coordinate system, zone 45N, on the WGS 1984 datum. To improve the interpretability and quality of the selected satellite data, radiometric calibration and atmospheric correction using the Fast Line-of-sight Atmospheric Analysis of Spectral Hypercubes (FLAASH) module were performed in ENVI 5.3 to convert digital numbers (DN) to reflectance and to filter interference from path radiation such as aerosol, dust particles, and water vapor.
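FLAASH is a proprietary ENVI module, so it cannot be reproduced here; as an illustrative sketch of the simpler first stage (DN to top-of-atmosphere reflectance), the standard USGS Landsat Level-1 rescaling formula can be applied band by band. The gain/offset arguments below stand in for the `REFLECTANCE_MULT_BAND_x` / `REFLECTANCE_ADD_BAND_x` fields of the scene metadata; this is not the atmospheric correction itself.

```python
import math

def dn_to_toa_reflectance(dn, refl_mult, refl_add, sun_elev_deg):
    """USGS Landsat Level-1 rescaling: rho = (M * DN + A) / sin(sun elevation).

    dn            -- quantized digital number of a pixel
    refl_mult     -- REFLECTANCE_MULT_BAND_x from the scene metadata
    refl_add      -- REFLECTANCE_ADD_BAND_x from the scene metadata
    sun_elev_deg  -- SUN_ELEVATION from the scene metadata, in degrees
    """
    return (refl_mult * dn + refl_add) / math.sin(math.radians(sun_elev_deg))
```

Applying this per band yields sun-angle-corrected TOA reflectance; FLAASH then goes further by modelling and removing the atmospheric path radiance.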
### Identifying wetland areas
Over the past few years, several spectral-index-based approaches have been developed and successfully employed to delineate surface water features. Among them, by substituting the shortwave infrared (SWIR1, formerly middle infrared, MIR) band for the near-infrared (NIR) band, [PERSON] (2006) proposed the MNDWI, which has become prevalent among scientists as it enhances open water features while efficiently suppressing or even removing built-up land noise as well as vegetation and soil noise. The MNDWI can be expressed as Eq. (1):

\[MNDWI=\frac{Green-SWIR1}{Green+SWIR1} \tag{1}\]
\begin{table}
\begin{tabular}{l c c c c} \hline \hline _Sensor Type_ & _Path_ & _Row_ & _Acquisition Dates_ & _Spatial Resolution_ \\ \hline Landsat TM5 & 138 & 42, 43 & 23/02/89, 24/02/95, 06/02/10, 13/02/10 & 30 m \\ Landsat ETM+7 & 138 & 42, 43 & 19/02/00, 26/02/00, 29/03/00 & 30 m \\ Landsat OLI & 139 & 42, 43 & 04/02/15, 11/02/15, 02/02/20, 09/02/20 & 30 m \\ \hline \hline \end{tabular}
\end{table}
Table 1: Details of used Landsat data.
Figure 1: Northwest Bangladesh
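As a minimal sketch of the water-extraction step, the MNDWI of Xu (2006) can be computed band-wise with NumPy and thresholded to obtain a wetland/landmass mask. A threshold of 0 is a common default for MNDWI; the study does not state its exact threshold, so it is exposed as a parameter.

```python
import numpy as np

def mndwi(green, swir1, eps=1e-12):
    """MNDWI = (Green - SWIR1) / (Green + SWIR1), computed per pixel.

    green, swir1 -- reflectance arrays of the same shape;
    eps guards against division by zero over dark pixels.
    """
    g = np.asarray(green, dtype=np.float64)
    s = np.asarray(swir1, dtype=np.float64)
    return (g - s) / (g + s + eps)

def water_mask(green, swir1, threshold=0.0):
    """Boolean open-water mask: pixels with MNDWI above the threshold."""
    return mndwi(green, swir1) > threshold
```

Water is brighter in green than in SWIR1, so positive MNDWI values flag open water while built-up land and vegetation fall below zero.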
## 4 Result and Discussion
### Wetland change evaluation
Spatiotemporal wetland maps of the Northwest region of Bangladesh for the years 1989, 2000, 2005, 2010, 2015, and 2020 were produced from the Landsat series of satellite images for a comprehensive evaluation of spatiotemporal change. Table 3 illustrates the spatiotemporal wetland areal coverage of the Northwest region of Bangladesh from 1989 to 2020 over the selected time spans. The total area of the Northwest region estimated from the Landsat images is about 34437.39 km\({}^{2}\). It is observed that the overall wetland area decreased throughout the study period. In 1989, the total wetland area was 1676.12 km\({}^{2}\), occupying 4.87% of the total area of the region, which was the highest during the study period. It then declined continuously at a disquieting, though inconsistent, rate. By the final state in 2020, the wetland area had dropped to 705.78 km\({}^{2}\), or 2.05% of the total study area. Figure 2 portrays the declining trend of wetland area from 1989 to 2020 in the Northwest region of Bangladesh. The R\({}^{2}\) value of 0.9515 indicates a strong declining trend of wetland area over time.
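The reported trend can be checked directly from the Table 3 values with an ordinary least-squares fit; the R\({}^{2}\) obtained this way (about 0.95) is consistent with the reported 0.9515, any small difference presumably coming from rounding or the exact model fitted.

```python
import numpy as np

# Wetland areal coverage from Table 3 (km^2).
years = np.array([1989, 2000, 2005, 2010, 2015, 2020], dtype=float)
areas = np.array([1676.12, 1289.31, 1009.92, 838.14, 811.92, 705.78])

def linear_trend_r2(x, y):
    """Fit y = slope*x + intercept and return (slope, R^2)."""
    slope, intercept = np.polyfit(x, y, 1)
    pred = slope * x + intercept
    ss_res = np.sum((y - pred) ** 2)
    ss_tot = np.sum((y - np.mean(y)) ** 2)
    return slope, 1.0 - ss_res / ss_tot
```

The negative slope (roughly -32 km\({}^{2}\) per year over the fitted line) matches the average loss rate reported in the conclusion.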
In the initial state in 1989, wetlands were well distributed among all five districts of the study area. High concentrations of wetlands were found in the northeast and middle parts of the study area, falling in the districts of Rangpur, Bogura, and Rajshahi, respectively. However, throughout the study area, an inconsistent loss of wetland area was observed in the years following 1989. In 2000, wetland areas increased in Dinajpur district, whereas in Bogura and Rajshahi districts they decreased.
In 2005, an enormous decrease of wetland areas was noticed in Rangpur and Rajshahi districts. After 2005, a massive decrease of wetlands was perceived in almost all districts in all following years. Yet some areas revealed indiscriminate increases of wetlands, especially in 2015 and 2020. It might be mentioned that man-made small and medium-sized ponds/tanks for fish culture contribute considerably to raising the total wetland areal coverage in some districts of the region. However, due to the spatial resolution of the Landsat imagery, all these micro-level expansions or changes of wetland areas cannot be evaluated very accurately. Figure 3 delineates the details of the spatiotemporal wetland distribution, and Table 4 explains the wetland change scenario in NWBD. Several previous studies indicate that anthropogenic factors such as rapid population growth, extensive expansion of agriculture, and economic and social development have contributed to these changes of wetland areas ([PERSON] et al., 2022; [PERSON], [PERSON], [PERSON], [PERSON], & [PERSON], 2022; [PERSON] et al., 2021). Table 5 depicts the district-wise temporal wetland distribution in NWBD.
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline **Accuracy Parameter** & **1989** & **2000** & **2005** & **2010** & **2015** & **2020** \\ \hline Overall Accuracy & 87.50 & 90.18 & 90.18 & 92.04 & 89.38 & 89.61 \\ \hline \multicolumn{7}{l}{User's Accuracy} \\ \hline Wetlands & 81.58 & 94.59 & 96.00 & 88.10 & 84.78 & 91.67 \\ Landmass & 83.33 & 93.75 & 82.69 & 95.29 & 97.06 & 90.00 \\ \hline \multicolumn{7}{l}{Producer's Accuracy} \\ \hline Wetlands & 88.57 & 97.22 & 85.71 & 92.50 & 97.50 & 89.19 \\ Landmass & 88.89 & 97.30 & 87.18 & 86.84 & 93.55 & 87.80 \\ \hline \multicolumn{7}{l}{Kappa Coefficient} \\ \hline \hline \end{tabular}
\end{table}
Table 2: Accuracy chart for image classification.
\begin{table}
\begin{tabular}{l c c} \hline \hline Year & Wetland Area (km\({}^{2}\)) & \% of Total Area \\ \hline 1989 & 1676.12 & 4.87 \\ 2000 & 1289.31 & 3.74 \\ 2005 & 1009.92 & 2.93 \\ 2010 & 838.14 & 2.43 \\ 2015 & 811.92 & 2.36 \\ 2020 & 705.78 & 2.05 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Areal coverage of wetlands in NWBD
Figure 2: Wetland change trends in NW Bangladesh.
### Wetland vulnerability assessment
The retrieved results have been employed to evaluate the vulnerability of these wetlands with the help of statistical analyses. A landscape indicator chart for wetland sustainability has also been developed in this regard.
#### 4.2.1 The inter-annual variation of wetlands
The inter-annual variation of the surface water extent of wetlands for each individual district, and for NWBD as a whole, has been examined using the coefficient of variation (CV%). The CV% (Eq. 2) is a statistical indicator of the inter-annual variation of a variable ([PERSON], 2000):

\[CV\%=\frac{\sigma}{\mu}\times 100 \tag{2}\]

where \(\sigma\) is the standard deviation and \(\mu\) is the mean of the wetland areal coverage over the study period. A high CV% indicates more variability (i.e., extreme fluctuations of the wetland water surface area) and a low CV% indicates less variability (i.e., a relatively constant water surface with minimal fluctuations) on a period-to-period basis.
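The CV% computation is a one-liner; fed with the Bogura series from Table 5, it reproduces the 78.43% reported in Table 6 (assuming, as the reproduced value suggests, that the sample standard deviation was used).

```python
import statistics

def cv_percent(values):
    """Coefficient of variation (%): sample std dev / mean * 100 (Eq. 2)."""
    return statistics.stdev(values) / statistics.mean(values) * 100.0
```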
The wetland areal coverage of each district extracted from the Landsat imagery has been used to calculate the CV% for the period 1989-2020. Table 6 illustrates the CV% for the whole NW region along with its 5 districts. The overall CV% of the NW region was 35.18%, which indicates a relatively low degree of variability during the study period. In contrast, the CV% of each district's wetland system presents the variability of wetland areal coverage in more detail. The results reveal that the CV% ranges from 78.43% for Bogura district (high variability) to 34.10% for Rajshahi district (moderate variability). Bogura and Rangpur districts (78.43% and 71.38%, respectively) exhibit very high fluctuations of wetland areal coverage, while the other three districts, namely Rajshahi, Pabna, and Dinajpur (34.10%, 46.86%, and 50.02%, respectively), reveal mild to moderate variability. This range of variability suggests that the wetland areal coverage in some districts can double or halve in size from period to period. These MNDWI-derived results provide a clear trail of the spatial and temporal fluctuations of wetland areal coverage in NWBD.
#### 4.2.2 Landscape indicators
Landscape indicators are used to accomplish landscape change analysis. To assess the change in wetland sustainability in NWBD, a landscape change analysis has been executed using the following landscape indicators: landmass area, total wet area, wet area density, wet area/landmass, wetland area, wetland density, average wetland size, wetland/landmass, river area, river density, river/wetland, and wet/non-wet area. The details are given in Table 7. The analysis of landscape indicators from 1989 to 2020 reveals that among the 12 indicators only the landmass area exhibits an increasing trend, while the other 11 indicators show declining trends. Here, landmass indicates built-up areas along with non-water areas such as agricultural fields, fallow lands, and forests. These decreasing indicator trends imply a very unsustainable hydrological condition in NWBD, which could be a serious threat to the existence of its wetlands in the near future.
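The ratio-based indicators can be sketched as below. The definitions are inferred from the indicator names in the text (they are assumptions, not the authors' exact formulas), and the test reproduces the 1989 "wet area density" and "wet areas/landmass" values of Table 7 from the areas in the same row.

```python
def landscape_indicators(wetland_area, river_area, landmass_area, total_area):
    """Ratio-based landscape indicators (definitions inferred from their
    names in the text; an illustrative assumption, not the authors' formulas).
    All areas in km^2."""
    wet_area = wetland_area + river_area  # total wet area = wetlands + rivers
    return {
        "wet_area_density": wet_area / total_area,
        "wet_per_landmass": wet_area / landmass_area,
        "wetland_per_landmass": wetland_area / landmass_area,
        "river_per_wetland": river_area / wetland_area,
        "wet_per_nonwet": wet_area / (total_area - wet_area),
    }
```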
\begin{table}
\begin{tabular}{l c c c c c} \hline \hline Name of Area & Mean & Maximum & Minimum & Std. Dev. & C.V. (\%) \\ \hline Northwest BD & 1069.80 & 1715.26 & 721.43 & 376.38 & 35.18 \\ Bogura & 179.57 & 402.76 & 51.84 & 140.83 & 78.43 \\ Dinajpur & 202.78 & 353.30 & 47.08 & 101.42 & 50.02 \\ Rajshahi & 342.72 & 508.35 & 220.16 & 116.71 & 34.10 \\ Rangpur & 279.22 & 580.22 & 33.93 & 199.30 & 71.38 \\ Pabna & 65.95 & 102.52 & 22.45 & 30.90 & 46.86 \\ \hline \multicolumn{6}{l}{Unit: km\({}^{2}\)} \\ \end{tabular}
\end{table}
Table 6: Coefficient of variation (%) in wetland areal coverage
\begin{table}
\begin{tabular}{l c c c c c c c c c c} \hline \hline \multirow{2}{*}{**Year**} & \multicolumn{2}{c}{**Bogura**} & \multicolumn{2}{c}{**Dinajpur**} & \multicolumn{2}{c}{**Rajshahi**} & \multicolumn{2}{c}{**Rangpur**} & \multicolumn{2}{c}{**Pabna**} \\ & (km\({}^{2}\)) & (\%) & (km\({}^{2}\)) & (\%) & (km\({}^{2}\)) & (\%) & (km\({}^{2}\)) & (\%) & (km\({}^{2}\)) & (\%) \\ \hline
1989 & 402.76 & 10.43 & 186.30 & 2.81 & 453.61 & 4.82 & 580.22 & 5.99 & 92.37 & 1.90 \\
2000 & 190.37 & 4.93 & 353.30 & 5.34 & 231.80 & 2.46 & 446.15 & 4.61 & 79.37 & 1.64 \\
2005 & 282.43 & 7.31 & 243.54 & 3.68 & 220.16 & 2.34 & 170.14 & 1.76 & 102.52 & 2.11 \\
2010 & 59.54 & 1.54 & 158.45 & 2.39 & 318.70 & 3.38 & 251.03 & 2.59 & 57.42 & 1.18 \\
2015 & 51.84 & 1.34 & 228.02 & 3.45 & 321.40 & 3.41 & 193.87 & 2.00 & 22.45 & 0.46 \\
2020 & 90.50 & 2.34 & 47.08 & 0.71 & 508.35 & 5.40 & 33.93 & 0.35 & 41.57 & 0.86 \\ \hline \hline \end{tabular}
\end{table}
Table 5: District-wise wetland distribution in NWBD (1989-2020)
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline **Assessment Area** & **1989** & **2000** & **2005** & **2010** & **2015** & **2020** \\ \hline Landmass Area (km\({}^{2}\)) & 31824.90 & 32614.70 & 32710.10 & 33044.50 & 33083.50 & 33210.90 \\ Total Wet Area (km\({}^{2}\)) & 2612.47 & 1822.68 & 1727.29 & 1392.93 & 1353.85 & 1226.45 \\ Wet Area Density & 0.075861 & 0.052927 & 0.050157 & 0.040448 & 0.039313 & 0.035614 \\ Wet Area/Landmass & 0.082089 & 0.055885 & 0.052806 & 0.042153 & 0.040922 & 0.036929 \\ Wetland Area (km\({}^{2}\)) & 1676.12 & 1289.31 & 1009.92 & 838.14 & 811.92 & 705.78 \\ \hline \hline \end{tabular}
\end{table}
Table 7: Landscape indicators in NWBD (1989-2020)
## 5 Conclusion
The present study evaluated the physical vulnerability of wetlands in NWBD. The employed methods reveal that massive destruction of wetlands took place in NWBD between 1989 and 2020. Within these 31 years, 57.89% of the total wetland area, around 970.34 km\({}^{2}\) in areal extent, disappeared from the study area at a rate of 31.30 km\({}^{2}\)/year. The districts of Rangpur, Bogura, and Dinajpur lost the largest wetland areas, respectively. However, Bogura was found most vulnerable, followed by Rangpur and Dinajpur. Agriculture, as a dominant phenomenon, has been encroaching upon and changing the land use and land cover types in the area. Rapid population growth and urbanization, along with economic growth, have aggravated the condition. Considering the threats and the value of wetlands, it is now imperative to take the necessary measures for the conservation of the existing wetlands of NWBD. Although only a few parameters were considered within the scope of this study, it is expected that it will help government organizations and policymakers to develop a national standard vulnerability assessment strategy incorporating more of the required parameters.
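The headline figures of the conclusion follow arithmetically from the first and last rows of Table 3, which makes them easy to verify:

```python
def wetland_loss_summary(area_start, area_end, years):
    """Return (area lost, loss rate per year, percentage lost) in km^2 units."""
    lost = area_start - area_end
    return lost, lost / years, lost / area_start * 100.0
```

Plugging in the 1989 and 2020 wetland areas over the 31-year span yields a loss of about 970.34 km\({}^{2}\), 31.30 km\({}^{2}\)/year, and 57.89% of the initial extent.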
## References
* [PERSON], [PERSON], and [PERSON] (2013). Estimation of the changes of wetlands in the northwest region of Bangladesh using Landsat images. 29, 1-6.
* [PERSON] and [PERSON] (2000). Global characteristics of stream flow seasonality and variability. Journal of Hydrometeorology, 47(2), 298-310.
* [PERSON] (2009). Wetlands and global climate change: the role of wetland restoration in a changing world. Wetlands Ecology and Management, 17(1), 17-44.
* [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], and [PERSON] (2022). Monitoring of land use and land cover changes by using remote sensing and GIS techniques at human-induced mangrove forest areas in Bangladesh. Remote Sensing Applications: Society and Environment, 2, 100699.
* [PERSON] and [PERSON] (2019). Urban expansion induced vulnerability assessment of East Kolkata Wetland using Fuzzy MCDM method. Remote Sensing Applications: Society and Environment, 13(1), 191-203.
* [PERSON], [PERSON], and [PERSON] (2011). A framework for assessing the vulnerability of wetlands to climate change.
* [PERSON], [PERSON], [PERSON], [PERSON], and [PERSON] (2022). Spatio-temporal changes of land use land cover and ecosystem service values in coastal Bangladesh. The Egyptian Journal of Remote Sensing and Space Science, 25(1), 173-180.
* [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], and [PERSON] (2021). Machine learning algorithm-based risk assessment of riparian wetlands in Padma River Basin of Northwest Bangladesh. Environmental Science and Pollution Research, 28(26), 34450-34471.
* [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], and [PERSON] (2022). Leveraging the historical Landsat catalog for a remote sensing model of wetland accretion in coastal Louisiana. Journal of Geophysical Research: Biogeosciences, 127(6), e2022JG006794.
* [PERSON] and [PERSON] (2012). Change detection techniques for land cover change analysis using spatial datasets: a review. Remote Sensing in Earth Systems Sciences, 4(3), 172-185.
* [PERSON] and [PERSON] (2017). Vulnerability assessment of wetland landscape ecosystem services using driver-pressure-state-impact-response (DPSIR) model. Ecological Indicators, 82(2), 293-303.
* [PERSON], [PERSON], and [PERSON] (2021). Impact of urbanization on land use and land cover change in Guwahati city, India, and its implication on declining groundwater level. Groundwater for Sustainable Development, 12(1), 100500.
* [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], and [PERSON] (2020). Examining the nexus between land surface temperature and urban growth in Chattogram Metropolitan Area of Bangladesh using long-term Landsat data. Urban Climate, 32(10), 100593.
* [PERSON] and [PERSON] (2019). Exploring physical wetland vulnerability of Atreyee river basin in India and Bangladesh using logistic regression and fuzzy logic approaches. Ecological Indicators, 98, 251-265.
* [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], and [PERSON] (2021). Spatio-temporal change analysis of three floodplain wetlands of eastern India in the context of climatic anomaly for sustainable fisheries management. Sustainable Water Resources Management, 7(3), 1-16.
* [PERSON], [PERSON], [PERSON], and [PERSON] (2013). Estimation of the changes of wetlands in the northwest region of Bangladesh using Landsat images. Paper presented at the 4th International Conference on Water & Flood Management (ICWFM-2013).
* [PERSON] (2019). Past and future trajectories of farmland loss due to rapid urbanization using Landsat imagery and the Markov-CA model: a case study of Delhi, India. Remote Sensing, 11(2), 180.
* [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], and [PERSON] (2000). Ecological-economic analysis of wetlands: scientific integration for management and policy. Ecological Economics, 35(1), 7-23.
* [PERSON] (2006). Modification of normalised difference water index (NDWI) to enhance open water features in remotely sensed imagery. International Journal of Remote Sensing, 27(14), 3025-3033.
* [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], and [PERSON] (2020). Four decades of wetland changes in Dongting Lake using Landsat observations during 1978-2018. Journal of Hydrology, 587, 124954.
* [PERSON], [PERSON], [PERSON], [PERSON], and [PERSON] (2021). Vulnerability assessment of management planning for the ecological environment in urban wetlands. Journal of Environmental Management, 298(11), 113540.
GEOSPATIAL APPROACH TO WETLAND VULNERABILITY ASSESSMENT FOR NORTHWEST BANGLADESH
A. B. M. A. Haider, M. I. B. Hassan, A. W. B. Rasib
https://doi.org/10.5194/isprs-archives-xlviii-4-w6-2022-139-2023, 2023, CC-BY
On the other hand, ([PERSON] et al., 2021) developed an original simulator of multi-temporal aerial LiDAR urban point clouds. The simulator was used to automatically build an annotated 3D CD dataset consisting of pairs of 3D point clouds labelled according to the synthetic changes imposed by the authors. Six different 3D CD methods were assessed, either operating directly on the 3D point clouds or using the Digital Surface Models (DSMs) generated from their rasterization ([PERSON] et al., 2021). In particular, the authors compared traditional methods, such as different types of thresholding and filtering algorithms applied to DSMs, with both a machine learning algorithm (a random forest fed with hand-crafted features) and two DL networks (a feed-forward network (FFN) and a Siamese network).
Our work, however, goes one step further by creating and sharing a dataset in which a 3D CD map, i.e. a map containing the change in elevation, is provided together with the 2D CD map and the corresponding pair of optical images. The main contribution behind the production of this dataset is to allow the development of DL algorithms that can automatically generate 3D CD maps using two aerial or satellite optical images acquired in different epochs as input, without the need of DSMs, as it will be highlighted in Section 3.
## 2 Related Datasets
As already pointed out in the introduction (Section 1), to design and build the proposed dataset, we considered the existing literature and, in particular, the open EO CD datasets already available to the scientific community. Particularly, we analysed the main features of the open CD datasets explicitly designed for the development of DL algorithms and containing optical images, annotated with 2D change maps, and/or LiDAR point clouds, from which 3D changes can be deduced. In general, there are many datasets that contain optical images, and thus suited to perform 2D CD tasks, less that include LiDAR PCs. Concerning this issue, the annotated 3D CD dataset released by ([PERSON] et al., 2021) - built through a simulator that introduces synthetic changes to LiDAR point clouds - could be an effective solution. However, at the best of our knowledge, no CD dataset containing both optical images, 2D CD maps and information about the corresponding elevation changes is currently available.
Among the 2D CD datasets designed for the development of DL algorithms, the SZTAKI Air change benchmark ([PERSON] and [PERSON], 2009, [PERSON] and [PERSON], 2008) was one of the first openly available and it is currently one of the most used in the RS community. It consists of 13 aerial image pairs, provided by the Hungarian Institute of Geodesy, Cartography and Remote Sensing or retrieved from Google Earth. The images have a spatial resolution of 1.5 m and each pixel is labelled as subjected to change or not.
The Semantic Change detectiON Dataset (SECOND) ([PERSON] et al., 2021) is a pixel-level annotated semantic CD dataset, which includes 4662 pairs of aerial images, with a size of 512\(\times\)512 pixels, acquired from different platforms and sensors, covering three Chinese cities. It is annotated with 6 LULC classes: (i) non-vegetated area, (ii) trees, (iii) low vegetation, (iv) water, (v) buildings and (vi) playing fields.
The Onera Satellite Change Detection (OSCD) ([PERSON] et al., 2018b) is a dataset composed of 24 multispectral aerial image pairs acquired by Sentinel-2, manually annotated as subjected to change or not at pixel-level.
The Deeply Supervised Image Fusion Network (DSIFN) ([PERSON] et al., 2020) is a DL method, proposed along with a dedicated dataset for the validation task. It is composed of 6 high resolution bi-temporal images, extracted from Google Earth. Specifically, it is made of 3600 image tile pairs for training, 340 for validation and 48 for testing. All the image tiles are characterised by a size of 512\(\times\)512 pixels.
The Sentinel-2 Multitemporal Cities Pairs (S2 MTCP) ([PERSON] et al., 2021) consists of 1520 pairs of Sentinel-2, level 1C, images covering different urban areas around the world, with a spatial resolution of 10 m and a size of 600\(\times\)600 pixels. This dataset was originally used in the paper for the self-supervised training step. The trained model was then validated on the aforementioned OSCD ([PERSON] et al., 2018b).
The Sun Yat-Sen University Change Detection (SYSU-CD) ([PERSON] et al., 2021) dataset was built to validate a deeply supervised (DS) attention metric-based network (DSAMNet). It consists of 20000 pairs of 0.5 m aerial images of size 256\(\times\)256 captured between 2007 and 2014 in Hong Kong. The dataset is annotated with six classes of LULC changes: (i) new urban buildings; (ii) suburban expansion; (iii) pre-construction earthworks; (iv) vegetation change; (v) road expansion; (vi) sea construction.
The S2 Looking ([PERSON] et al., 2021) is a building CD dataset, which consists of 5000 recorded bi-temporal image pairs of rural areas worldwide and more than 65,920 annotated change instances, indicating separately newly constructed and demolished buildings. The images are characterised by a size of 1024\(\times\)1024 pixels, with a spatial resolution ranging from 0.5 to 0.8 m/pixel.
Finally, the dataset provided in ([PERSON] et al., 2018) is a synthetic database containing 12,000 triples of synthetic images without object shift, 12,000 triples of model images with object shift and 16,000 triples of real RS image fragments.
## 3 Dataset Description
The proposed dataset covers the urban area of the city of Valladolid in Spain (Figure 1). The area of interest includes the historical and urban centre of the city and the surrounding commercial areas. The agricultural areas were not considered, since no significant changes in elevation were found there. Moreover, we selected and annotated only the changes affecting artificial structures, such as the construction and the demolition of buildings.
In particular, the dataset contains 472 (i) pairs of images cropped from optical orthophotos acquired through two different aerial surveys, performed in 2010 and in 2017, (ii) the corresponding LULC variation maps in raster format, i.e. the 2D CD maps, and (iii) the corresponding elevation variation maps in raster format, namely the 3D CD maps. The images contain three bands corresponding to the Red, Green and Blue channels. The main features of the data contained in the proposed dataset are described in Table 1.

Figure 1: Area included in the dataset
To build the dataset, we started from several pairs of aerial orthophotos freely available on the website of the (Organismo Autonomo Centro Nacional de Informacion Geografica, 2021), acquired in 2010 and in 2017 and covering the area of Valladolid. The original orthophotos are characterised by a Ground Sample Distance (GSD) of 0.25 m. To produce the DSMs needed for the generation of the 3D CD maps, we exploited the LiDAR data freely available on the website of the (Organismo Autonomo Centro Nacional de Informacion Geografica, 2021) for the same years and the same area. The DSMs were produced within QGIS (QGIS, 2022b) by rasterizing the original point clouds contained in the LAS files. The GSD of the DSMs is 1 m.
The first step of the dataset preparation was an automatic pre-processing phase, in which the images, both the optical ones and the corresponding DSMs, were cropped into smaller tiles covering a size of 200 m \(\times\) 200 m. In addition, to make the GSD of the orthophotos more similar to the GSD of the DSMs, the orthophotos were downsampled, degrading their GSD from 0.25 m to 0.50 m. At the end of this operation, 472 pairs of orthophotos with a size of 400\(\times\)400 pixels were produced together with the corresponding pairs of DSMs with a size of 200\(\times\)200 pixels. An example is shown in Figure 2.
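The tiling and downsampling steps above can be sketched with NumPy. The paper does not state which resampling kernel was used to degrade the GSD from 0.25 m to 0.50 m, so 2×2 block averaging is an assumption here; the tile size in pixels is a parameter.

```python
import numpy as np

def downsample_2x(img):
    """Halve the resolution by 2x2 block averaging (e.g. 0.25 m -> 0.50 m GSD)."""
    h = img.shape[0] // 2 * 2
    w = img.shape[1] // 2 * 2
    img = img[:h, :w].astype(np.float64)
    # Group pixels into 2x2 blocks; trailing dims (e.g. RGB) pass through.
    blocks = img.reshape(h // 2, 2, w // 2, 2, *img.shape[2:])
    return blocks.mean(axis=(1, 3))

def crop_tiles(img, tile_px):
    """Split an image into non-overlapping square tiles of tile_px pixels."""
    h, w = img.shape[0], img.shape[1]
    return [img[r:r + tile_px, c:c + tile_px]
            for r in range(0, h - tile_px + 1, tile_px)
            for c in range(0, w - tile_px + 1, tile_px)]
```

With a 0.50 m GSD, a 400-pixel tile side corresponds to the 200 m ground extent used in the dataset.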
After the aforementioned automatic phase of tile cropping, the raster 3D CD maps containing the elevation changes were generated through a simple difference between the DSMs (Figure 3):
\[\Delta H=H_{2017}-H_{2010} \tag{1}\]
In particular, we considered all the elevation changes characterised by values lower than one metre in absolute value as negligible with respect to the magnitude of elevation variations usually affecting buildings, and for this reason they were ignored (i.e. their value was set to zero). Then, a manual control step was carried out on the resulting 3D CD maps: by means of a visual comparison with the corresponding pair of orthophotos, only the pixels affected by a real change in elevation were retained, while the pixels in which no real change had occurred, and thus containing only noise, were set equal to zero. An example can be observed in Figure 4. A further check was carried out to assess the absence of coregistration errors, both in the orthophotos and in the DSMs.
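Equation (1) together with the 1 m threshold and the manual control step can be sketched as follows (an illustrative numpy version, not the authors' implementation; `valid_mask` stands in for the manual noise-removal decision):

```python
import numpy as np

def elevation_change(dsm_2010, dsm_2017, min_abs_change=1.0, valid_mask=None):
    """3D CD map: DSM difference (Eq. 1), with sub-threshold changes zeroed.

    valid_mask marks pixels confirmed as real change during the manual
    control step; everything outside it is treated as noise and zeroed.
    """
    dh = dsm_2017 - dsm_2010                     # Delta H = H_2017 - H_2010
    dh[np.abs(dh) < min_abs_change] = 0.0        # ignore |dH| < 1 m
    if valid_mask is not None:
        dh[~valid_mask] = 0.0                    # manual noise removal
    return dh

dsm_2010 = np.zeros((200, 200))
dsm_2017 = np.zeros((200, 200))
dsm_2017[50:60, 50:60] = 9.0   # a newly constructed ~9 m building
dsm_2017[0, 0] = 0.4           # sub-metre noise, below the threshold
cd3d = elevation_change(dsm_2010, dsm_2017)
```

The building footprint keeps its +9 m signal while the sub-metre fluctuation is zeroed out.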
Thus, for each pair of optical images, two CD maps were produced, focusing on the changes affecting only man-made artefacts (i.e. buildings, Figure 4). The first one is the 2D CD map, in which we annotated the pixels belonging to areas where a change in elevation occurred. These maps were constructed using the software QGIS (QGIS, 2022a), taking the 400\(\times\)400 orthophotos as reference and comparing them with the DSM difference maps. Then, pixels belonging to the areas affected by a change in elevation over the years were delineated. In particular, two classes were identified: (i) no change; (ii) changes due to construction (positive elevation change) or demolition (negative elevation change) of artefacts/buildings. The 2D CD maps are characterised by the same resolution (400\(\times\)400 pixels) as the orthophotos from which they derive, with a GSD of 0.50 m (Figure 5).
The second CD map is the 3D CD map (Figure 4), obtained from the difference between the DSMs as aforementioned, with a resolution of 200\(\times\)200 pixels and a GSD of 1 m.
Figure 3: 3D change detection map obtained through difference of the DSMs. The colour bar is expressed in meters

Figure 4: 3D change detection map obtained through difference of the DSMs (see Figure 3), after the manual removal of the noise. The colour bar is expressed in meters
Figure 2: Example of the data used to produce the dataset; a: Orthophoto referring to 2010; b: Orthophoto referring to 2017; c: DSM referring to 2010; d: DSM referring to 2017

Once the dataset was produced, the 472 quadruplets of images (Figure 9: two orthophoto tiles, one for 2010 and one for 2017, one 2D CD map and one 3D CD map) were divided into _train_, _test_ and _validation_ (val) subsets to permit their direct use for benchmarking, hence avoiding reproducibility issues potentially deriving from a random split of the dataset. Specifically, the division was carried out to ensure that the percentage of pixels with and without variations was similar in all three subsets (Table 2), also ensuring that the images included in the train subset contained pixels with all the elevation variation values (ranging from -25 m to 35 m, Figure 6). In particular, the train subset contains 320 images (\(\sim 68\%\)), the test subset 110 images (\(\sim 23\%\)) and the validation subset 42 images (\(\sim 9\%\)). Finally, Table 2 shows the percentages, averaged over all the images contained in each subset, of the pixels affected by change and the pixels where there was no change over the years.
## 4 Further developments
To complete the proposed dataset, we are currently developing baseline algorithms that can solve the 3D CD task. In particular, we are considering families of models that can approach the 3D CD task simultaneously with the 2D CD task. This strategy, in fact, would make it possible to output two masks containing more complete information, useful for different RS applications, such as those reported in Section 1. In particular, we are testing models based on a Siamese U-Net network, similar to the one developed in [1]. However, as previously stated, the models we are developing will differ from the above-mentioned models, as the loss will be composed of two terms: a classification term (e.g. cross-entropy, to solve the 2D CD task) and a regression term (e.g. mean squared error, to solve the 3D CD task). Finally, an attention-based model [10] is under development as well, given the effectiveness of this family of models for RS CD applications [11]. Moreover, we are already considering integrating the dataset with new pairs of optical images, accompanied by the respective 2D and 3D change masks, over areas already identified as subject to elevation variations. In conclusion, 3D CD is one of the main topics in the field of DL applied to RS, and the availability of open datasets, such as the one proposed in this work, is essential to develop and to validate algorithms able to solve this challenging task. With this contribution, we aim to show the robustness and effectiveness of the proposed dataset, emphasising its construction and validation process. Moreover, in parallel to the release of the dataset, we are developing models able to solve the 3D CD task in addition to the well-studied 2D CD task. In order to support further research on 3D CD, especially with DL methods, the dataset is publicly available at the following link: https://bit.ly/3wDdo41.
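The two-term loss described above can be sketched as follows; this is an illustrative numpy version, not the authors' implementation, and `weight` is an assumed balancing hyperparameter:

```python
import numpy as np

def combined_cd_loss(logits_2d, target_2d, pred_3d, target_3d, weight=1.0):
    """Two-term CD loss: binary cross-entropy on the 2D change mask
    plus a weighted mean squared error on the 3D elevation-change map."""
    p = 1.0 / (1.0 + np.exp(-logits_2d))            # sigmoid over logits
    eps = 1e-7                                      # numerical safety
    bce = -np.mean(target_2d * np.log(p + eps)
                   + (1.0 - target_2d) * np.log(1.0 - p + eps))
    mse = np.mean((pred_3d - target_3d) ** 2)
    return bce + weight * mse

rng = np.random.default_rng(0)
mask = (rng.uniform(size=(4, 4)) > 0.5).astype(float)   # 2D CD ground truth
dh_true = rng.normal(size=(4, 4))                       # 3D CD ground truth
loss_random = combined_cd_loss(rng.normal(size=(4, 4)), mask,
                               rng.normal(size=(4, 4)), dh_true)
loss_perfect = combined_cd_loss(np.where(mask > 0.5, 20.0, -20.0), mask,
                                dh_true, dh_true)
```

In a deep-learning framework the same two terms would be computed on the network's two output heads and backpropagated jointly.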
## References
* [1][[PERSON] et al.2018] [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], 2018. Street-view change detection with deconvolutional networks. _Autonomous Robots_, 42(7), 1301-1322.
* [2][[PERSON], [PERSON]] [PERSON], [PERSON], 2022. A Transformer-Based Siamese Network for Change Detection. _CoRR_, abs/2201.01293. https://arxiv.org/abs/2201.01293.
* [3][[PERSON], [PERSON]] [PERSON], [PERSON], [PERSON], 2008. A mixed markov model for change detection in aerial photos with large time differences. _2008 19 th International Conference on Pattern Recognition_, IEEE, 1-4.
* [4][[PERSON], [PERSON]] [PERSON], [PERSON], [PERSON], 2009. Change detection in optical aerial images by a multilayer conditional mixed Markov model. _IEEE Transactions on Geoscience and Remote Sensing_, 47(10), 3416-3430.
* [5][[PERSON], [PERSON]] [PERSON], [PERSON], 2013. A Novel Framework for the Design of Change-Detection Systems for Very-High-Resolution Remote Sensing Images. _Proceedings of the IEEE_, 101(3), 609-630.
* [6][[PERSON], [PERSON], [PERSON], [PERSON], Ma2018] [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], C., [PERSON], [PERSON], [PERSON], [PERSON], 2018. Automated Landslides Detection for Mountain Cities Using Multi-Temporal Remote Sensing Imagery. _Sensors_, 18(3). https://www.mdpi.com/1424-8220/18/3/821.
* [7][[PERSON], [PERSON], [PERSON]] [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], A., 2018a. Fully convolutional siamese networks for change detection. _2018 25 th IEEE International Conference on Image Processing (ICIP)_, IEEE, 4063-4067.
* [8][[PERSON], [PERSON], [PERSON], [PERSON]] [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], A., [PERSON], Y., 2018b. Urban change detection for multispectral earth observation using convolutional neural networks. _IGARSS 2018-2018 IEEE International Geoscience and Remote Sensing Symposium_, IEEE, 2115-2118.
* [9][[PERSON], [PERSON], [PERSON], [PERSON]] [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], 2019. Multitask learning for large-scale semantic change detection. _Computer Vision and Image Understanding_, 187, 102783.
Figure 9: Two examples from the proposed dataset. a) 2010 optical image, b) 2017 optical image, c) 2D CD map, d) 3D CD map, where the colour bar is expressed in meters

* de [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], 2021. Change Detection in Urban Point Clouds: An Experimental Comparison with Simulated 3D Datasets. _Remote Sensing_, 13(13). https://www.mdpi.com/2072-4292/13/13/2629.
* [PERSON] et al. (2013) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], 2013. Building change detection from multitemporal high-resolution remotely sensed images based on a morphological building index. _IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing_, 7(1), 105-115.
* [PERSON] et al. (2017) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], 2017. Forest Change Detection in Incomplete Satellite Images With Deep Neural Networks. _IEEE Transactions on Geoscience and Remote Sensing_, 55(9), 5407-5423.
* [PERSON] et al. (2018) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], 2018. CHANGE DETECTION IN REMOTE SENSING IMAGES USING CONDITIONAL ADVERSARIAL NETWORKS. _International Archives of the Photogrammetry, Remote Sensing & Spatial Information Sciences_, 42(2).
* [PERSON] et al. (2021) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], 2021. Self-supervised pre-training enhances change detection in sentinel-2 imagery. _International Conference on Pattern Recognition_, Springer, 578-590.
* [PERSON] et al. (2018) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], 2018. Long-Term Annual Mapping of Four Cities on Different Continents by Applying a Deep Information Learning Method to Landsat Data. _Remote Sensing_, 10(3). https://www.mdpi.com/2072-4292/10/3/471.
* [PERSON] et al. (2019) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], 2019. Deep learning in remote sensing applications: A meta-analysis and review. _ISPRS journal of photogrammetry and remote sensing_, 152, 166-177.
* [PERSON] et al. (2021) [PERSON], [PERSON], [PERSON], 2021. MARE: Self-Supervised Multi-Attention REsu-Net for Semantic Segmentation in Remote Sensing. _Remote Sensing_, 13(16), 3275. https://www.mdpi.com/2072-4292/13/16/3275.
* [PERSON] and [PERSON] (2018) [PERSON], [PERSON], 2018. IM2 HEIGHT: Height estimation from single monocular imagery via fully residual convolutional-deconvolutional network. _arXiv preprint arXiv:1802.10249_.
* [PERSON] et al. (2019) [PERSON], [PERSON], [PERSON], [PERSON], 2019. Airborne lidar change detection: An overview of Earth sciences applications. _Earth-Science Reviews_, 198, 102929.
* Organismo Autonomo Centro Nacional de Informacion Geografica, 2021. Digital elevation models and maps in image format. http://centrodedescargas.cnig.es/CentroDescargas/buscadorCatalog.do?codFamilia=LIDAR.
QGIS, 2022a. QGIS documentation. https://docs.qgis.org/3.22/en/docs/index.html.
QGIS, 2022b. QGIS documentation: DEM from LiDAR Data. https://docs.qgis.org/3.22/en/docs/training_manual/forestry/basic_lidar.html.
* [PERSON] et al. (2016) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], 2016. 3D change detection-approaches and applications. _ISPRS Journal of Photogrammetry and Remote Sensing_, 122, 41-56.
* [PERSON] et al. (2021) [PERSON], [PERSON], [PERSON], Y., [PERSON], H., [PERSON], H., [PERSON], D., [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], 2021. S2 Looking: A Satellite Side-Looking Dataset for Building Change Detection. _Remote Sensing_, 13(24), 5094.
* [PERSON] et al. (2021) [PERSON], [PERSON], [PERSON], M., [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], L., 2021. A deeply supervised attention metric-based network and an open aerial image dataset for remote sensing change detection. _IEEE Transactions on Geoscience and Remote Sensing_.
* [PERSON] et al. (2019) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], 2019. Comparative analysis of machine learning and point-based algorithms for detecting 3D changes in buildings over time using bi-temporal lidar data. _Automation in Construction_, 105, 102841.
* [PERSON] et al. (2017) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], 2017. Attention is all you need. _Advances in neural information processing systems_, 5998-6008.
* [PERSON] et al. (2021) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], 2021. Asymmetric Siamese Networks for Semantic Change Detection in Aerial Images. _IEEE Transactions on Geoscience and Remote Sensing_.
* [PERSON] et al. (2020) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], 2020. A deeply supervised image fusion network for change detection in high resolution bi-temporal remote sensing images. _ISPRS Journal of Photogrammetry and Remote Sensing_, 166, 183-200.
* [PERSON] et al. (2017) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], 2017. Deep learning in remote sensing: A comprehensive review and list of resources. _IEEE Geoscience and Remote Sensing Magazine_, 5(4), 8-36.
isprs | 3DCD: A NEW DATASET FOR 2D AND 3D CHANGE DETECTION USING DEEP LEARNING TECHNIQUES | V. Coletta, V. Marsocci, R. Ravanelli | https://doi.org/10.5194/isprs-archives-xliii-b3-2022-1349-2022 | 2022 | CC-BY

isprs/0fdcd1a0_a2d8_4bf8_bd61_627ec5f738c1.md
Earthquake Building Damage Mapping Based on Feature Analyzing Method from Synthetic Aperture Radar Data
[PERSON]
1 Institute of Engineering Mechanics, CEA. Key Laboratory of Earthquake Engineering and Engineering Vibration, CEA Institute of Crustal Dynamics, CEA Earthquake Administration of Tianjin Municipality
[PERSON]
2 Institute of Engineering Mechanics, CEA. Key Laboratory of Earthquake Engineering and Engineering Vibration, CEA Institute of Crustal Dynamics, CEA Earthquake Administration of Tianjin Municipality
[PERSON]
2 Institute of Engineering Mechanics, CEA. Key Laboratory of Earthquake Engineering and Engineering Vibration, CEA Institute of Crustal Dynamics, CEA Earthquake Administration of Tianjin Municipality
###### Abstract
Playing an important role in gathering information on damage to social infrastructure, Synthetic Aperture Radar (SAR) remote sensing is a useful tool for monitoring earthquake disasters. With the wide application of this technique, a standard method, comparing post-seismic to pre-seismic data, has become common. However, multi-temporal SAR acquisitions are not always available. Developing a building damage detection method that relies only on post-seismic data is therefore of great importance. In this paper, the authors present an experimental investigation towards an object-based, feature-analysing classification method for building damage recognition.
Building Damage Assessment, Earthquake Emergency, Feature Analysing, SAR
## 1 Introduction
Remote observation has become an essential method for data collection in the initial stage of earthquake relief, since it can provide disaster information on a large scale objectively and effectively. Nowadays, remote observation of seismic damage relies mainly on two types of sensors: optical sensors, which passively observe reflective and radiometric characteristics, and radar sensors, which actively emit microwaves. High-resolution optical images can be used to assess damage at the building level or to evaluate damage to the ground. Unfortunately, this method is not applicable in regions affected by cloud cover or snow. On the other hand, radar sensors can observe the ground irrespective of weather conditions or the time of day, and have therefore been gaining prominence as a reliable tool for grasping the overall picture of disaster damage.
For rapid damage assessment based on SAR data, there are mainly three techniques:

1) interpretation of a single post-event very-high-resolution (VHR) SAR image;
2) change detection on a pre-event and post-event data pair from the same sensor;
3) combination of optical data and post-event SAR data.
In recent years, some researchers have started to apply object-oriented image analysis techniques to building damage information extraction based on SAR data. [PERSON] and [PERSON] (2016) developed an object-based method to estimate building damage from TerraSAR-X data. [PERSON] and [PERSON] (2017) proposed an object-based building damage assessment method that uses only a post-event ALOS-2/PALSAR dual-polarimetric SAR intensity image, and they also applied machine learning methods to SAR-based building damage mapping.
In this article, the authors analysed the interpretation characteristics of damaged buildings in high-resolution SAR images, performed a statistical analysis of the texture features of block-scale image objects, and finally carried out an experiment applying a fuzzy classification method to block-scale building damage estimation.
## 2 Analysis of Image Features of Damaged Buildings
### General Study
To identify damaged buildings in high-resolution SAR images, the overall destruction of buildings should first be analysed in relation to the building orientation and the geometric characteristics of the SAR system, and factors of the surrounding scene should be considered. Finally, the damage degree is analysed using scattering characteristics.
Figure 1. Damaged buildings in different remote sensors. a: aerial photo, b: Cosmo-Skymed SM, c: TerraSAR-X spotlight, d: Cosmo-Skymed SP
The offset or interruption of the linear characteristics of buildings in SAR images (such as shadow or layover) can reflect different levels of damage, which is mainly caused by the loss of elevation. High-resolution SAR images can capture more detailed information about buildings, so that earthquake damage can be recognised. They can also detect small damage targets, such as debris, which is reflected in the image. The strong scattering of exposed reinforcement materials can be used as a damage criterion.
Figure 2. Offset of linear characteristic of a damaged building in post-event SAR image. a: aerial photo, b: TerraSAR-X spotlight
Figure 3. A cartoon profile for figure2
As described in Figure 3, the partially collapsed building resulted in an interruption of the linear characteristics in the image from the right-looking descending SAR. From the imaging mechanism, the authors can conclude that, because of blocking by the right building, the left building did not form a layover in that image. As its top is not smooth, diffuse reflection occurred there. The change of elevation between the front and back parts caused an offset in the shadow region, which can be used as a damage criterion in the image.
### Texture Feature of SAR Image
For SAR data, the image grey-level distribution expresses radar backscatter features. Different targets with similar scattering coefficients can show similar grey values in SAR images. As a result, SAR images often exhibit complex grey-level patterns. Therefore, it is possible to improve analysis precision by using the texture information of the target to compensate for the deficiency of grey-level features in SAR image analysis. Several experiments have indicated that texture features are effective for describing the overall distribution of the various kinds of objects in a SAR image.
There are many definitions of image texture; it is generally believed that texture features are expressed through the grey-level distribution of a pixel's neighbourhood and its surrounding space. Commonly used description methods include Grey Level Co-occurrence Matrices (GLCM), which are also widely applied in object-oriented image analysis. [PERSON] (1973) proposed using grey-level co-occurrence matrix statistics to describe the spatial relationships of texture between image pixels and put forward 14 GLCM texture characteristics. The GLCM texture features chosen by the authors in this article are as follows.
In the following formulas, \(i\) and \(j\) denote pixel grey levels and \(P_{i,j}\) is the corresponding normalised entry of the co-occurrence matrix.
1. Entropy

\[e=\sum_{i,j=0}^{N-1}P_{i,j}(-\ln P_{i,j}) \tag{1}\]
Entropy measures the uniformity of the texture distribution, i.e. the disorder of the image. The entropy weights have no specific spatial shape: they are determined only by the values in the co-occurrence matrix, independently of their position in the matrix. This parameter describes the overall nature of grey-level change in the region; the entropy value is small when the grey-level distribution in the region is concentrated.
2. Angular Second Moment (ASM)

\[a=\sum_{i,j=0}^{N-1}P_{i,j}^{2} \tag{2}\]
ASM describes the distribution uniformity of image grayscale. It can be used to detect the global homogeneity of texture.
3. Contrast

\[c=\sum_{i,j=0}^{N-1}P_{i,j}(i-j)^{2} \tag{3}\]
Contrast measures local grey-level changes and the degree of local variation in the image. It can be understood as the sharpness of the image and can be used to assess texture strength. The contrast weights grow quadratically with distance from the diagonal of the matrix, so the measure takes large values in high-contrast regions.
4. Inverse Difference Moment (IDM)

\[IDM=\sum_{i,j=0}^{N-1}\frac{P_{i,j}}{1+(i-j)^{2}} \tag{4}\]
IDM is also affected by image homogeneity. Because of its weighting, IDM decreases when the region is not uniform; conversely, higher IDM values indicate greater homogeneity.
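The four features of Eqs. (1)-(4) can be computed from a normalised co-occurrence matrix as sketched below (a minimal numpy illustration for a single displacement; production code would typically use an optimised routine such as `skimage.feature.graycomatrix`):

```python
import numpy as np

def glcm(img, dx=1, dy=0, levels=8):
    """Normalised grey-level co-occurrence matrix for one displacement."""
    P = np.zeros((levels, levels))
    h, w = img.shape
    for y in range(h - dy):
        for x in range(w - dx):
            P[img[y, x], img[y + dy, x + dx]] += 1
    return P / P.sum()

def glcm_features(P):
    """Entropy, ASM, contrast and IDM as defined in Eqs. (1)-(4)."""
    i, j = np.indices(P.shape)
    nz = P > 0                     # avoid log(0) in the entropy term
    return {
        "entropy": -np.sum(P[nz] * np.log(P[nz])),
        "asm": np.sum(P ** 2),
        "contrast": np.sum(P * (i - j) ** 2),
        "idm": np.sum(P / (1.0 + (i - j) ** 2)),
    }

flat = np.zeros((16, 16), dtype=int)                     # homogeneous block
noisy = np.random.default_rng(0).integers(0, 8, (16, 16))  # cluttered block
f_flat = glcm_features(glcm(flat))
f_noisy = glcm_features(glcm(noisy))
```

On a homogeneous block the entropy and contrast are zero while ASM and IDM reach their maximum of one, matching the intuition behind each measure (intact roofs tend to be homogeneous, debris fields cluttered).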
### Statistical Analysis
In the segmentation process of object-oriented image classification, the authors chose to use the block as the object unit for computing the texture feature statistics. The high-resolution SAR data used is mainly a TerraSAR-X spotlight mode image captured over the Dujiangyan area after the Great Wenchuan Earthquake. By comparing post-event ground survey data and aerial photos, the authors selected block samples of different damage degrees, as shown in Figure 4 and Figure 5. The authors then calculated the four main GLCM-based texture features for feature combination.
## 3 Feature Analyzing Based Fuzzy Classification
### Study Area
Our study area is the city of Dujiangyan, about 5.5 km from the epicentre of the Great Wenchuan Earthquake. Figure 7 shows the TerraSAR-X data used in the study and the study area.
Figure 4: Building samples in SAR image: intact
Figure 5: Building samples in SAR image: severely damaged
Figure 6: Feature statistics of intact and severely damaged buildings
Figure 7: TerraSAR-X data used in study and study area
## 4 Conclusion and Discussion
From the experiments, the authors concluded that these four texture features are effective for extracting seismic building damage information.
The limitations of this study are clear: the sample selection needs to be further improved. A practical approach for identifying earthquake damage over a whole large region is to use part of the disaster area for classification feature selection, since similar buildings have relatively close geometrical and texture characteristics across the whole region.
Furthermore, to what extent the structure type of buildings and the spatial distribution of buildings influence texture characteristics remains to be determined.
## Acknowledgements
This work is supported by the \"Science of Earthquake Resilience Program of China Earthquake Administration\" (XH16005Y).
## References
* [PERSON] et al. (2016) [PERSON], [PERSON], [PERSON], et al. [PERSON]. Feature analyzing Based Building Damage Mapping from the ALOS-2/PALSAR-2 SAR Imagery: Case Study of 2016 Kumamoto Earthquake. _Journal of Disaster Research_, Vol.12 No.sp pp. 646-655.
* [PERSON] et al. (2014) [PERSON], [PERSON], [PERSON] Markovian change detection of urban areas using very high resolution complex SAR images. _Geoscience and Remote Sensing Letters, IEEE_, 2014, 11(5): 995-999.
* [PERSON] et al. (2012) [PERSON], [PERSON] Multitemporal spaceborne SAR data for urban change detection in China. _Selected Topics in Applied Earth Observations and Remote Sensing, IEEE Journal of_, 2012, 5(4): 1087-1094.
* [PERSON] and [PERSON] (2009) [PERSON], [PERSON], 2009. Assessing different remote sensing techniques to detect land use/cover changes in the eastern Mediterranean. _International Journal of Applied Earth Observation and Geoinformation_ 11, 46-53.
* [PERSON] et al. (2017) [PERSON], [PERSON], [PERSON], et al. Object-Based Building Damage Assessment Methodology Using Only Post Event ALOS-2/PALSAR-2 Dual Polarimetric SAR Intensity Images. _Journal of Disaster Research_, 2017, 12(2):259-271.
* [PERSON] et al. (2005) [PERSON], [PERSON], [PERSON], 2005. A Context-Sensitive Bayesian Technique for the Partially Supervised Classification of Multitemporal Images. _IEEE Geoscience and Remote Sensing Letters_ 2, 352-356.
* [PERSON] (2010) [PERSON], 2010. Change Detection in Satellite Images Using a Genetic Algorithm Approach. _IEEE Geoscience and Remote Sensing Letters_ 7, 386-390.
* [PERSON] et al. (2012) [PERSON], [PERSON] [PERSON] Remote Sensing and Earthquake Damage Assessment: Experiences, Limits, and Perspectives. In: _Proceedings of the IEEE_, 2012, 100(10):2876-2890.
* [PERSON] et al. (2008) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], 2008. Monitoring Urban Land Cover in Rome, Italy, and Its Changes by Single-Polarization Multitemporal SAR Images. _IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing_ 1, 87-97.
* [PERSON] et al. (2016) [PERSON] [PERSON], [PERSON] [PERSON], [PERSON] [PERSON] Object-Based Method for Estimating Tsunami-Induced Damage Using TerraSAR-X Data (Special Issue on Disaster and Big Data). _Journal of Disaster Research_, 2016, 11.
* [PERSON] and [PERSON] (2003) [PERSON], [PERSON], 2003. A Multiscale Object-Specific Approach to Digital Change Detection. _International Journal of Applied Earth Observation and Geoinformation_ 4, 311-327.
* [PERSON] (2007) [PERSON] Mapping damage during the Bam (Iran) earthquake using interferometric coherence. _International Journal of Remote Sensing_, 2007, 28(6): 1199-1216.
* [PERSON] et al. (2011) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON] [PERSON], & [PERSON] [PERSON], Detection of building side-wall damage caused by the 2011 Tohoku, Japan earthquake tsunamis using high-resolution sar imagery.In: _National Conference in Earthquake Engineering_.2014
* [PERSON] and [PERSON] (2010) [PERSON] [PERSON], [PERSON] [PERSON] Building damage estimation by integration of seismic intensity information and satellite L-band SAR imagery. _Remote Sensing_, 2010, 2(9): 2111-2126.
* [PERSON] and [PERSON] (2003) [PERSON] [PERSON], [PERSON] [PERSON] Building damage mapping of the 2003 Bam, Iran, earthquake using ENVISAT/ASAR intensity imagery. _Earthquake Spectra_, 2005, 21(S1): 285-294.
* [PERSON] and [PERSON] (2000) [PERSON], [PERSON] [PERSON] Characteristics of satellite SAR images in the areas damaged by earthquakes. In: _Geoscience and Remote Sensing Symposium, 2000. Proceedings_. 6: pp. 2693-2696.
* [PERSON] and [PERSON] (2004) [PERSON] [PERSON], [PERSON] [PERSON] Use of satellite SAR intensity imagery for detecting building areas damaged due to earthquakes. _Earthquake Spectra_, 2004, 20(3): 975-994.
* [PERSON] et al. (2002) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], 2002. Unsupervised changedetection methods for remote-sensing images. _Optical Engineering_ 41, 3288-3297.
* [PERSON] and [PERSON] (2006) [PERSON], [PERSON], [PERSON], [PERSON], 2006. Generalized minimum-error thresholding for unsupervised change detection from SAR amplitude imagery. _IEEE Transactions on Geoscience and Remote Sensing_, 44, 2972-2982.

Figure 8: Building damage degree estimation at block scale
* [PERSON] et al. (2015) [PERSON], [PERSON], [PERSON], 2015. Building Change Detection in Multitemporal Very High Resolution SAR Images. _IEEE Transactions on Geoscience and Remote Sensing_ 53, 2664-2682.
* Plank, 2014. Rapid Damage Assessment by Means of Multi-Temporal SAR - A Comprehensive Review and Outlook to Sentinel-1. _Remote Sensing_, 6(6): 4870-4906.
* (2010) [PERSON], [PERSON]. Building-damage detection using post-seismic high-resolution SAR satellite data. _International Journal of Remote Sensing_, 2010, 31(13):3369-3391.
isprs | EARTHQUAKE BUILDING DAMAGE MAPPING BASED ON FEATURE ANALYZING METHOD FROM SYNTHETIC APERTURE RADAR DATA | L. An, J. Zhang, L. Gong | https://doi.org/10.5194/isprs-archives-xlii-3-39-2018 | 2018 | CC-BY

isprs/e6862295_0f1c_4131_ace7_55a0791c47ba.md
Measuring Land Uses Accessibility by Using Fuzzy Majority GIS-Based Multicriteria Decision Analysis Case Study: Malayer City
[PERSON]\({}^{*}\)
[PERSON]\({}^{\natural}\)
[PERSON]\({}^{\natural}\)
[PERSON]\({}^{\natural}\)
###### Abstract
Accessibility to public spaces has become one of the important factors in urban planning. Considerable attention has therefore been given to measuring accessibility to public spaces in the UK, US and Canada, but there are few studies outside the anglophone world, especially in developing countries such as Iran. In this study an attempt has been made to measure objective accessibility to public spaces (parks, schools, libraries and administrative services) using fuzzy majority GIS-based multicriteria decision analysis. This method defines priorities for the distribution of urban facilities and utilities as a first step towards the elimination of social injustice. In order to test and demonstrate the presented model, the comprehensive plan of Malayer city has been considered for ranking three objectives and properties in view of per capita indices (green space, sport facilities and major cultural centres such as libraries) and an access index. The results can be used to inform the local planning process, and the GIS approach can be expanded into other local authority domains. The results show that the distribution of facilities in Malayer city has followed a cost-benefit logic rather than the human aspects of resource allocation programming (from the centre to the suburbs of the city).
Footnote †: Corresponding author.
## 1 Introduction
This study analyses accessibility to urban facilities in Malayer (a city in the north-west part of Iran). Planning is carried out within the broad framework of government policy making and has its general objectives set out in legislation; urban planning is concerned with the management of urban change ([PERSON] and [PERSON], 2004; [PERSON] and [PERSON], 2002; [PERSON] and [PERSON], 2000). The desire to improve the quality of life (QOL) in a particular place is also an important focus of attention for planners ([PERSON], 2002; [PERSON] and [PERSON], 2007). The enterprise of planning as a public activity is strongly motivated and justified in terms of its potential contributions to citizen QOL ([PERSON] and [PERSON], 2009b; [PERSON], 2002).
From a policy and planning perspective, the location of most public spaces is determined by the spatial distribution of public services and facilities. Moreover, this is the area where social inequalities can be mitigated, or at least offset, by compensatory distribution ([PERSON] and [PERSON], 1998; [PERSON] et al., 2003). On the other hand, equity and efficiency considerations and local body conventions are also taken into account when determining public space locations. Urban public spaces influence the quality of life and the welfare of people both directly and indirectly; implicit in the public provision of amenities such as parks, recreational facilities and social and cultural services is a belief that they are beneficial to the wellbeing of residents ([PERSON], 2004; [PERSON] et al., 2003).
Accessibility is in turn an important factor which impacts all aspects of public space in both direct and indirect ways ([PERSON], 2004). Public facilities can be linked to accessibility, and thus residential proximity to facilities can be theorized as contributing to health and wellbeing in a number of ways. In addition to easier and more direct access to public places, proximity confers opportunities by reducing the time and financial costs of access, which in turn frees individual and household resources for use elsewhere ([PERSON] and [PERSON], 2009b; [PERSON] et al., 2006).
## 2 Literature Review
There are many studies of public space, from neighbourhood units ([PERSON] et al., 2007; [PERSON] et al., 2008a; [PERSON] and [PERSON], 2007; [PERSON] et al., 2008; [PERSON] et al., 2011) to the national level ([PERSON] et al., 2007; [PERSON] et al., 2008b; [PERSON] et al., 2011) which include a broad range of public spaces including access to green space ([PERSON] et al., 2010; [PERSON] et al., 2006; [PERSON] and [PERSON], 2011), access to health services ([PERSON] and [PERSON], 2009; [PERSON] et al., 2006), and access to open spaces ([PERSON] et al., 2010; [PERSON] et al., 2008).
Since 1993, the ease of implementing and using GIS has improved significantly. Great advances have been made in both the number and power of capabilities provided as standard functions in GIS packages, and the amount of easily available data, much of it downloadable over the Internet, has increased. These improvements have enabled the development of more sophisticated analytical applications in the accessibility field ([PERSON], 2001). Over the past decade, geographical information system (GIS) technology has been used by researchers for accessibility analysis ([PERSON] et al., 2000).
Measures commonly used in accessibility studies include the gravity potential, the average distance between each origin and all facilities, and the minimum distance (the distance from an origin to the nearest facility). Four types of distance can be used to calculate these measures: Euclidean distance (straight line), Manhattan distance (distance along the two sides of a right-angled triangle, the hypotenuse of which is the Euclidean distance), shortest network distance, and shortest network time ([PERSON] and [PERSON], 2004).
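These distance metrics and the average- and minimum-distance measures can be sketched in a few lines of Python (the coordinates below are purely illustrative):

```python
import math

def euclidean(p, q):
    """Straight-line (Euclidean) distance between two points."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def manhattan(p, q):
    """Distance along the two legs of the right-angled triangle."""
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def min_distance(origin, facilities, metric=euclidean):
    """Minimum-distance measure: distance from an origin to the nearest facility."""
    return min(metric(origin, f) for f in facilities)

def avg_distance(origin, facilities, metric=euclidean):
    """Average distance between an origin and all facilities."""
    return sum(metric(origin, f) for f in facilities) / len(facilities)

facilities = [(0.0, 0.0), (3.0, 4.0), (6.0, 0.0)]
origin = (3.0, 0.0)
print(min_distance(origin, facilities))   # 3.0
print(avg_distance(origin, facilities))
print(manhattan(origin, (3.0, 4.0)))      # 4.0
```

Network distances and travel times would require a road graph (e.g. a Network Analyst dataset, as used later in the paper) rather than these closed-form metrics.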
In 2004, [PERSON] and [PERSON] developed an integrated GIS approach to accessibility analysis, which provides a general framework for using GIS and travel impedance. [PERSON] (2001) measured accessibility to parks by using 'radius' and 'network analysis' methods. [PERSON] et al. (2006) used the kernel density method to measure spatial accessibility to health services. The shortest-network-distance method was used by [PERSON] et al. (2007) in order to measure the accessibility of facilities for public housing in Montreal. [PERSON] et al. (2008) used GIS-based network analysis to determine urban green space accessibility in Leicester, UK. In this paper we look at Malayer city, Hamedan province, for which both an appropriate urban context and proper data for our accessibility analysis are available. The method used to measure accessibility is based on a GIS-based multicriteria decision analysis model known as the fuzzy majority approach, which is considered a very competitive model in terms of accuracy and speed for classification problems.
## 3 Background
### Measuring accessibility
Accessibility is a frequently used concept, but there is no consensus about its definition. It is a common term, experienced by diverse individuals (i.e. characterized by different needs, abilities and opportunities) at any place and moment of the day, which results in considerable variation in the components included in its measurement and in how it is formulated ([PERSON] and [PERSON], 2009b).
Accessibility has been defined according to the purpose of the research, but it has commonly been defined as some measure of the spatial separation of human activities ([PERSON] and [PERSON], 1998) or with reference to a certain system of transport ([PERSON] et al., 2009; [PERSON] and [PERSON], 2002; [PERSON] and [PERSON], 2009a). Accessibility refers to the ease with which a building, place or facility can be reached by people and/or goods and services ([PERSON] and [PERSON], 2009b).
Accessibility can be measured in many ways, including container measures (e.g. the number of green spaces in each neighborhood unit), coverage (e.g. the number of kindergartens within 800 m of a residence), minimum distance (e.g. the distance from a neighborhood unit's centre to the nearest park), and service area (e.g. all areas within 800 m of kindergartens) ([PERSON] and [PERSON], 2009b). Another methodological issue is the calculation of per-capita provision. While measuring per-capita provision for land uses has received considerable attention in the US, UK and Canada, it has been calculated as the ratio of area to population ([PERSON], 1995).
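A minimal sketch of the container, coverage and per-capita calculations mentioned above (all numbers are hypothetical, chosen only for illustration):

```python
def container(features, zone_contains):
    """Container measure: number of facilities falling inside a zone."""
    return sum(1 for f in features if zone_contains(f))

def coverage(origin, facilities, threshold, dist):
    """Coverage measure: number of facilities within a threshold distance."""
    return sum(1 for f in facilities if dist(origin, f) <= threshold)

def per_capita(total_area_m2, population):
    """Per-capita provision: ratio of land-use area to population."""
    return total_area_m2 / population

# Plain Euclidean distance for the sketch; the paper itself uses road-network distances.
euclid = lambda p, q: ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5
kindergartens = [(100.0, 0.0), (500.0, 0.0), (900.0, 0.0)]
print(coverage((0.0, 0.0), kindergartens, 800.0, euclid))  # 2
print(per_capita(120_000, 40_000))                          # 3.0 m2 per resident
```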
The use of public facilities can be linked to accessibility, and thus residential proximity to facilities and services can be theorized as contributing to health and wellbeing in a number of ways. In addition, it confers opportunities by reducing the time and financial costs of access ([PERSON] et al., 2006).
In all studies in the field of public spaces, the authors have addressed different methods for analysing accessibility to public spaces ([PERSON] et al., 2000; [PERSON] et al., 2006) or presented a new method for measuring accessibility with respect to previous methods ([PERSON] et al., 2007; [PERSON] et al., 2008b; [PERSON] et al., 2003). Almost all of the models presented in the related literature are based on traditional Boolean logic, which is crisp, deterministic, and precise in nature and gives no room for human decision-making processes. Sharp boundaries are imposed to create categories in the thematic attribute, and a spatial entity can either belong or not belong to a set. However, the traditional cartographic modeling technique has proved to be quite awkward in some GIS applications where imprecision and vagueness prevail, because not all the entities in the spatial database can be uniquely defined, either in the set of attributes or in their spatial delineation ([PERSON], 1989).
As an alternative to Boolean logic, [PERSON]'s fuzzy set theory has been proposed as a new logical foundation for GIS design ([PERSON], 1988). The potential applications of fuzzy logic in spatial data collection, representation, retrieval and display have been discussed in the literature ([PERSON] and [PERSON], 2010). A fuzzy information representation scheme and its implementation in conventional GIS software were developed at the University of Guelph. [PERSON] (1990) reported a formal fuzzy logic-based specification framework for geographic information, in which rules of reasoning about time, space and accuracy have been stated in a subset of second-order calculus ([PERSON] and [PERSON], 2010).
### GIS-Based MultiCriteria Decision Analysis (GIS-MCDA)
GIS-MCDA is defined as a process that transforms and combines geographical data to obtain appropriate information for decision making. GIS-MCDA draws on the capabilities of both GIS and MCDA, and there are clear benefits in advancing theoretical and applied research on the integration of MCDA and GIS ([PERSON] and [PERSON], 2010; [PERSON] and [PERSON], 1999; [PERSON], 2004).
The most important advantage of GIS-MCDA methods is their capability to handle different views on the identification of the elements of a complex decision problem, to organize the elements into a hierarchical structure, and to study the relationships among the components of the problem. GIS-MCDA for group decision-making aggregates the individual judgments into a group preference so that the best compromise (the preferred alternative) can be identified ([PERSON] and [PERSON], 2010; [PERSON], 2006a, b). According to [PERSON] (2006), voting methods (social choice functions) are the most popular approach for a group decision-making solution in GIS-based multicriteria group decision-making (see [PERSON] and [PERSON], 2010; [PERSON], 2006a for more detail). [PERSON] and [PERSON] (2006) proposed a fuzzy majority approach to model the concept of majority opinion in group decision-making problems. The fuzzy majority concept generates a group solution by using a linguistic quantifier that corresponds to the majority of the decision-makers' preferences. The approach addresses the above-mentioned difficulties encountered by voting schemes in the combination process ([PERSON] and [PERSON], 2010; [PERSON] and [PERSON], 2006).
Under the mentioned circumstances, GIS-MCDA involves problem definition and structuring; selection of the evaluation criteria; criterion weighting (the procedure for entering the preferences on the importance of the criteria by each individual decision-maker); determination of individual preferences; combination of the individual judgments into a single collective preference; sensitivity analysis with respect to the set of evaluation criteria and alternatives; and final ordering of alternatives so that a compromise alternative can be selected. For more information please see ([PERSON] and [PERSON], 2010; [PERSON] et al., 1997; [PERSON] and [PERSON], 2000; [PERSON], 2006b).
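The quantifier-guided weighting at the heart of such combination steps can be sketched in a few lines of Python. This is a generic illustration using a regular increasing monotone (RIM) quantifier Q(r) = r^alpha; the alpha value and criterion scores are illustrative assumptions, not values from the paper:

```python
def rim_quantifier(r, alpha):
    """RIM linguistic quantifier Q(r) = r**alpha; alpha > 1 approximates
    'most', alpha < 1 approximates 'at least a few'."""
    return r ** alpha

def owa_weights(n, alpha):
    """Quantifier-guided OWA weights: w_i = Q(i/n) - Q((i-1)/n)."""
    return [rim_quantifier(i / n, alpha) - rim_quantifier((i - 1) / n, alpha)
            for i in range(1, n + 1)]

def owa(values, weights):
    """Ordered weighted averaging: weights apply to the values
    sorted in descending order, not to particular criteria."""
    ordered = sorted(values, reverse=True)
    return sum(w * v for w, v in zip(weights, ordered))

w = owa_weights(4, 2.0)            # 'most'-like quantifier
print(w)                           # [0.0625, 0.1875, 0.3125, 0.4375]
print(owa([0.9, 0.4, 0.7, 0.2], w))
```

Because the larger weights fall on the lower-ranked values when alpha > 1, the aggregate is pulled towards scores that most criteria agree on, which is what the "majority" semantics requires.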
## 4 Methodology
This section outlines the research method used to develop the mapping model, including the fieldwork, design and data analyses. The primary considerations in any research work are the data used and the methods employed; the research questions determine whether secondary data sources suffice or whether primary data collection for the parameters used in the study is required. The methodology of this case study contains three phases: data collection, data preparation, and data analysis.
### Data Collection
The whole of Malayer city is divided into 52 blocks, of which 25 contain the residential part of the city. In this research, 7 layers (georeferenced to the UTM projection with WGS84 as datum) have been used. These layers contain information about the locations of kindergartens, primary schools, secondary schools, high schools, hygienic (health) facilities, administration and sport facilities. The locations of these facilities are represented by polygon geometry. GPS techniques were used for collecting accurate attributes, and the updated attributes were then attached to the concerned layers using ArcGIS 9.2.
### Data Preparation
The data preparation phase contains the following steps:

1. Creating the road network for network analysis
2. Creating the hexagonal point layer as incidents
3. Creating point feature classes as facilities (e.g. Fig. 1)
4. Extracting the closest distance from the centre of each hexagon to each attribute layer (e.g. Fig. 2)
5. Modeling and automation (Fig. 3)
6. Merging the tables using SPSS
7. Merging the attributes and creating the final hexagonal layer
8. Software customization for automation of the process.
In this research, in order to automate the extraction of the .dbf files of closest distances, a model has been created. In the model, the "Make Closest Facility" tool is used to calculate the closest-facility distance for each hexagon centre; the route layer from the Network Analyst is then chosen with the "Select Data" tool. In the last step, the .dbf file is extracted with the "Table Select" tool. After extracting the .dbf file for each layer, the files are merged to produce a final .dbf for the next step. The result after merging is a layer which contains the closest distance from each hexagon's centre to the facilities, and this is the layer used in the analysis phase.
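The merging step, which turns one closest-distance table per facility layer into a single record per hexagon, can be sketched in Python. The layer names and hexagon identifiers below are illustrative, not taken from the study's data:

```python
def merge_distance_tables(tables):
    """Merge per-layer closest-distance tables into one record per hexagon.
    tables: {layer_name: {hex_id: closest_distance}}
    returns: {hex_id: {layer_name: closest_distance}}"""
    merged = {}
    for layer, rows in tables.items():
        for hex_id, dist in rows.items():
            merged.setdefault(hex_id, {})[layer] = dist
    return merged

# Hypothetical closest distances (metres) for two hexagons and two layers
tables = {
    "school": {1: 210.0, 2: 540.0},
    "park":   {1: 95.0,  2: 130.0},
}
print(merge_distance_tables(tables))
# {1: {'school': 210.0, 'park': 95.0}, 2: {'school': 540.0, 'park': 130.0}}
```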
The analysis phase is concerned with developing a framework for GIS-based multicriteria group decision-making using the fuzzy majority approach (for more information about the model see ([PERSON] and [PERSON], 2010; [PERSON], 2013)). The procedure for solving a spatial group decision-making problem involves two stages. First, each decision-maker solves the problem individually. Second, the individual solutions are aggregated to obtain a group solution.
The first stage is operationalized by a linguistic quantifier-guided ordered weighted averaging (OWA) procedure to create individual decision-maker's solution maps. Then the individual maps are combined using the fuzzy majority procedure to generate the group solution map which synthesizes the majority of the decision-makers' preferences.
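A minimal sketch of how individual decision-makers' solution maps might be combined cell by cell under a "most"-like quantifier. The maps, cell values and alpha parameter are all illustrative assumptions, not the study's actual data:

```python
def owa_weights(n, alpha=2.0):
    """Quantifier-guided OWA weights for a RIM quantifier Q(r) = r**alpha."""
    return [(i / n) ** alpha - ((i - 1) / n) ** alpha for i in range(1, n + 1)]

def fuzzy_majority_cell(scores, alpha=2.0):
    """Aggregate one cell's suitability scores from all decision-makers.
    With alpha > 1, the larger weights fall on the lower scores, so the
    result reflects what the majority of decision-makers agree on."""
    w = owa_weights(len(scores), alpha)
    return sum(wi * s for wi, s in zip(w, sorted(scores, reverse=True)))

# Three decision-makers' suitability maps, two cells each (values in [0, 1])
maps = [[0.9, 0.2], [0.8, 0.3], [0.3, 0.4]]
group_map = [fuzzy_majority_cell([m[c] for m in maps]) for c in range(2)]
print(group_map)
```

In the first cell, two decision-makers rate suitability high (0.9, 0.8) and one low (0.3); the "most" quantifier pulls the group value down towards the dissenting score rather than simply averaging.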
Figure 1: The centers of administration on the hexagonal layer of Malayer city (updated 2009)
## 6 Discussion and Conclusion
In this paper an attempt has been made to present a methodology to evaluate land based on the distance needed to reach activity places. A series of "subjective" measures of accessibility, based on distances along the road network, was built for Malayer city. Furthermore, the distribution of distances was summarized using fuzzy logic in order to qualify, for each type of layer and for each hexagon, the suitability of every service point located in the GIS, and to build perceptual accessibility indices.
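The fuzzy qualification of distances can be illustrated with a simple linear membership function. The 300 m and 1200 m breakpoints below are assumptions chosen for illustration, not the thresholds used in the study:

```python
def distance_suitability(d, full=300.0, none=1200.0):
    """Linear fuzzy membership for a network distance d (metres):
    distances up to `full` are fully suitable (1.0), distances beyond
    `none` are not suitable (0.0), with a linear ramp in between."""
    if d <= full:
        return 1.0
    if d >= none:
        return 0.0
    return (none - d) / (none - full)

print(distance_suitability(200.0))   # 1.0
print(distance_suitability(750.0))   # 0.5
print(distance_suitability(1500.0))  # 0.0
```

Unlike a Boolean cut-off at a single threshold, the ramp preserves the gradation between "near" and "far", which is what allows the OWA and fuzzy majority procedures to operate on degrees of suitability.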
The study presented the fuzzy majority approach using the OWA procedure for GIS-based multicriteria decision-making and its implementation in the ArcGIS environment. Fuzzy logic addresses the challenge of converting human language into mathematical formulation, which in turn paves the way for fuzzy weighting methods, quantifier-guided OWA and fuzzy majority procedures. There are undoubtedly disadvantages in using Boolean logic: in particular, it is not possible to be precise regarding the role played by specific land properties, and there are errors in the data as a result of spatial variability. In this study we have tried to demonstrate the advantage of the fuzzy method in multicriteria decision making.
The results can be used to inform the local planning process, and the GIS approach can be expanded into other local authority domains. The results show that the distribution of facilities in Malayer city has followed the cost-benefit law and the human aspect of resource-allocation programming of facilities (from the centre to the suburbs of the city).
## References
* [PERSON] and [PERSON] (2004) [PERSON], [PERSON] [PERSON], 2004. Objectives, methods and results of surveys carried out in the field of urban freight transport: an international comparison. Transport Reviews 24, 57-77.
* [PERSON] et al. (2007) [PERSON], [PERSON], [PERSON], 2007. The case of Montreal's missing food deserts: evaluation of accessibility to food supermarkets. International journal of health geographics 6, 4.
* [PERSON] and [PERSON] (2010) [PERSON], [PERSON] [PERSON], 2010. Using the fuzzy majority approach for GIS-based multicriteria group decision-making. Computers & Geosciences 36, 302-312.
* [PERSON] et al. (2010) [PERSON] [PERSON], [PERSON], [PERSON], 2010. The relationship of physical activity and overweight to objectively measured green space accessibility and use. Social science & medicine 70, 816-822.
* [PERSON] and [PERSON] (2002) [PERSON], [PERSON], 2002. Town and Country Planning in the UK. Psychology Press.
* [PERSON] (1995) [PERSON], 1995. An Introduction on Urban Planning in Iran. Science and industry Publication 172.
* [PERSON] and [PERSON] (1999) [PERSON], [PERSON], 1999. Consensus-building in a multi-participant spatial decision support system. URISA journal 11, 17-23.
* [PERSON] et al. (2000) [PERSON], [PERSON], [PERSON], 2000. Comparing alternative methods of measuring geographic access to health services. Health Services and Outcomes Research Methodology 1, 173-184.
* [PERSON] et al. (2009) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], 2009. The dictionary of human geography. John Wiley & Sons.
* [PERSON] (2004) [PERSON], 2004. Spatial accessibility of primary care: concepts, methods and challenges. International journal of health geographics 3, 3.
* [PERSON] et al. (2006) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], 2006. The relationship between access and quality of urban green space with population physical activity. Public health 120, 1127-1132.
* [PERSON] et al. (1997) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], 1997. Spatial group choice: a SDSS tool for collaborative spatial decisionmaking. International Journal of Geographical Information Science 11, 577-602.
* [PERSON] and [PERSON] (2002) [PERSON], [PERSON], 2002. From medical to health geography: novelty, place and theory after a decade of change. Progress in Human Geography 26, 605-625.
* [PERSON] (2004) [PERSON], 2004. Of intractable conflicts and participatory GIS applications: The search for consensus amidst competing claims and institutional demands. Annals of the Association of American Geographers 94, 37-57.
* [PERSON] and [PERSON] (2011) [PERSON], [PERSON], 2011. The health benefits of urban green spaces: a review of the evidence. Journal of Public Health 33, 212-222.
* [PERSON] (1989) [PERSON], 1989. Fuzzy Logic and Knowledge-based GIS: A Prospectus. Geoscience and Remote Sensing Symposium, 1989, IGARSS'89 / 12th Canadian Symposium on Remote Sensing. IEEE, pp. 47-50.

Figure 4: The map of relative accessibility based on administration, sport and hygienic facilities distances
* [PERSON] and [PERSON] (2000) [PERSON], [PERSON], [PERSON], 2000. Providing decisional guidance for multicriteria decision making in groups. Information Systems Research 11, 386-401.
* [PERSON] and [PERSON] (2009a) [PERSON], [PERSON], [PERSON], 2009a. Analyzing accessibility dimension of urban quality of life: Where urban designers face duality between subjective and objective reading of place. Social indicators research 94, 417-435.
* [PERSON] and [PERSON] (2009b) [PERSON], [PERSON], [PERSON], 2009b. Measuring objective accessibility to neighborhood facilities in the city (A case study: Zone 6 in Tehran, Iran). Cities 26, 133-140.
* [PERSON] (2006a) [PERSON], 2006a. GIS-based multicriteria decision analysis: a survey of the literature. International Journal of Geographical Information Science 20, 703-726.
* [PERSON] (2006b) [PERSON], [PERSON], 2006b. Ordered weighted averaging with fuzzy quantifiers: GIS-based multicriteria evaluation for land-use suitability analysis. International Journal of Applied Earth Observation and Geoinformation 8, 270-277.
* [PERSON] (2002) [PERSON], 2002. Quality of life: public planning and private living. Progress in Planning 58, 141-227.
* [PERSON] and [PERSON] (2009) [PERSON], [PERSON] [PERSON], 2009. Measuring spatial accessibility to primary care in rural areas: improving the effectiveness of the two-step floating catchment area method. Applied Geography 29, 533-541.
* [PERSON] (2001) [PERSON], 2001. Measuring the accessibility and equity of public parks: a case study using GIS. Managing Leisure 6, 201-219.
* [PERSON] et al. (2000) [PERSON], [PERSON], [PERSON], 2000. Using desktop GIS for the investigation of accessibility by public transport: an isochrone approach. International Journal of Geographical Information Science 14, 85-104.
* [PERSON] and [PERSON] (2006) [PERSON], [PERSON], 2006. Modeling the concept of majority opinion in group decision making. Information Sciences 176, 390-414.
* [PERSON] et al. (2007) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], 2007. Neighborhood deprivation and access to fast-food retailing: a national study. American journal of preventive medicine 32, 375-382.
* [PERSON] et al. (2008a) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], 2008a. Neighbourhood provision of food and alcohol retailing and social deprivation in urban New Zealand. Urban Policy and Research 26, 213-227.
* [PERSON] et al. (2006) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], 2006. Neighbourhoods and health: a GIS approach to measuring community resource accessibility. Journal of epidemiology and community health 60, 389-395.
* [PERSON] et al. (2008b) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON] [PERSON], 2008b. Regional and urban-rural variations in the association of neighbourhood deprivation with community resource access: a national study. Environment and planning. A 40, 2469.
* [PERSON] and [PERSON] (2000) [PERSON], [PERSON], 2000. Governmentality and rights and responsibilities in urban policy. Environment and Planning A 32, 2187-2204.
* [PERSON] (1988) [PERSON], 1988. Some implications of fuzzy set theory applied to geographic databases. Computers, Environment and Urban Systems 12, 89-97.
* [PERSON] (1990) [PERSON], 1990. Formal specification of geographic data processing requirements. Knowledge and Data Engineering, IEEE Transactions on 2, 370-380.
* [PERSON] and [PERSON] (2007) [PERSON], [PERSON], 2007. Monitoring urban quality of life: The Porto experience. Social Indicators Research 80, 411-425.
* [PERSON] et al. (2010) [PERSON] [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], 2010. Associations between recreational walking and attractiveness, size, and proximity of neighborhood open spaces. American Journal of Public Health 100.
* [PERSON] and [PERSON] (1998) [PERSON], [PERSON], 1998. Assessing spatial equity: an evaluation of measures of accessibility to public playgrounds. Environment and Planning a 30, 595-613.
* [PERSON] (2013) [PERSON], 2013. GIS-based multicriteria decision analysis for land evaluation:case study. LAP LAMBERT Academic.
* [PERSON] et al. (2003) [PERSON], [PERSON], [PERSON] [PERSON], 2003. The quality of urban environments: Mapping variation in access to community resources. Urban Studies 40, 161-177.
* [PERSON] et al. (2008) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], 2008. Neighbourhood access to open spaces and the physical activity of residents: a national study. Preventive medicine 47, 299-303.
* [PERSON] et al. (2011) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], 2011. Neighbourhood Destination Accessibility Index: a GIS tool for measuring infrastructure support for neighbourhood physical activity. Environment and Planning-Part A 43, 205.
* [PERSON] et al. (2006) [PERSON], [PERSON], [PERSON], [PERSON], 2006. Comparing GIS-based methods of measuring spatial accessibility to health services. Journal of Medical Systems 30, 23-32.
isprs | MEASURING LAND USES ACCESSIBILITY BY USING FUZZY MAJORITY GIS-BASED MULTICRITERIA DECISION ANALYSIS. CASE STUDY: MALAYER CITY | A. Taravat, A. Yari, M. Rajaei, R. Mousavian | https://doi.org/10.5194/isprsarchives-xl-2-w3-255-2014 | 2014 | CC-BY
ing orientation and measurement protocols ([PERSON] et al., 2006). However, our experience has shown that dental morphology is better described by methods beyond Euclidean geometric constructions on planes ([PERSON] et al., 2019).
It should be mentioned that imaging methods have a significant impact on the range of techniques which can be applied in odontological research ([PERSON] and [PERSON], 2008; [PERSON] and [PERSON], 2016; [PERSON] et al., 2021b). Methods which allow visualisation of inner as well as outer dental morphology are of particular importance, compared, for instance, with optical scanning techniques. Today, in palaeontology, palaeoanthropology and other disciplines, x-ray micro-focus computed tomography satisfies the majority of study requests ([PERSON], 2004; [PERSON] et al., 2019; [PERSON] et al., 2019). Only more profound studies of enamel micro-structure require the application of synchrotron micro-tomography ([PERSON] et al., 2018), and in imaging of fossilised teeth neutron scanners can be used more effectively ([PERSON] et al., 2020); combined use of different scanning techniques is possible as well ([PERSON] et al., 2019). Multi-layer 3D reconstructions, which can be obtained through the above-mentioned imaging techniques, have boosted advanced research, especially in the past two decades. Morphometric analysis, enamel thickness measurement, digital odontometry and other directions in odontology are among the methods which can provide new data for understanding dental morphology ([PERSON] et al., 2017). New imaging techniques, alongside expanding the scope of studies, make higher accuracy in measurements and landmark setting both more important and more feasible. In this connexion, new approaches to the procedure of tooth orientation, as an integral part of the measurement technique, are proposed in this paper. They are discussed from the standpoint of their applicability in odontological research and their relevance to dental morphology.
Conventional 3D measurement techniques usually rely on orientation based on geometric constructions of planes according to the enamel cervical margin, and it should be noted that there are a number of varieties of the mentioned tactics ([PERSON] et al., 2014). However, the cervical part of the enamel cap is not always a clearly detected structure, due to its frequent chipping on palaeoanthropological material, and in addition it is not studied enough in terms of its stability. For these reasons the search for new methodological approaches to orientation continues in odontological research, and in many respects high-resolution tomographic imaging is one of the main contributing factors in this process. Thus new morphological structures, such as dentine horn tips, have been introduced for plane constructions and subsequent orientation and measurements ([PERSON] et al., 2013).
However, the horn tips are frequently subjected to dental wear immediately after complete loss of enamel on cusp tips. Thus we propose different tactics, which are largely based on our experience in implementing automated measurement (including orientation) algorithms. The proposed approaches use other structures (or, to be more precise, their borders) which are morphologically and functionally relevant and thus important in developing new methods of measurement. Here we put forward primarily the occlusal contours on the enamel and dentine surfaces, which represent curved closed lines ([PERSON] et al., 2021a). Even though these structures have been used for orientation purposes before, the widely used approaches are usually based on visual control and plane construction protocols ([PERSON] et al., 2010). In our work, setting these contours is based on surface curvature analysis and subsequent setting of the orientation axis direction. In addition, the enamel cervical margin has also been used for orientation in our previously conducted studies ([PERSON] et al., 2022). Hence we propose a technique which provides orientation of the studied teeth (their tomographically obtained 3D reconstructions) using more than one morphological structure contour at a time. This means that the results of contour setting on three possible morphological structures (enamel occlusal contour, dentine occlusal contour and enamel cervical margin; Figure 3) can be paired in order to obtain an orientation that is accurate and appropriate in terms of the study objectives.
## 2 Material and Methods
The teeth used for testing the proposed orientation technique were picked from a series of mandible fragments and semi-mandibles found during archaeological excavations in the territories of the Russian Federation and the Republic of Armenia (Figure 1). These findings have been dated to the Early and Middle Bronze Ages, and they are planned to be studied in a wider sample in the near future.
Figure 1: Semi-mandibles with preserved parts of dental arches: Fatyanovo (a), and Shengavit (b).
The findings related to the Fatyanovo culture are from the upper Volga region and show similarities with the Corded Ware culture, which was spread over the territories of Central, Northern and Eastern Europe. The other set of samples is from the settlement of Shengavit, which is now situated within the city limits of Yerevan, the capital of the Republic of Armenia. The archaeology of Shengavit has typical traits of the Kura-Araxes culture of the Early Bronze Age. It is also worth mentioning that recent studies have found common genetic features among the ancient inhabitants of the Great Steppe and Armenia of the aforementioned cultures ([PERSON] et al., 2022). The material in this study comprises digitally segmented teeth: the first molar from a mandible fragment of a representative of the Fatyanovo culture (Figure 1a), and the second molar from a semi-mandible from the settlement of Shengavit (Figure 1b).
The semi-mandibles were placed in a Phoenix v|tome|x m scanner (General Electric) and scanned at 240 kV voltage and 400 µA current, with an exposure of 250 ms. 3D reconstructions of the semi-mandibles were obtained with an isometric voxel side of 0.0978 mm. These models were used to choose the above-mentioned teeth, according to their relatively well-preserved morphology, which provides the necessary conditions for testing the proposed approaches to orientation. To reconstruct 3D models of the teeth, their slices were segmented from the entire semi-mandible stacks, and then separate reconstructions of the enamel and dentine morpho-histological layers were obtained (Figure 2). This gave immediate access to the morphological structures of the teeth and made it possible to define their borders, which were used for orientation.
The models' orientations are performed in the current study according to three contours, which were found in our previously conducted studies by means of automated algorithms. However, these teeth, even though they are among the best preserved in the current series of samples, required some manual corrections of the contours due to cracks and worn surfaces. The three contours are the enamel cervical margin, the occlusal surface contour on enamel and the occlusal surface contour on dentine (Figure 3), and their clearly defined borders serve the further stages of the orientation. Being initially defined on the dentine 3D reconstruction, the contour of the occlusal surface on dentine was transferred to the inner surface of the enamel reconstruction (Figure 2(c)).
We used a pair of contours for each orientation, performed according to the "centres of masses" of their points' coordinates. More specifically, the coordinates of all points on each contour were averaged. The vertical axis could then be set according to two "centres of masses". Two alternative orientations were performed. In both proposed orientations, for each tooth, the contour of the enamel cervical margin remained constant, serving as the starting point for setting the orientation axis in the vertical direction. The second point for accurate axis positioning was set according to one of the two occlusal surface contours: enamel or dentine.
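The two-centroid axis construction can be sketched as follows (the contour coordinates below are synthetic, chosen only for illustration):

```python
def centre_of_masses(points):
    """'Centre of masses' of a contour: mean of its 3D points' coordinates."""
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(3))

def orientation_axis(cervical_contour, occlusal_contour):
    """Unit vector from the cervical-margin centroid to the occlusal
    contour centroid, used as the tooth's vertical orientation axis."""
    c0 = centre_of_masses(cervical_contour)
    c1 = centre_of_masses(occlusal_contour)
    v = [c1[i] - c0[i] for i in range(3)]
    norm = sum(x * x for x in v) ** 0.5
    return tuple(x / norm for x in v)

# Synthetic square contours: cervical margin at z = 0, occlusal contour at z = 8
cervical = [(0, 0, 0), (2, 0, 0), (2, 2, 0), (0, 2, 0)]
occlusal = [(0, 0, 8), (2, 0, 8), (2, 2, 8), (0, 2, 8)]
print(orientation_axis(cervical, occlusal))  # (0.0, 0.0, 1.0)
```

Swapping the second argument between the enamel and dentine occlusal contours gives the two alternative orientations described in the text, while the cervical centroid stays fixed.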
## 3 Results and Discussion
Orientation protocols continue to play an important part in measurement techniques; previously tested approaches to axis orientation on optically scanned teeth have shown that changes in inclination do influence metrics. Today, with the growing number and variety of applied methods, studies increasingly rely on data obtained by measurements. Accuracy thus starts to play one of the most important parts, and many methodological improvements are aimed at achieving higher levels of it. To increase the accuracy of vertical axis orientation of a tooth's 3D reconstruction, we propose the tactic of using two points for setting its direction. Both these points are centres of masses of the points of contours which delineate a pair of dental morphological structures. Below we focus on these structures in order to clarify the morphological relevance of the method.
The occlusal surface and its contour are among the most characteristic and important structures of upper and lower posterior teeth (premolars and molars) across the whole range of their variety. The occlusal surface plays an integrative role, uniting morphological structures within a tooth (e.g., cusps) and mediating functional interaction with opposing teeth. Defining the borders of this structure is based on surface curvature analysis and can be performed in manual or automated modes, depending on the surface features and condition of the studied teeth.
It is important to mention that tomographic scanning techniques give access to the internal structures of teeth. The morphology of the enamel and dentine surfaces shares many common traits, especially since enamel deposition starts from the area of the enamel-dentine junction.
Figure 2: Enamel (a) and dentine (b) reconstructions of lower left first molar from Fatyanovo.
Thus the occlusal surface contour can be delineated on dentine as well, and this structure also serves for setting the "centre of masses" for further orientation. The third circular structure is the cervical enamel edge - the place where the outer and inner surfaces of enamel merge. It can be clearly defined by means of the tomographic scanning technique, as the segmented enamel-cap 3D reconstruction contains that border, which has critical values of surface curvature. It should also be emphasised that no additional constructions (such as planes) are required for correct orientation.
As mentioned before, accurate setting of the vertical axis direction requires a pair of "centre of masses" points; in the proposed methodological improvement these pairs are: a) enamel occlusal surface - cervical enamel edge, and b) dentine occlusal surface - cervical enamel edge (Figure 4). The centre of masses of the cervical enamel edge was thus picked as the "stable" point, common to both approaches. This choice was also driven by accuracy requirements, as points distant from each other give a better result (compared with the pair enamel occlusal surface - dentine occlusal surface, which has not been tested here but might be useful in studies of the enamel cervical margin).
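As a minimal sketch of this centroid-based axis construction, assuming the contours are available as arrays of 3D points (function names and the toy circular contours below are illustrative, not from the original study):

```python
import numpy as np

def contour_centroid(points: np.ndarray) -> np.ndarray:
    """'Centre of masses' of a contour given as an (N, 3) array of points."""
    return points.mean(axis=0)

def vertical_axis(cervical_contour: np.ndarray, occlusal_contour: np.ndarray) -> np.ndarray:
    """Unit vector from the cervical-margin centroid towards the occlusal centroid."""
    axis = contour_centroid(occlusal_contour) - contour_centroid(cervical_contour)
    return axis / np.linalg.norm(axis)

# Toy contours: circular cervical margin at z = 0, occlusal contour at z = 7
t = np.linspace(0, 2 * np.pi, 100, endpoint=False)
cervical = np.stack([5 * np.cos(t), 5 * np.sin(t), np.zeros_like(t)], axis=1)
occlusal = np.stack([4 * np.cos(t), 4 * np.sin(t), np.full_like(t, 7.0)], axis=1)
print(vertical_axis(cervical, occlusal))  # ~ [0. 0. 1.]
```

Swapping the occlusal contour (enamel vs. dentine) while keeping the cervical contour fixed yields the two alternative orientations described above.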
Nevertheless, the two pairs presented above give the proposed orientation technique essential methodological application potential, which is certainly a more significant aspect than inter-point distance. The enamel occlusal surface is the most functionally important part of the tooth crown, as it interacts with the opposing tooth to provide the chewing function. It is therefore the most rapidly changing surface on the tooth, especially compared to the dentine surface, which becomes involved in dental wear later. It is also among the factors influencing the functional loading of the tissues surrounding teeth: the periodontium and bone (maxilla and mandible). Consequently, with the proposed orientation technique we can track functional changes taking place on the enamel surface, taking dentine as a base surface for comparisons, starting from the orientation stage of the measurement technique. Only the task of obtaining a series of images at time intervals remains to be resolved, which requires an imaging technique combining sufficient resolution with a low exposure rate for in vivo studies.
Such applications can also be used to compare teeth found in different conditions, providing for the accumulation of a new type of data, which can shed light on new dental morphological features. We should also mention that choosing other pairs of dental morphological structures would allow studying other surfaces and structures (e.g., the enamel cervical margin).
Figure 4: Two coordinate systems set on a tooth.
Figure 3: Contours of the occlusal surface on enamel (a), the enamel cervical margin (b) and the occlusal surface on dentine (c), shown on the enamel model.
We should mention that the studied material does not constitute a wide sample, and the article presents only the methodological development of the orientation technique, without measurements conducted. However, similar material comprising more semi-mandibles has been scanned in this series (including such archaeological sites as Tli, Kokma and others). We are planning to segment 3D reconstructions of teeth from these samples for more detailed studies in the near future.
## 4 Conclusions
Combining the application of imaging techniques with 3D surface analysis allows improving the methodological aspects of metric studies of teeth. The current article presents orientation tactics which are morphologically relevant and accurate. The proposed method is suitable for measuring teeth and studying their morphological and functional changes.
---

*Source: A. V. Gaboutchian, V. A. Knyaz, H. Y. Simonyan, S. V. Vasilyev, D. V. Korost, N. V. Stepanov, G. R. Petrosyan, A. V. Emelyanov, "Approaches to Orientation of High Resolution Tomographic 3D Reconstruction of Teeth in Metric Studies", 2023, https://doi.org/10.5194/isprs-archives-xlviii-2-w3-2023-59-2023, CC-BY (isprs).*

---
# Improved Indoor Positioning using BLE Differential Distance Correction and Pedestrian Dead Reckoning
[PERSON]
1 Department of Geomatics, National Cheng Kung University, No. 1, University Road, Tainan, Taiwan - (sugar.14387, yuki31210, cacalult690, kwchiang72)@gmail.com
[PERSON]
1 Department of Geomatics, National Cheng Kung University, No. 1, University Road, Tainan, Taiwan - (sugar.14387, yuki31210, cacalult690, kwchiang72)@gmail.com
[PERSON]
1 Department of Geomatics, National Cheng Kung University, No. 1, University Road, Tainan, Taiwan - (sugar.14387, yuki31210, cacalult690, kwchiang72)@gmail.com
[PERSON]
Corresponding author1 Department of Geomatics, National Cheng Kung University, No. 1, University Road, Tainan, Taiwan - (sugar.14387, yuki31210, cacalult690, kwchiang72)@gmail.com
###### Abstract
Indoor positioning has recently become a popular issue because of its location-aware applications. Since satellite signals are blocked in indoor environments, one alternative is Bluetooth Low Energy (BLE) technology. A BLE device broadcasts a Received Signal Strength Indicator (RSSI) for distance estimation and further positioning. However, in complex indoor environments, reflection, fading, and multipath effects make the RSSI variable and may degrade its quality. In this study, a concept called Differential Distance Correction (DDC), similar to the Differential Global Navigation Satellite System (DGNSS), is applied. This method can eliminate some residuals and further improve the results with the corrected distances. On the other hand, Pedestrian Dead Reckoning (PDR) is another common indoor positioning method. PDR propagates the next position from the current position using inertial sensors. However, the error of inertial sensors accumulates with time and walking distance, so position updates are required to restrain the drift. Accordingly, the two indoor positioning methods have complementary strengths and weaknesses: BLE-based positioning is absolute, while PDR is relative. This study proposes a concept that combines the two methods. The pedestrian receives the RSSI and records the information from the inertial sensors simultaneously. Through the complementarity of the two methods, the positioning results improve by 29% to 66% depending on the travelled distance.
## 1 Introduction
With new technological advances, people depend more and more on positioning systems. Nowadays, the Global Navigation Satellite System (GNSS) is already a part of daily life, but its signals are blocked in indoor environments. Indoor positioning enables various applications, i.e. product tracking, indoor navigation, or smart cities. Although people spend about 90% of their time indoors ([PERSON], 2018), indoor applications still have difficulty achieving the same level of positioning accuracy, continuity and reliability as outdoors ([PERSON] et al., 2017). Hence, indoor positioning has gained popularity in recent years, and many researchers are devoted to developing various indoor positioning approaches.
One kind of technique in indoor positioning uses wireless signals to replace GNSS signals, such as Wi-Fi, Infrared (IR), Radio Frequency Identification (RFID), and Ultra-wideband (UWB). Among them, Bluetooth Low Energy (BLE), version 4.0 of Bluetooth, is the most feasible choice on account of cost, power consumption, deployment, transmission distance, etc. The BLE device, a beacon, is a transmitter that broadcasts the Received Signal Strength Indicator (RSSI). The RSSI can be converted to a distance, and the distance is the main input to the positioning approach; that is, the RSSI is vital to the positioning result. With RSSI, three kinds of approaches can be adopted: trilateration, fingerprinting, and proximity detection ([PERSON] et al., 2015). The main challenge in BLE-based indoor positioning is to reduce the effects of environmental changes, including reflection, fading, and multipath ([PERSON] et al., 2016). Hence, the concept called differential distance correction (DDC) is introduced to eliminate these effects. Fingerprints must be trained beforehand, which makes fingerprinting unsuitable for applying DDC. As for proximity detection, the algorithm selects the location of the beacon broadcasting the highest RSSI, which is also not applicable for DDC. Consequently, the DDC strategy is based on the trilateration method, which is similar to the Differential Global Navigation Satellite System (DGNSS).
Besides wireless signals, another technique for positioning in indoor environments is Pedestrian Dead Reckoning (PDR). PDR is an approach based on the low-cost sensors embedded in a smartphone. It is independent of environmental factors, and there is no need to deploy transmitters in the field. The error accumulating with time and walking distance is the critical defect of PDR.
The two indoor positioning methods have their strengths and weaknesses. BLE-based positioning is absolute positioning, which is time-independent and keeps its error bounded in the long term; however, signal variation is a significant defect. PDR is relative positioning, whose error accumulates over time, but it is self-contained because no external signal is involved. Hence, this paper proposes a method that combines the two indoor positioning approaches, using the BLE results to update the position calculated by PDR.
## 2 Methodology
The overall process of this experiment is shown in Figure 1. The starting point is known in advance. In the beginning, PDR is used to locate the person's position. When the pedestrian stops for a while, the method changes, and the BLE result with DDC replaces the location calculated by PDR at that time. When the pedestrian continues, the updated coordinate serves as a new initial position, reducing the time-accumulated PDR error. The following sections explain each step in more detail.
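The switching logic described above can be sketched schematically (the event format and function name below are invented for illustration; the real system detects stops from accelerometer data and computes the BLE/DDC fix as described later):

```python
def fuse_positions(events, start=(0.0, 0.0)):
    """Sketch of the PDR + BLE/DDC combination.

    `events` is a time-ordered list of tuples:
      ("step", d_north, d_east)  - one PDR step increment
      ("ble_fix", north, east)   - BLE/DDC position obtained during a stop
    A BLE fix replaces the current PDR position, resetting accumulated drift.
    """
    north, east = start          # starting point is known in advance
    track = []
    for kind, a, b in events:
        if kind == "step":
            north, east = north + a, east + b     # PDR propagation
        elif kind == "ble_fix":
            north, east = a, b                    # position update from BLE with DDC
        track.append((north, east))
    return track

print(fuse_positions([("step", 1.0, 0.0), ("step", 1.0, 0.0),
                      ("ble_fix", 5.0, 2.0), ("step", 0.0, 1.0)]))
# -> [(1.0, 0.0), (2.0, 0.0), (5.0, 2.0), (5.0, 3.0)]
```

The key design point is that the BLE fix overwrites, rather than averages with, the PDR state, so subsequent steps start from a drift-free coordinate.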
### Trilateration
Trilateration is a classical positioning technique that uses estimated distances to determine the location of a target. Each distance between a transmitter and the receiver acts as the radius of a circle, and the intersection of the circles is the location of the receiver, as represented in Figure 2.
Since the estimated distances are not exactly correct, the intersection is an area instead of a point. As a result, the least squares method is adopted to determine the optimal solution, as described in the following equation ([PERSON], 2007):
\[\delta\overline{D}=D_{i}-\bar{D}_{i}=H\delta\tilde{r} \tag{1}\]
The parameter above can be represented by Equation (2) to (5):
\[D_{i}=\sqrt{(x-x_{i})^{2}+(y-y_{i})^{2}} \tag{2}\]
\[\bar{D}_{i}=\sqrt{(\bar{x}-x_{i})^{2}+(\bar{y}-y_{i})^{2}} \tag{3}\]
\[H=\begin{pmatrix}\frac{\bar{x}-x_{1}}{\bar{D}_{1}}&\frac{\bar{y}-y_{1}}{\bar{D}_{1}}\\ \frac{\bar{x}-x_{2}}{\bar{D}_{2}}&\frac{\bar{y}-y_{2}}{\bar{D}_{2}}\\ \vdots&\vdots\\ \frac{\bar{x}-x_{n}}{\bar{D}_{n}}&\frac{\bar{y}-y_{n}}{\bar{D}_{n}}\end{pmatrix} \tag{4}\]
\[\delta\tilde{r}=\begin{bmatrix}x-\bar{x}\\ y-\bar{y}\end{bmatrix} \tag{5}\]
where \(D_{i}\) = real distance between the \(i\)th beacon and the target; \(\bar{D}_{i}\) = estimated distance between the \(i\)th beacon and the target; \(\delta\overline{D}\) = error of the distance; \(H\) = design matrix; \(\delta\tilde{r}\) = error between the real and estimated coordinates of the target; \(x_{i},y_{i}\) = coordinates of the \(i\)th beacon; \(x,y\) = real coordinates of the target; \(\bar{x},\bar{y}\) = estimated coordinates of the target.
Next, the least squares method shown in Equation (6) is utilized to minimize the sum of the squares of the errors.
\[\delta\tilde{r}=(H^{\text{T}}H)^{-1}H^{\text{T}}\;\delta\overline{D}=\begin{bmatrix} x-\bar{x}\\ y-\bar{y}\end{bmatrix}=\begin{bmatrix}\delta x\\ \delta y\end{bmatrix} \tag{6}\]
Finally, Equation (7) shows that the optimal solution can be obtained by iteration.

\[\begin{bmatrix}\bar{x}\\ \bar{y}\end{bmatrix}_{k+1}=\begin{bmatrix}\bar{x}\\ \bar{y}\end{bmatrix}_{k}+\begin{bmatrix}\delta x\\ \delta y\end{bmatrix}_{k} \tag{7}\]
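Equations (1)-(7) amount to an iterative (Gauss-Newton style) least-squares solve. A minimal sketch in Python, assuming NumPy and synthetic beacon data (names, the initial guess, and the iteration limit are illustrative choices, not from the paper):

```python
import numpy as np

def trilaterate(beacons: np.ndarray, distances: np.ndarray,
                guess=(0.0, 0.0), iters=20):
    """Iterative least-squares trilateration following Equations (1)-(7).

    beacons   : (n, 2) beacon coordinates (x_i, y_i)
    distances : (n,) measured distances D_i
    """
    p = np.asarray(guess, dtype=float)
    for _ in range(iters):
        diff = p - beacons                      # rows: (xbar - x_i, ybar - y_i)
        d_est = np.linalg.norm(diff, axis=1)    # estimated distances D_bar_i
        H = diff / d_est[:, None]               # design matrix, Eq. (4)
        delta_d = distances - d_est             # distance errors, Eq. (1)
        delta_r, *_ = np.linalg.lstsq(H, delta_d, rcond=None)  # Eq. (6)
        p = p + delta_r                         # iterative update, Eq. (7)
        if np.linalg.norm(delta_r) < 1e-9:      # converged
            break
    return p

beacons = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
true_p = np.array([3.0, 4.0])
dists = np.linalg.norm(beacons - true_p, axis=1)
print(trilaterate(beacons, dists, guess=(1.0, 1.0)))  # ~ [3. 4.]
```

With noisy distances the same loop returns the least-squares position rather than the exact intersection.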
### Differential Distance Correction
The distance converted from RSSI is an estimated value owing to environmental influences. Consequently, a method called differential distance correction ([PERSON] et al., 2017), similar to DGNSS, is proposed. Its basic assumption is that two stations not far from each other are affected by the same environmental effects. Given a known station (a reference station in the field), there are two distances between a beacon and the reference station: the real distance calculated by the Euclidean distance formula, and the estimated distance converted from the RSSI. Because of this assumption, the difference between the two distances can be regarded as the residual at a nearby unknown station, as Figure 3 shows.
In order to obtain the residual for every location in the field, Inverse Distance Weighting (IDW) is adopted. The residual at an unknown location can be derived as follows:
\[r_{\text{est}}=\sum_{j=1}^{N}w_{j}\times r_{j} \tag{8}\]

where \(r_{\text{est}}\) = estimated residual from IDW; \(r_{j}\) = estimated residual of the \(j^{\text{th}}\) reference station; \(w_{j}\) = weight of the \(j^{\text{th}}\) reference station.
Figure 1: The flow chart of experiment
Figure 3: The diagram of differential distance correction
Figure 2: The diagram of trilateration

\(w_{j}\) can be defined as Equation (9):
\[w_{j}=\frac{d_{j}^{-u}}{\sum_{j=1}^{N}d_{j}^{-u}} \tag{9}\]

where \(u\) = exponent parameter; \(d_{j}\) = distance between the \(j^{\text{th}}\) reference station and the unknown location.
\(d_{j}\) is derived from Equation (10):
\[d_{j}=\sqrt{\left(X-x_{j}\right)^{2}+\left(Y-y_{j}\right)^{2}} \tag{10}\]
where \(X,Y\) = coordinates of the unknown location; \(x_{j},y_{j}\) = coordinates of the \(j^{\text{th}}\) reference station.
For each beacon, there is one residual map per epoch. In the experiment, 3 minutes of data are received and divided into 30 segments; that is, there are 30 residual maps for each beacon. Figure 4 is an example of a residual map with four reference stations (the grid size is 0.5 m x 0.5 m). The initial position calculated by trilateration using the original distances is used to select the corresponding residual in the grid. With the residual, the corrected distance is obtained, and trilateration is applied again to solve for the new positioning result.
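The IDW interpolation of Equations (8)-(10) can be sketched as follows; the four reference stations and residual values below are hypothetical placeholders:

```python
import numpy as np

def idw_residual(point, ref_positions, ref_residuals, u=2.0):
    """Estimate the residual at `point` from reference-station residuals (Eqs. 8-10)."""
    d = np.linalg.norm(ref_positions - point, axis=1)   # distances, Eq. (10)
    if np.any(d == 0):                                  # exactly on a reference station
        return float(ref_residuals[d == 0][0])
    w = d ** -u / np.sum(d ** -u)                       # weights, Eq. (9), sum to 1
    return float(np.sum(w * ref_residuals))             # weighted residual, Eq. (8)

# Four hypothetical reference stations at the corners of a 10 m x 10 m field
refs = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
residuals = np.array([2.0, 4.0, 2.0, 4.0])
print(idw_residual(np.array([5.0, 5.0]), refs, residuals))  # 3.0 at the symmetric centre
```

Evaluating this function on a 0.5 m grid for each beacon yields the residual maps described above; subtracting the interpolated residual from the estimated distance gives the corrected distance for the second trilateration pass.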
### Pedestrian Dead Reckoning (PDR)
PDR is an approach based on an initial known position. Through the inertial sensors in a smartphone, such as the accelerometer, magnetic compass, and gyroscope, the next position can be propagated ([PERSON] et al., 2011) with Equation (11). As Figure 5 shows, from the position at epoch k, the azimuth, and the step length, the position at epoch k+1 can be derived.
\[\begin{array}{l}N_{k+1}=N_{k}+SL_{k}\times\cos\varphi_{k}\\ E_{k+1}=E_{k}+SL_{k}\times\sin\varphi_{k}\end{array} \tag{11}\]

where \(N_{k},E_{k}\) = North and East coordinates at epoch k; \(SL_{k}\) = step length at epoch k; \(\varphi_{k}\) = azimuth at epoch k.
In order to determine the next position, step detection, step length estimation, and azimuth recognition are necessary. Firstly, step detection determines whether the pedestrian is walking. According to the variation of the acceleration values, every step can be recognized by a given threshold. As Equation (12) shows, the acceleration in all three dimensions should be considered.
\[a_{total}=\sqrt{a_{x}^{2}+a_{y}^{2}+a_{z}^{2}} \tag{12}\]

where \(a_{x},a_{y},a_{z}\) = acceleration along the x, y, and z axes; \(a_{total}\) = magnitude of the total acceleration.
Secondly, Step lengths can be calculated by an empirical model ([PERSON] et al, 2011), which is described in the following equation:
\[SL=\left(0.7+a\cdot(H-1.75)+b\cdot\frac{(SF-1.79)\cdot H}{1.75}\right)\cdot c \tag{13}\]
where \(\text{SL}=\) step length \(SF=\) step frequency \(H=\) height of the pedestrian \(a,b=\) two known parameters of the model \(c=\) personal factor that can be trained on-line
Lastly, the magnetic compass provides an absolute azimuth, which can be used directly, while the gyroscope outputs a relative angular velocity, from which the angle must be derived by Equation (14). Moreover, the relative angle from the gyroscope is not useful without an initial azimuth provided by the magnetic compass, as given by Equation (15). The final azimuth from the gyroscope is described in Equation (16).
\[Gyro_{k+1}=-\omega_{k+1}\times(t_{k+1}-t_{k})\cdot\frac{180^{\circ}}{\pi} \tag{14}\]
\[Gyro_{A,1}=Gyro_{1}+Mag_{1} \tag{15}\]

\[Gyro_{A,k+1}=Gyro_{k+1}+Gyro_{A,k} \tag{16}\]

where \(\omega_{k}\) = angular velocity from the gyroscope at epoch k; \(t_{k}\) = time at epoch k; \(Gyro_{k}\) = angle derived from the angular velocity at epoch k; \(Mag_{1}\) = azimuth from the magnetic compass at epoch 1; \(Gyro_{A,1}\) = initial azimuth; \(Gyro_{A,k}\) = azimuth at epoch k.
With step length and azimuth, the next position can be determined with Equation (11).
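The PDR building blocks above can be sketched as follows. Note that the parameters a, b, c of the step-length model in Equation (13) are not listed in the paper; the values below are placeholders for illustration only:

```python
import math

def pdr_step(north, east, step_len, azimuth_deg):
    """One PDR propagation step, Equation (11); azimuth in degrees from north."""
    a = math.radians(azimuth_deg)
    return north + step_len * math.cos(a), east + step_len * math.sin(a)

def step_length(step_freq, height, a=0.371, b=0.227, c=1.0):
    """Empirical step-length model of Equation (13).
    NOTE: a, b, c are placeholder values; the paper does not give them."""
    return (0.7 + a * (height - 1.75) + b * (step_freq - 1.79) * height / 1.75) * c

def gyro_azimuth(mag_azimuth_1, rates, times):
    """Integrate gyroscope angular rates into an azimuth track, Equations (14)-(16)."""
    az = mag_azimuth_1                       # Eq. (15): initialise from the compass
    track = [az]
    for k in range(1, len(rates)):
        gyro_k = -rates[k] * (times[k] - times[k - 1]) * 180.0 / math.pi  # Eq. (14)
        az += gyro_k                         # accumulate, Eq. (16)
        track.append(az)
    return track

print(pdr_step(0.0, 0.0, 0.7, 0.0))  # one 0.7 m step due north -> (0.7, 0.0)
```

In practice each detected step supplies one `(step_length, azimuth)` pair that is fed into `pdr_step` to propagate the trajectory.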
Figure 4: The residual map of beacon 2 at the last epoch
Figure 5: The concept of the PDR
## 3 Experiment
The experiment is carried out in the parking garage below the library of National Cheng Kung University (NCKU). The field is an indoor environment 20 m long and 17 m wide; its arrangement is shown in Figure 6. There are nine beacons and four reference stations in the field. The pedestrian starts at the point named T12, walks clockwise, and stops for 3 minutes when returning to T12. Each participant took 4 rounds for each experiment.
## 4 Results and Discussion
### BLE-based Indoor Positioning with DDC
Figure 7 shows the positioning results of participant 1 stopping at T12, at the ends of circles 1, 2, and 3. The blue points are the original results calculated by trilateration, and the green points are the results with the corrected distances. Table 1 summarizes the BLE positioning results. With Differential Distance Correction, the error decreased noticeably.
### Pedestrian Dead Reckoning
Figure 9(a) shows the trajectory using the magnetic compass shifting upward over time; the probable cause is the influence of the magnetic field. After updating the coordinate at T12 with the one calculated by BLE positioning with DDC, the upward shift is eliminated. The new trajectory is shown in Figure 9(b). From Table 3, after the position at the end of circle 1 (namely the beginning of circle 2) is replaced with the corresponding BLE result from the previous section, the coordinate at the end of circle 2 improves by 89.92%. The trajectory using the gyroscope is depicted in Figure 9(c)(d), but its statistics in Table 4 do not demonstrate the expected enhancement of accuracy. The reason why the
Table 1: The error of the BLE positioning results of participant 1

| Positioning results (m) | | RMSE | STD | Error | Improvement (%) |
|---|---|---|---|---|---|
| End of circle 1 | original | 11.39 | 7.95 | 8.17 | 59.98% |
| | DDC | 6.19 | 5.31 | 3.27 | |
| End of circle 2 | original | 9.85 | 7.18 | 6.76 | 53.25% |
| | DDC | 6.44 | 5.66 | 3.16 | |
| End of circle 3 | original | 12.28 | 8.82 | 8.57 | 57.64% |
| | DDC | 7.20 | 6.30 | 3.63 | |
Table 2: The error of the BLE positioning results of participant 2

| Positioning results (m) | | RMSE | STD | Error | Improvement (%) |
|---|---|---|---|---|---|
| End of circle 1 | original | 9.44 | 6.97 | 6.37 | 70.80% |
| | DDC | 5.78 | 5.50 | 1.86 | |
| End of circle 2 | original | 9.50 | 6.91 | 6.53 | 77.18% |
| | DDC | 4.98 | 4.78 | 1.49 | |
Figure 8: The BLE positioning results of participant 2
Figure 6: The illustration of the experiment field
Figure 7: The BLE positioning results of participant 1

accuracy decreases is that the PDR error accumulated with time is still slight and the original results are accurate enough. That is, using the BLE results in the wrong circumstances would worsen the PDR results.
Figure 10 shows the trajectory of participant 2, while Table 5 and Table 6 show the positioning accuracy before and after the update. For the magnetic compass, the improvement is about 48%; for the gyroscope, the longer the travelled distance, the greater the improvement. The improvements for circles 2 and 3 are 29% and 66% respectively, because the PDR error accumulated over circle 3 is larger than that over circle 2. Unlike participant 1, the BLE results in this experiment are excellent, so the PDR results improved greatly.
## 5 Conclusion
For indoor positioning with BLE, the proposed DDC method enhances the accuracy of the BLE-based indoor positioning system using the trilateration technique. As for PDR, since environmental effects disturb the data from the magnetic compass, updating the coordinate with BLE improves the overall accuracy; the gyroscope, by contrast, is relatively stable in the experimental field, so the improvement only becomes noticeable once the error accumulates to a certain amount, namely over longer distances. The new trajectories are more accurate overall. Nevertheless, thresholds for stopping time and travelled distance should be set in the future to decide whether to use BLE to update the coordinate; otherwise the method may worsen the results.
## Acknowledgements
The authors acknowledge the support provided by the Ministry of the Interior.
Table 5: The error of PDR using the magnetic compass of participant 2

| PDR using magnetic compass | | Error (m) | Improvement (%) |
|---|---|---|---|
| 2nd end coordinate | original | 4.87 | 48.18% |
| 3rd end coordinate | original | 5.58 | 47.08% |
| 3rd end coordinate | update with BLE | 2.95 | |
Table 3: The error of PDR using the magnetic compass of participant 1

| PDR using magnetic compass | | Error (m) | Improvement (%) |
|---|---|---|---|
| 2nd end coordinate | original | 8.68 | 89.92% |
| 3rd end coordinate | update with BLE | 11.36 | 38.72% |
| 4th end coordinate | update with BLE | 14.73 | 48.36% |
Figure 10: The trajectory of participant 2
Figure 9: The trajectory of participant 1
Table 4: The error of PDR using the gyroscope of participant 1

| PDR using gyroscope | | Error (m) | Improvement (%) |
|---|---|---|---|
| 2nd end coordinate | original | 3.68 | -13.57% |
| 3rd end coordinate | original | 3.61 | -134.67% |
| 4th end coordinate | original | 10.78 | 12.77% |
| 4th end coordinate | update with BLE | 9.41 | |
## References
* [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], 2017. Indoor location based services: challenges, requirements and usability of current solutions. Computer Science Review 24(1), pp. 1-12.
* [PERSON], [PERSON], [PERSON], 2015. Comparative analysis of the Bluetooth Low Energy indoor positioning systems. The 12th International Conference on Telecommunication in Modern Satellite, Cable and Broadcasting Services, pp. 76-79.
* [PERSON], [PERSON], [PERSON], 2011. A smart phone based PDR solution for indoor navigation. The 24th International Technical Meeting of the Satellite Division of the Institute of Navigation, pp. 1404-1408.
* [PERSON], [PERSON], [PERSON], 2017. Performance analysis of indoor positioning using Differential Distance Correction based on Bluetooth Low Energy. The 10th International Conference on Mobile Mapping Technology, pp. 48-54.
* [PERSON], 2012. Indoor positioning technologies. ETH Zurich.
* [PERSON], 2007. Determination of a position using approximate distances and trilateration. Colorado School of Mines.
* [PERSON], 2018. Modern indoor living can be bad for your health: new YouGov survey for VELUX sheds light on risks of the "Indoor Generation". https://www.prnewswire.com/news-releases/modern-indoor-living-can-be-bad-for-your-health-new-yougov-survey-for-velux-sheds-light-on-risks-of-the-indoor-generation-300648499.html
* [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], 2016. Smartphone-based indoor localization with Bluetooth Low Energy beacons. Sensors 16(5), 596.
isprs | IMPROVED INDOOR POSITIONING USING BLE DIFFERENTIAL DISTANCE CORRECTION AND PEDESTRIAN DEAD RECKONING | Y. T. Tang, Y. T. Kuo, J. K. Liao, K. W. Chiang | https://doi.org/10.5194/isprs-archives-xliii-b1-2020-193-2020 | 2020 | CC-BY
# Implementation and Improvement of Indoor Wearable UWB/INS Integration Positioning Method
Zongbo Liao
Electronic Information School, Wuhan University, Wuhan, China - [EMAIL_ADDRESS]
Zhenqi Zheng
State Key Laboratory of Information Engineering in Surveying, Mapping and Remote Sensing, Wuhan University, Luoyu Road, Wuhan, China - (iyou, zhengzhenqi)@whu.edu.cn
[PERSON]
###### Abstract
Aiming at the problem that Ultra-Wide-Band (UWB) positioning accuracy is reduced in Non-Line-of-Sight (NLOS) environments, a UWB positioning accuracy evaluation mechanism is introduced. This paper theoretically analyzes the geometric distribution of the UWB equipment to evaluate the positioning accuracy of the UWB system, then optimizes that geometric distribution and the UWB positioning algorithm model to improve the positioning accuracy. A UWB positioning accuracy estimation process is proposed. Through theoretical analysis and simulation, a model of the influence of the geometric distribution of base stations on UWB positioning precision is established. The obtained model provides a reference for setting and adjusting the measurement noise in the Kalman filter for UWB/Inertial Navigation System (INS) integrated positioning, which improves the combined positioning accuracy.
UWB, INS, Kalman Filter, Positioning, DOP.
## 1 Introduction
Outdoor positioning technology based on the Global Navigation Satellite System (GNSS) or GNSS/Inertial Navigation System (INS) integration has been successfully applied and commercialized ([PERSON], 2012). In indoor areas, however, GNSS signals can be weak or entirely unavailable, so additional sensors are required to achieve indoor positioning ([PERSON] et al., 2019). The algorithms currently used for indoor positioning mainly include geometric positioning, dead reckoning, database matching, and their integration. Among them, geometric positioning, especially distance-based multilateration, is the most commonly used method.
Most indoor positioning solutions now use wireless technologies such as WIFI, ZigBee, and Ultra-Wide-Band (UWB). Among them, UWB has the characteristics of low power consumption, large bandwidth, high-speed communication, high time resolution, high data rate, and short wavelength ([PERSON] and [PERSON], 2007). Thus, this paper chooses UWB as a part of the positioning solution. A challenge for UWB positioning is that its ranging precision and maximum measuring distance will be significantly reduced due to the influence of Non-Line-of-Sight (NLOS) and multipath. Therefore, this paper adopts the integrated positioning system of UWB/INS to improve the positioning precision. Once initialized, the INS can provide independent navigation without the need to receive external signals or interact with the external environment. This feature ensures navigation continuity and reliability when UWB performance is degraded by NLOS or multipath effects ([PERSON] and [PERSON], 2021).
This paper builds a personnel positioning platform based on UWB/INS integration and implements the corresponding Extended Kalman Filter (EKF) algorithm for wearable application scenarios. On this basis, performance improvements are made for the difficulties encountered in practical applications. Aiming at the challenge of UWB ranging and positioning precision in NLOS environments, a UWB positioning precision evaluation mechanism is introduced. Specifically, this paper theoretically analyzes the number of UWB devices and their geometric distribution to evaluate the positioning precision of the UWB system, and then optimizes both in order to refine the UWB positioning algorithm model and improve the positioning precision.
## 2 UWB and INS Positioning Algorithms and Analysis
The specific contents of this chapter are arranged as follows: Sections 2.1 and 2.2 introduce the algorithms of UWB positioning and INS respectively; Section 2.3 analyzes the UWB positioning accuracy, applies the Dilution of Precision (DOP) to the UWB positioning system, and gives the derivation of DOP; Section 2.4 introduces the Kalman filter algorithm that fuses INS and UWB information.
### INS Mechanization
INS is currently one of the most important autonomous navigation systems. It uses 3D gyros and accelerometers to measure 3D angular velocities and specific forces, respectively. The measurements are used to continuously track the 3D attitude between the device's body frame (i.e., b-frame) and the navigation frame (i.e., n-frame). With the obtained attitude, the specific force vector can be transformed from its projection in the b-frame to that in the n-frame. Then, the gravity vector is added to the specific force to get the device acceleration vector in the n-frame. Finally, the acceleration vector is integrated once and twice to determine the 3D velocity and position changes, respectively ([PERSON], 2014). The specific process is shown in Figure 1.
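The mechanization loop described above can be condensed into a single update step. The sketch below is a simplification, not the paper's implementation: it uses a small-angle DCM attitude update and an ENU-style gravity vector (up positive), and omits Earth-rotation and Coriolis terms.

```python
import numpy as np

def skew(w):
    """Skew-symmetric matrix so that skew(w) @ v == np.cross(w, v)."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def ins_step(C_bn, v_n, p_n, gyro_b, accel_b, dt,
             g_n=np.array([0.0, 0.0, -9.81])):
    """One strapdown mechanization step (small-angle approximation).

    C_bn    : 3x3 DCM from body frame (b) to navigation frame (n)
    v_n, p_n: velocity and position in the n-frame
    gyro_b  : angular rate measured in the b-frame (rad/s)
    accel_b : specific force measured in the b-frame (m/s^2)
    """
    # 1. Attitude update: rotate the DCM by the measured angular increment.
    C_bn = C_bn @ (np.eye(3) + skew(gyro_b * dt))
    # 2. Transform the specific force into the n-frame and add gravity.
    a_n = C_bn @ accel_b + g_n
    # 3. Integrate once for velocity, twice for position.
    v_new = v_n + a_n * dt
    p_new = p_n + v_n * dt + 0.5 * a_n * dt ** 2
    return C_bn, v_new, p_new
```

For a stationary device the measured specific force cancels gravity, so velocity and position stay constant, which is a quick sanity check of the sign conventions assumed here.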
### UWB Positioning Algorithms
The basic principle of the UWB positioning system is similar to that of the GNSS positioning system. The UWB base station and mobile tag have a similar role to the satellite and receiver in the GNSS positioning system, respectively. The principle of geometric positioning can be used to complete indoor positioning. Therefore, the mathematical model for UWB positioning is
\[\tilde{\rho}_{i}=\sqrt{\left(x_{i}-x_{u}\right)^{2}+\left(y_{i}-y_{u}\right)^{2}+\left(z_{i}-z_{u}\right)^{2}}+b_{\rho}+\epsilon_{\rho,i} \tag{1}\]
where \(\left(x_{u},\,y_{u},\,z_{u}\right)\) is the node coordinate to be estimated; \(\left(x_{i},\,y_{i},\,z_{i}\right)\) is the coordinate of the i-th base station; \(\tilde{\rho}_{i}\) is the measured distance from the node to the i-th base station; \(b_{\rho}\) is the bias in the distance measurement; \(\epsilon_{\rho,i}\) is the random error in the distance measurement.
After perturbing and linearizing Equation (1), the ranging error model is
\[\Delta\rho_{i}=-\frac{x_{i}-\tilde{x}_{u}}{\tilde{r}_{i}}\Delta x-\frac{y_{i}-\tilde{y}_{u}}{\tilde{r}_{i}}\Delta y-\frac{z_{i}-\tilde{z}_{u}}{\tilde{r}_{i}}\Delta z+\Delta b_{\rho}+\epsilon_{\rho,i} \tag{2}\]
where
\[\begin{split}\Delta\rho_{i}&=\tilde{\rho}_{i}-\hat{\rho}_{i}\\ \Delta x&=x_{u}-\tilde{x}_{u}\\ \Delta y&=y_{u}-\tilde{y}_{u}\\ \Delta z&=z_{u}-\tilde{z}_{u}\\ \Delta b_{\rho}&=b_{\rho}-\tilde{b}_{\rho}\\ \tilde{r}_{i}&=\sqrt{\left(\tilde{x}_{u}-x_{i}\right)^{2}+\left(\tilde{y}_{u}-y_{i}\right)^{2}+\left(\tilde{z}_{u}-z_{i}\right)^{2}}\end{split}\]
and \(\tilde{\cdot}\) denotes an approximate value.
After measuring the distance from the tag node to multiple base stations, the ranging error model can be constructed in the form of a matrix.
\[\mathbf{z}\!=\!\mathbf{H}\mathbf{x}+\mathbf{v} \tag{3}\]
where
\[\mathbf{z}=\begin{bmatrix}\Delta\rho_{1}\\ \Delta\rho_{2}\\ \vdots\\ \Delta\rho_{n}\end{bmatrix},\quad \mathbf{H}=\begin{bmatrix}-\frac{x_{1}-\tilde{x}_{u}}{\tilde{r}_{1}}&-\frac{y_{1}-\tilde{y}_{u}}{\tilde{r}_{1}}&-\frac{z_{1}-\tilde{z}_{u}}{\tilde{r}_{1}}&1\\ -\frac{x_{2}-\tilde{x}_{u}}{\tilde{r}_{2}}&-\frac{y_{2}-\tilde{y}_{u}}{\tilde{r}_{2}}&-\frac{z_{2}-\tilde{z}_{u}}{\tilde{r}_{2}}&1\\ \vdots&\vdots&\vdots&\vdots\\ -\frac{x_{n}-\tilde{x}_{u}}{\tilde{r}_{n}}&-\frac{y_{n}-\tilde{y}_{u}}{\tilde{r}_{n}}&-\frac{z_{n}-\tilde{z}_{u}}{\tilde{r}_{n}}&1\end{bmatrix},\quad \mathbf{x}=\begin{bmatrix}\Delta x\\ \Delta y\\ \Delta z\\ \Delta b_{\rho}\end{bmatrix},\quad \mathbf{v}=\begin{bmatrix}\epsilon_{\rho,1}\\ \epsilon_{\rho,2}\\ \vdots\\ \epsilon_{\rho,n}\end{bmatrix}\]
The least square method can be used to estimate the error vector \(\mathbf{x}\).
\[\mathbf{x}\!=\!\left(\mathbf{H}^{T}\mathbf{R}^{-1}\mathbf{H}\right)^{-1}\! \mathbf{H}^{T}\mathbf{R}^{-1}\mathbf{z} \tag{4}\]
where
\[\mathbf{R}=\mathrm{E}\left[\left(\mathbf{v}-\mathrm{E}(\mathbf{v})\right)\left(\mathbf{v}-\mathrm{E}(\mathbf{v})\right)^{T}\right]\]
After \(\mathbf{x}\) is estimated, it is fed back into the navigation state vector \(\mathbf{X}\) and then cleared, as shown in Equation (5), and the process is iterated until the least squares converge. One way to judge convergence is to check whether the norm of the coordinate-error part of \(\mathbf{x}\) is less than a preset threshold; when it is, the least squares are judged to have converged and the iteration is finished. If the number of iterations exceeds its own threshold without convergence, the least-squares position solution fails. The navigation state vector \(\mathbf{X}\) holds the position coordinates to be estimated, and its initial value can be set to the positioning result at the previous epoch.
\[\mathbf{X}\!=\!\mathbf{X}\!+\!\mathbf{x},\,\mathbf{x}\!=\!0 \tag{5}\]
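The iteration of Equations (1)-(5) can be sketched as a Gauss-Newton loop. The function name, iteration cap, and convergence threshold below are illustrative assumptions, not values from the paper.

```python
import numpy as np

def uwb_least_squares(anchors, ranges, x0, max_iter=20, tol=1e-6):
    """Iterative least squares for Equation-(1)-style ranging, estimating
    the tag position and a common range bias b_rho.

    anchors : (n, 3) base-station coordinates
    ranges  : (n,) measured distances
    x0      : initial guess [x, y, z, b]
    """
    X = np.array(x0, dtype=float)
    for _ in range(max_iter):
        diff = anchors - X[:3]                # anchor-minus-tag vectors
        r = np.linalg.norm(diff, axis=1)      # predicted geometric distances
        z = ranges - (r + X[3])               # range residuals (delta rho)
        # Rows of H follow Equation (2): [-(x_i - x)/r, ..., 1].
        H = np.hstack([-diff / r[:, None], np.ones((len(r), 1))])
        dx, *_ = np.linalg.lstsq(H, z, rcond=None)  # solve H x = z
        X += dx                               # feed the correction back (Eq. 5)
        if np.linalg.norm(dx[:3]) < tol:      # convergence test from the text
            break
    return X
```

With noise-free synthetic ranges the loop recovers the tag position and the range bias to sub-millimetre agreement in a few iterations.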
### Analysis of UWB Positioning Accuracy
DOP describes the effect of the geometric configuration of the visible satellites on GPS accuracy ([PERSON] et al., 2015). Since the UWB positioning system is similar to the GNSS positioning system, DOP is applied here to the UWB positioning system. The number and geometric distribution of UWB base stations and the resulting DOP value are analyzed theoretically. The DOP value is then used to evaluate the positioning precision of the UWB system, to optimize the number and geometry of its base stations, and to tune the parameter settings of the positioning algorithm for a better indoor positioning solution. As can be seen from Table 1, the smaller the DOP value, the better the geometric distribution.
Figure 1: INS mechanization process.
If the weighting by the measurement noise covariance is omitted, the least-squares solution of Equation (3) reduces to
\[\mathbf{x}=(\mathbf{H}^{T}\mathbf{H})^{-1}\mathbf{H}^{T}\mathbf{z} \tag{6}\]
Then the covariance matrix of \(\mathbf{x}\) is
\[\begin{split}\operatorname{cov}\left(\mathbf{x}\right)&=\mathrm{E}(\mathbf{x}\mathbf{x}^{T})\\ &=(\mathbf{H}^{T}\mathbf{H})^{-1}\mathbf{H}^{T}\,\mathrm{E}(\mathbf{z}\mathbf{z}^{T})\,\mathbf{H}(\mathbf{H}^{T}\mathbf{H})^{-1}\\ &=(\mathbf{H}^{T}\mathbf{H})^{-1}\mathbf{H}^{T}\operatorname{cov}\left(\mathbf{z}\right)\mathbf{H}(\mathbf{H}^{T}\mathbf{H})^{-1}\end{split} \tag{7}\]
\(\operatorname{cov}\left(\mathbf{z}\right)\) represents the ranging accuracy of UWB. Here it is assumed that all ranging errors are independent and have the same variance \(\sigma_{n}^{2}\).
\[\operatorname{cov}\left(\mathbf{x}\right)=\sigma_{n}^{2}(\mathbf{H}^{T}\mathbf{H})^{-1} \tag{8}\]
Let \(\mathbf{Q}_{p}=(\mathbf{H}^{T}\mathbf{H})^{-1}\); then
\[\sigma_{n}^{2}\mathbf{Q}_{p}=\begin{bmatrix}\sigma_{x}^{2}&\operatorname{cov}\left(x,y\right)&\operatorname{cov}\left(x,z\right)&\operatorname{cov}\left(x,b\right)\\ \operatorname{cov}\left(y,x\right)&\sigma_{y}^{2}&\operatorname{cov}\left(y,z\right)&\operatorname{cov}\left(y,b\right)\\ \operatorname{cov}\left(z,x\right)&\operatorname{cov}\left(z,y\right)&\sigma_{z}^{2}&\operatorname{cov}\left(z,b\right)\\ \operatorname{cov}\left(b,x\right)&\operatorname{cov}\left(b,y\right)&\operatorname{cov}\left(b,z\right)&\sigma_{b}^{2}\end{bmatrix} \tag{9}\]
Writing \(G_{xx}\), \(G_{yy}\), \(G_{zz}\), \(G_{bb}\) for the diagonal elements of \(\mathbf{Q}_{p}\), we can write
\[\begin{bmatrix}\sigma_{x}\\ \sigma_{y}\\ \sigma_{z}\\ \sigma_{b}\end{bmatrix}=\sigma_{n}\begin{bmatrix}\sqrt{G_{xx}}\\ \sqrt{G_{yy}}\\ \sqrt{G_{zz}}\\ \sqrt{G_{bb}}\end{bmatrix} \tag{10}\]
Then, the DOP values in the east, north, and elevation directions can be obtained as
\[DOP_{E}=\sqrt{G_{xx}},\quad DOP_{N}=\sqrt{G_{yy}},\quad DOP_{V}=\sqrt{G_{zz}} \tag{11}\]
Furthermore, the DOP values for the horizon, vertical, and 3D directions are calculated as
\[DOP_{H}=\sqrt{G_{xx}+G_{yy}},\quad DOP_{V}=\sqrt{G_{zz}},\quad DOP_{P}=\sqrt{G_{xx}+G_{yy}+G_{zz}} \tag{12}\]
The positioning precision of UWB is affected by the combined effect of the measurement error and the geometric distribution of the UWB base stations. Measurement errors and deviations can be expressed as the User Equivalent Range Error (UERE). If the measurement errors of all UWB base stations are identical and independent, UERE can be defined as the root sum square of the various errors and deviations. Multiplying UERE by the \(DOP_{P}\) value gives the expected precision of UWB positioning, as shown in Equation (13) ([PERSON], 1999)
\[\begin{split} UWB\ Position\ accuracy=UERE\ \times\ DOP_{P}\end{split} \tag{13}\]
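The DOP computation of Equations (6)-(12) reduces to building the geometry matrix at an approximate tag position and reading the diagonal of \(\mathbf{Q}_{p}\). A minimal sketch follows; the anchor coordinates and the UERE value in the usage lines are assumptions for illustration.

```python
import numpy as np

def dop_values(anchors, tag):
    """DOP of a UWB constellation per Equations (6)-(12): build H at the
    (approximate) tag position, then read the diagonal of Q_p = (H^T H)^-1.
    """
    diff = anchors - tag
    r = np.linalg.norm(diff, axis=1)
    H = np.hstack([-diff / r[:, None], np.ones((len(r), 1))])
    Q = np.linalg.inv(H.T @ H)
    Gxx, Gyy, Gzz = Q[0, 0], Q[1, 1], Q[2, 2]
    return {"HDOP": np.sqrt(Gxx + Gyy),
            "VDOP": np.sqrt(Gzz),
            "PDOP": np.sqrt(Gxx + Gyy + Gzz)}

# Usage with an assumed 25 m x 20 m scene and anchors at mixed heights.
anchors = np.array([[0., 0., 0.], [25., 0., 3.], [0., 20., 3.], [25., 20., 0.]])
dops = dop_values(anchors, np.array([12., 10., 1.5]))
expected_accuracy = 0.1 * dops["PDOP"]   # Equation (13) with UERE = 0.1 m
```

By construction, \(DOP_{P}^{2}=DOP_{H}^{2}+DOP_{V}^{2}\), which gives a quick consistency check of the returned values.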
### INS and UWB Information Fusion
In this paper, a Kalman filter is used to fuse the information output by the UWB and the INS. The state equation and measurement equation are
\[\begin{cases}\delta\mathbf{x}_{k+1}=\mathbf{\Phi}_{k+1,k}\,\delta\mathbf{x}_{k}+\mathbf{w}_{k}\\ \mathbf{z}_{k+1}=\mathbf{H}_{k+1}\,\delta\mathbf{x}_{k+1}+\mathbf{v}_{k+1}\end{cases} \tag{14}\]
where ([PERSON], 1999)
\[\mathbf{\Phi}_{k+1,k}=\begin{bmatrix}\mathbf{I}_{3}&T_{s}\mathbf{I}_{3}&\mathbf{0}_{3\times 3}&\mathbf{0}_{3\times 3}&\mathbf{0}_{3\times 3}\\ \mathbf{0}_{3\times 3}&\mathbf{I}_{3}&T_{s}\left[\mathbf{C}_{b}^{n}\mathbf{f}^{b}\times\right]&T_{s}\mathbf{C}_{b}^{n}&\mathbf{0}_{3\times 3}\\ \mathbf{0}_{3\times 3}&\mathbf{0}_{3\times 3}&\mathbf{I}_{3}&\mathbf{0}_{3\times 3}&-T_{s}\mathbf{C}_{b}^{n}\\ \mathbf{0}_{3\times 3}&\mathbf{0}_{3\times 3}&\mathbf{0}_{3\times 3}&\mathbf{I}_{3}&\mathbf{0}_{3\times 3}\\ \mathbf{0}_{3\times 3}&\mathbf{0}_{3\times 3}&\mathbf{0}_{3\times 3}&\mathbf{0}_{3\times 3}&\mathbf{I}_{3}\end{bmatrix}\]
\[\delta\mathbf{x}_{k}=\begin{bmatrix}\delta\mathbf{p}_{k}&\delta\mathbf{v}_{k}&\boldsymbol{\epsilon}_{k}&\mathbf{a}_{b}&\boldsymbol{\omega}_{b}\end{bmatrix}^{T},\quad \mathbf{H}_{k+1}=\begin{bmatrix}\mathbf{I}_{3}&\mathbf{0}_{3\times 12}\end{bmatrix}\]
where \(\delta\mathbf{p}_{k}\), \(\delta\mathbf{v}_{k}\), \(\boldsymbol{\epsilon}_{k}\), \(\mathbf{a}_{b}\), \(\boldsymbol{\omega}_{b}\) are the errors of position, velocity, attitude, accelerometer bias, and gyro bias, respectively; \(\mathbf{f}^{b}\) is the specific force measured by the IMU; \(T_{s}\) is the time interval of the IMU output; \(\mathbf{w}_{k}\) is the system noise; \(\mathbf{v}_{k+1}\) is the measurement noise of the UWB positioning. The prediction step of the Kalman filter is
\[\begin{cases}\delta\hat{\mathbf{x}}_{k+1}^{-}=\mathbf{\Phi}_{k+1,k}\,\delta\hat{\mathbf{x}}_{k}\\ \mathbf{P}_{k+1}^{-}=\mathbf{\Phi}_{k+1,k}\mathbf{P}_{k}\mathbf{\Phi}_{k+1,k}^{T}+\mathbf{Q}_{k}\end{cases} \tag{15}\]
and the measurement update is
\[\begin{cases}\mathbf{K}_{k+1}=\mathbf{P}_{k+1}^{-}\mathbf{H}_{k+1}^{T}\left[\mathbf{H}_{k+1}\mathbf{P}_{k+1}^{-}\mathbf{H}_{k+1}^{T}+\mathbf{R}_{k+1}\right]^{-1}\\ \delta\hat{\mathbf{x}}_{k+1}=\delta\hat{\mathbf{x}}_{k+1}^{-}+\mathbf{K}_{k+1}\left(\mathbf{z}_{k+1}-\mathbf{H}_{k+1}\,\delta\hat{\mathbf{x}}_{k+1}^{-}\right)\\ \mathbf{P}_{k+1}=\left(\mathbf{I}-\mathbf{K}_{k+1}\mathbf{H}_{k+1}\right)\mathbf{P}_{k+1}^{-}\end{cases} \tag{16}\]
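Equations (15) and (16) are the standard Kalman predict and update steps, which can be sketched generically as below. The matrices are generic here; in the paper's filter the state is the 15-dimensional error vector and \(\mathbf{H}=[\mathbf{I}_{3}\;\mathbf{0}_{3\times 12}]\) selects the position error.

```python
import numpy as np

def kf_predict(x, P, Phi, Q):
    """Time update, as in Equation (15)."""
    x = Phi @ x
    P = Phi @ P @ Phi.T + Q
    return x, P

def kf_update(x, P, z, H, R):
    """Measurement update, as in Equation (16): gain, state, covariance."""
    S = H @ P @ H.T + R                       # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)            # Kalman gain
    x = x + K @ (z - H @ x)                   # corrected state
    P = (np.eye(len(x)) - K @ H) @ P          # corrected covariance
    return x, P
```

With a tight measurement noise (small \(\mathbf{R}\)), a position-error observation pulls the corresponding state components almost all the way to the measurement and sharply shrinks their covariance, as expected from the gain formula.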
## 3 Experiment and Result Analysis
### Simulation of DOP value
In the UWB positioning system, the DOP value depends on the geometric distribution of the base stations. In this paper, the exhaustive method is used to obtain the DOP simulation results for four base stations in a 25 m × 20 m scene and to find the base station arrangement with the smallest average DOP value. Figures 2 and 3 show the DOP distributions of two different base station arrangements, 'solution 1' and 'solution 2'. In the arrangement of Figure 2, the four UWB base stations are placed at the four corners of the scene; its DOP values are the smallest, so it has the best geometric distribution.
The UWB positioning is simulated with the base station arrangement of 'solution 1', and Equation (13) is verified. It is assumed here that the measurement errors of all UWB base stations are identical and independent. Four test points and four UERE values are then set to test the relation between DOP and positioning accuracy. For each combination of test point and UERE, 100,000 simulations are run and the RMS value of the error is calculated. The statistical results are shown in Table 2.
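A Monte Carlo check of Equation (13) in this spirit takes only a few lines. The anchor geometry, test point height, and sample count below are assumptions for illustration (the paper runs 100,000 simulations per combination); in the linearized model, the RMS of the 3D position error should match UERE × \(DOP_{P}\).

```python
import numpy as np

rng = np.random.default_rng(0)
anchors = np.array([[0., 0., 0.], [25., 0., 3.], [0., 20., 3.], [25., 20., 0.]])
truth = np.array([12.06, 10.36, 1.5])   # test point from Table 2, assumed z

# Geometry matrix and PDOP at the true position (cf. Equations (6)-(12)).
diff = anchors - truth
r = np.linalg.norm(diff, axis=1)
H = np.hstack([-diff / r[:, None], np.ones((4, 1))])
Q = np.linalg.inv(H.T @ H)
pdop = np.sqrt(Q[0, 0] + Q[1, 1] + Q[2, 2])

uere = 0.1
errs = []
for _ in range(20000):
    z = rng.normal(0.0, uere, size=4)           # i.i.d. ranging errors
    dx = np.linalg.lstsq(H, z, rcond=None)[0]   # linearized LS solution
    errs.append(dx[:3])
rms = np.sqrt(np.mean(np.sum(np.square(errs), axis=1)))
# rms should come out close to uere * pdop, i.e., Equation (13)
```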
It can be seen from Figure 4 that the positioning accuracy of UWB varies with the UERE and DOP values. For a given UERE, the RMS of the positioning error increases with the DOP value; at a given test point, the RMS of the positioning error increases with the UERE. As can be seen from Table 2, for a certain test point, such as test point (12.06, 10.36), its DOP value is 1.025, and when the UERE is 0.1, 0.5, and 1, its positioning error
as shown in Figure 6. Figure 7 shows the comparison between the calculated walking trajectories and the real trajectories under these two different base station arrangement solutions. The reference trajectories are obtained by setting the traveling trajectories on the area in advance and measuring them at the corners of the trajectories. As can be seen from Figure 7, the trajectories obtained by these two UWB base station arrangements are not much different, the positioning results are continuous and close to the real trajectory, and the positioning precision is at the decimetre-level.
This is because the average DOP values of the two base station solutions are 1.129 and 1.556, both between 1 and 2; according to the DOP ratings in Table 1, both solutions are ideal. In addition, the two trajectories were collected in two separate experiments. While traveling, there may be slight deviations in the walking trajectory, and there may also be ranging errors when measuring the corner points, so the actual walking track and the reference track may differ by a certain error. Therefore, the positioning precision of 'solution 1' is not significantly better than that of 'solution 2'.
However, in an indoor environment, the positioning performance may be affected by NLOS and multipath due to various occlusions such as walls, resulting in a larger ranging error and a smaller maximum ranging distance. This paper chooses the positioning method of UWB/INS to reduce the influence of NLOS.
### Experiment of UWB/INS
In this paper, the indoor test is carried out through a combined experiment/simulation. During the experiment, the 'Weartrack' module is used to provide the position reference truth. 'Weartrack' is a small low-power wearable positioning system developed by the MOTION team of LIESMARS and the i2Nav team of the Satellite Navigation and Positioning Technology Research Centre, both at Wuhan University. The system can achieve decimeter-level positioning precision after sufficient landmark point correction and forward-backward smoothing.
The position provided by the 'Weartrack' module is used to calculate the true value of the distance from the tag node to the four base stations. NLOS errors are then added to the true distance values to simulate the actual distance measurements. Finally, the simulated distance measurements are used to perform the positioning solution, and the positioning results are compared with the reference truth provided by 'Weartrack' to evaluate the positioning precision.
To verify the influence of the uncertainty of UWB positioning results on UWB/INS positioning, this paper sets two different
Figure 5: Test environment of the UWB positioning system.
Figure 8: Comparison of reference trajectory, UWB trajectory, and UWB/INS integration trajectory when \(\delta_{0}=\delta_{1}=\delta_{2}=0.02\) (\(\delta_{0},\ \delta_{1},\ \delta_{2}\) are the diagonal elements of the observation noise covariance matrix).
Figure 6: UWB tag installation.
Figure 7: Comparison of the reference trajectory and the trajectory obtained by the two base station arrangements of ‘solution 1’ and ‘solution 2’.
constant values for the observation noise covariance matrix \(\mathbf{R}\), which has components of
\[\mathbf{R}=\left[\begin{array}{ccc}\delta_{0}^{2}&0&0\\ 0&\delta_{1}^{2}&0\\ 0&0&\delta_{2}^{2}\end{array}\right]\]
Here \(\delta_{0}=\delta_{1}=\delta_{2}=0.02\) and \(\delta_{0}=\delta_{1}=\delta_{2}=0.2\) are set. The trajectory figure and error figure of the two observation noise covariance matrices are shown in Figures 8 to 11.
It can be seen from Figures 8 to 11 that after using UWB/INS integration, the localization error is reduced and the trajectory is smoother. The combined positioning results for the two observation noise covariance matrices with different constants are compared in Table 3. Without the influence of NLOS, UWB/INS integration reduces the MEAN and RMS of the errors by only 2-4 cm. By contrast, under the influence of NLOS, the maximum positioning error of UWB-only positioning reaches 1.5 m; with UWB/INS, the maximum error is reduced by nearly 0.4 m, an accuracy improvement of 26.7%, which has a certain inhibitory effect on NLOS.
In practical indoor positioning scenes, the influence of each UWB base station on the tag node differs: obstructions between a base station and the tag increase its ranging error and reduce its reliability. The estimation of the UWB position should therefore weight each base station's contribution to the tag node. In addition, the observation noise covariance matrix should change as the pedestrian moves to different positions. When UWB is affected by NLOS or multipath, the reliability of the UWB positioning result is weakened, and the observation noise covariance matrix should be set larger. This reflects the importance of the observation noise covariance matrix setting for integrated navigation performance.
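The paper only states that the measurement noise should be enlarged when UWB is degraded by NLOS or multipath; one simple, hypothetical way to realize this is innovation gating, where each range axis whose residual exceeds a few standard deviations gets a proportionally inflated variance. The gate value and scaling rule below are illustrative assumptions, not the paper's method.

```python
import numpy as np

def adaptive_R(residuals, base_sigma=0.02, nlos_gate=3.0):
    """Inflate the UWB measurement-noise covariance per axis when the
    innovation suggests NLOS: residuals beyond `nlos_gate` standard
    deviations get a proportionally larger variance, de-weighting that
    measurement in the Kalman update.
    """
    sigma = np.full(len(residuals), base_sigma, dtype=float)
    ratio = np.abs(residuals) / base_sigma
    suspect = ratio > nlos_gate
    sigma[suspect] *= ratio[suspect] / nlos_gate   # grow with the excess
    return np.diag(sigma ** 2)
```

An axis with a small residual keeps the nominal variance, while a large NLOS-like residual is trusted much less in the subsequent update.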
## 4 Conclusions
Although UWB positioning is a highly precise method, its precision is affected by adverse factors such as NLOS. In this paper, DOP is applied to the UWB positioning system, and a calculation process for UWB positioning precision and reliability is proposed. Through theoretical analysis and simulation, a model of the influence of base station geometric distribution on UWB positioning precision in the test scene is established, which provides a reference for arranging UWB base stations in indoor positioning. As shown in Table 3, the combined UWB/INS positioning method reduces the STD, MEAN, RMS, and MAX of the positioning error compared with the UWB-only method, which reduces the influence of NLOS and improves the positioning precision and robustness. The maximum error is reduced by nearly 0.4 m, an accuracy improvement of 26.7%. However, the STD of the position errors is only reduced by 1-2 cm, while MEAN and RMS are both reduced by 2-3 cm. This is because the noise covariance matrix here is set to a
Figure 11: Comparison of the positioning error of UWB and the positioning error of UWB/INS integration when \(\delta_{0}=\delta_{1}=\delta_{2}=0.2\) (\(\delta_{0},\ \delta_{1},\ \delta_{2}\) are the diagonal elements of the observation noise covariance matrix).
\begin{table}
\begin{tabular}{|c|c|c|c|c|} \hline & STD/m & MEAN/m & RMS/m & MAX/m \\ \hline \(\delta_{0,1,2}=0.02\) & 0.2946 & 0.4813 & 0.5643 & 1.1713 \\ \hline \(\delta_{0,1,2}=0.2\) & 0.2892 & 0.4986 & 0.5764 & 1.1331 \\ \hline UWB & 0.3071 & 0.5172 & 0.6015 & 1.5283 \\ \hline \end{tabular}
\end{table}
Table 3: Statistical values of position errors for UWB/INS integration with various measurement noises.
constant value, so the positioning accuracy does not improve much. This also reflects the importance of the setting of the observation noise covariance matrix to the UWB/INS positioning system.
## Acknowledgements
This work was supported in part by the Major Basic Research Project (5140503A0301), the National Natural Science Foundation of China (42174050), the Provincial and Municipal "Double First-Class" Construction Project (600460035), and the Hubei Luojia Laboratory Special Fund.
## References
* [1] [PERSON]. Proximity and the evolution of collaboration networks: evidence from research and development projects within the global navigation satellite system (GNSS) industry[J]. Regional studies, 2012, 46(6): 741-756.
* [2] [PERSON], [PERSON], [PERSON]. An indoor position-estimation algorithm using smartphone IMU sensor data[J]. IEEE Access, 2019, 7: 11165-11177.
* [3] [PERSON], [PERSON]. Short-range wireless communications for next-generation networks: UWB, 60 GHz millimeter-wave WPAN, and ZigBee[J]. IEEE Wireless Communications, 2007, 14(4): 70-78.
* [4] [PERSON] [PERSON], [PERSON] Indoor navigation: State of the art and future trends[J]. Satellite Navigation, 2021, 2(1): 1-23.
* [5] [PERSON], "Principles of GNSS, inertial, and multisensor integrated navigation systems, 2nd edition (Book review)," IEEE Aerospace and Electronic Systems Magazine, vol. 30, no. 2, pp. 26-27, Feb. 2015, doi: 10.1109/MAES.2014.14110.
* [6] [PERSON], [PERSON], [PERSON], et al. Analysis of DOP and its preciseness in GNSS position estimation[C]//2015 International Conference on Electrical Engineering and Information Communication Technology (ICEEICT). IEEE, 2015: 1-6.
* [7] [PERSON], "Dilution of Precision," GPS World, pp. 52-59, 1999.
* [8] [PERSON], [PERSON], [PERSON], [PERSON], [PERSON] and [PERSON], "Investigation of GDOP for Precise user Position Computation with all Satellites in view and Optimum four Satellite Configurations," J. Ind. Geophys. Union, vol. 13, noJ, pp. 139-148, July 2009.
* [9] [PERSON]. Quality control and GPS[M]//GPS for Geodesy. Springer, Berlin, Heidelberg, 1998: 271-318.
* [10] [PERSON], [PERSON]. The global positioning system and inertial navigation[M]. New York: McGraw-Hill, 1999.
* [11] [PERSON], [PERSON], [PERSON], et al. Study on UWB/INS integration techniques[C]//2011 8th Workshop on Positioning, Navigation and Communication. IEEE, 2011: 13-17.
isprs | IMPLEMENTATION AND IMPROVEMENT OF INDOOR WEARABLE UWB/INS INTEGRATION POSITIONING METHOD | Z. Liao, Z. Zheng, Y. Li | https://doi.org/10.5194/isprs-archives-xlvi-3-w1-2022-111-2022 | 2022 | CC-BY
# Land Subsidence Monitoring Using PS-InSAR Technique for L-Band SAR Data
###### Abstract
Differential SAR Interferometry (D-InSAR) is one of the potential techniques to measure land surface motion induced by underground coal mining. However, this technique has many limitations, such as atmospheric inhomogeneities, spatial decorrelation, and temporal decorrelation. Persistent Scatterer Interferometric Synthetic Aperture Radar (PS-InSAR) belongs to a family of time-series InSAR techniques which exploits the properties of stable natural and anthropogenic targets that remain coherent over long time periods. In this study, the PS-InSAR technique has been used to monitor land subsidence over a selected location of the Jharia Coal field, and the results have been correlated with ground levelling measurements. The time-series deformation observed using PS-InSAR helped us to understand the nature of the ground surface deformation due to underground mining activity.
([PERSON]\({}^{\rm a}\), [PERSON]\({}^{\rm e}\), [PERSON]\({}^{\rm b}\), [PERSON]\({}^{\rm a}\)) +
Footnote †: Corresponding author. (email: [EMAIL_ADDRESS])
Differential Interferometric Synthetic Aperture Radar (D-InSAR), Persistent Scatterer Interferometry, Levelling, Anthropogenic targets
## 1 Introduction
In India, the Jharia Coal field covers an area of about 456 sq. km, situated 260 km northwest of Calcutta in the heart of the Damodar valley. Jharia witnessed intensive exploitation from 1894 until the nationalization of coal in 1972, carried out with unscientific mining techniques. As a result, land subsidence has been observed in many places. Different types of subsidence can take place over a coal mining area:
1. Continuous subsidence involves the formation of a gentle depression over a broad area. This type of subsidence is associated with thin, horizontal or gently dipping ore bodies overlain by weak, non-brittle strata, and is mostly seen in the case of longwall mining.
2. Discontinuous subsidence is characterized by large surface displacement over a limited surface area, such as (a) sinkholes, a very abrupt type of subsidence seen mostly in the case of room-and-pillar mining, and (b) pillar collapse, seen in the case of abandoned mines.
All the above-mentioned subsidence phenomena in coal mining result in severe damage to buildings and infrastructure, and in economic loss.
Ground-based techniques like GPS and levelling can measure land subsidence with centimetre- to millimetre-level precision. GPS can give all three components of land subsidence with the highest temporal resolution for point locations. However, point-wise GPS measurements do not provide the spatial density required to identify the occurrence and magnitude of local, small-scale subsidence ([PERSON], 2003). Ground-based conventional techniques are also costly, time consuming, and labour intensive. InSAR has the unique advantage that it can measure land subsidence over large contiguous areas with centimetric accuracy and without field personnel.
However, in almost every interferogram there exist areas that experience decorrelation due to changes in the scattering properties of surface scatterers with time, changes in sensor geometry, and variations in the intervening atmospheric properties.
PS-InSAR is a multi-temporal interferometric technique which addresses temporal and spatial decorrelation as well as atmospheric inhomogeneities ([PERSON] et al., 2014). In the present study, PS-InSAR analysis has been performed on ALOS-1 data for the Jharia Coal field. A specific focus is set on the Jharia area, where subsidence has been identified using the PS-InSAR technique for a small test site; for validation, levelling and conventional InSAR measurements have been used.
## 2 Research Area and Acquired Dataset
### Research Area
The research area is situated in the Jharia Coal field, Dhanbad, Jharkhand, India. This is the only storehouse of prime coking coal in the country, with a history of mining since 1894; population-wise, it is also one of the most densely populated coal fields in the world.
It is one of the oldest and chief coal fields in India. Jharia and its surrounding area have undulating and rolling topography, sloping towards the east-south-east; the elevation ranges from 240 m in the west to 140 m in the south-east ([PERSON] et al., 2012). At nationalization, 398 mines (184 non-coking and 214 coking) were placed under the management of Bharat Coking Coal Limited (BCCL), of which six mines were authorized under the management of the Tata Iron and Steel Company (TISCO) and two under the Indian Iron and Steel Company (IISCO) ([PERSON] et al., 2010). Coal mining is the major occupation in this area, employing about 100,000 people, i.e. 10% of the total population. Of the total coal reserves, only 25% has been extracted or consumed by coal fire since the inception of mining; the remaining 75% is still in place.
### SAR Data Used
SAR sensors are capable of acquiring data in any weather condition and at any time of day thanks to their own source of illumination, but the number of usable scenes is strongly limited by orbital parameters, climatic season, and daily weather phenomena. One of the most important requirements for detecting land subsidence with SAR interferometry is the selection of suitable SAR scenes, so that each selected pair deciphers only the surface change. A stack of 10 ALOS-1 scenes acquired along ascending orbits between January 30, 2007 and December 12, 2010 was used, with perpendicular baselines not exceeding 2100 m. A 90-metre-resolution digital elevation model (DEM) from the Shuttle Radar Topography Mission was used as the external DEM to remove the topographic contribution from the differential interferograms.
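As a rough illustration of this pair-selection constraint, the sketch below filters candidate interferometric pairs by perpendicular baseline. Only the 2100 m limit comes from the text; the `Scene` fields and example values are hypothetical.

```python
from dataclasses import dataclass
from itertools import combinations

@dataclass(frozen=True)
class Scene:
    date: int         # acquisition day number since the first scene (illustrative)
    perp_pos: float   # orbital position along the perpendicular-baseline axis (m)

def select_pairs(scenes, max_perp_baseline_m=2100.0):
    """Keep only interferometric pairs whose perpendicular baseline
    (difference of orbital positions) stays below the threshold,
    as done for the ALOS-1 stack in the text."""
    pairs = []
    for a, b in combinations(scenes, 2):
        if abs(a.perp_pos - b.perp_pos) <= max_perp_baseline_m:
            pairs.append((a, b))
    return pairs

# three illustrative scenes; the middle pair exceeds the baseline limit
scenes = [Scene(0, 0.0), Scene(46, 800.0), Scene(92, 2500.0)]
good = select_pairs(scenes)
```

Here the (0, 92-day) pair is rejected (2500 m baseline), while the other two combinations survive.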
## 3 PS Identification: An Adaptive Methodology
As discussed above, the main aim of the work is to identify phase-stable point scatterers from the set of SAR pairs used. The workflow begins with the standard interferometric process:
1. SAR data co-registration,
2. geocoding,
3. flattening and topographic phase subtraction.
In short, PS processing includes the following steps:
1. master image selection,
2. SAR data co-registration,
3. reflectivity map generation,
4. amplitude stability index map generation,
5. persistent scatterer candidate (PSC) selection,
6. PS point selection,
7. multi-image sparse-grid phase unwrapping,
8. atmospheric phase screen (APS) estimation and removal,
9. final PS phase reading, and
10. displacement estimation.
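The amplitude stability index of step 4 is commonly derived from the per-pixel amplitude dispersion of the classical PS method. The sketch below assumes that convention (one minus the ratio of temporal standard deviation to temporal mean); the paper only states that the index, combined with spatial coherence, was thresholded at 0.78, so the exact formula here is an assumption.

```python
import numpy as np

def amplitude_stability_index(amp_stack):
    """amp_stack: (n_images, rows, cols) SAR amplitude values.
    Stability index = 1 - sigma_A / mu_A (one minus the amplitude
    dispersion); values near 1 mark phase-stable PS candidates."""
    mu = amp_stack.mean(axis=0)
    sigma = amp_stack.std(axis=0)
    return 1.0 - sigma / np.maximum(mu, 1e-12)   # guard against zero mean

rng = np.random.default_rng(0)
# one temporally stable pixel and one noisy pixel across 10 acquisitions
stable = 100.0 + rng.normal(0.0, 1.0, size=(10, 1, 1))
noisy = 100.0 + rng.normal(0.0, 60.0, size=(10, 1, 1))
stack = np.concatenate([stable, noisy], axis=2)   # shape (10, 1, 2)

idx = amplitude_stability_index(stack)
psc_mask = idx >= 0.78   # threshold used in the study for PSC selection
```

The stable pixel scores near 1 and passes the 0.78 cut; the noisy pixel scores much lower.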
## 4 Results Analysis and Discussion
To investigate the proposed PS-InSAR technique for identifying subsidence zones due to underground mining activities in one of the Jharia test sites, 10 ALOS scenes from 2007 to 2010 were processed, all acquired on ascending nodes. To respect the limits of the APS model, we selected the eastern part of the Jharia coal field, covering an area of 4.9253 sq. km. In the PS-InSAR processing workflow, nine filtered interferograms were generated. For the selection of appropriate PS candidates, different selection criteria were tested; the amplitude stability index combined with spatial coherence was used and fixed at 0.78 for the study area. The selected PS candidates were processed further in the software to generate the LOS velocity map and the cumulative displacement map for the time period. Finally, all outputs were geocoded for validation against the levelling data.
### PS InSAR integration with levelling
The subsidence map obtained from the PS-InSAR method was correlated with levelling data acquired from the Central Institute of Mining and Fuel Research (CIMFR); for the same area, the levelling values show good agreement with the PS-InSAR results. Levelling data for the Jamadoba 67 colliery give the following values:
| S. No | Seam name | Subsidence (cm) | Year |
|---|---|---|---|
| 1 | XIV / 13 S | 8.1 | 2006-2010 |
| 2 | XI / 12 S | 10 | 2006-2010 |
| 3 | XIV / 5 S | 0.6 | 2006-2010 |
| 4 | XIV / 15 S | 30 | 2006-2010 |
Table 1: Levelling Subsidence values from 2006-2010 for the different seam obtained from Central Institute of Mining and Fuel Research.
Figure 1: Surface plan map derived from TATA colliery
Figure 2: Surface Displacement map obtained using PS-InSAR Technique, Levelling site are marked with pink triangle showing corresponding seam number
### PS InSAR integration with D-InSAR Result
We also compared the D-InSAR results with the PS-InSAR results. For the same test site, the subsidence rate obtained from D-InSAR processing is very close to the PS-InSAR displacement rate: the value obtained for the Jamadoba test site using D-InSAR is 39.04 cm/yr ([PERSON] et al., 2015).
## 5 Conclusion
PS-InSAR is an advanced interferometric tool that provides the surface deformation history; its main limiting factor is data availability. It is a very good technique for understanding and delineating surface deformation dynamics using large data sets. This study shows that even ten ALOS PALSAR scenes give good agreement with the ground-based and D-InSAR results. Increasing the number of data pairs will yield closer and more appropriate results for surface deformation studies.
## Acknowledgements
This work was supported by Dr. [PERSON], Director, Indian Institute of Remote Sensing, Dehradun, and Dr. [PERSON], Director, Central Institute of Mining & Fuel Research, Dhanbad. The authors sincerely thank and acknowledge the help and services provided by Mr. [PERSON] of the Central Institute of Mining & Fuel Research, Dhanbad.
The authors convey sincere thanks to the staff and officials of the mining agencies namely, Bharat Coking Coal Ltd, Tata Steel, and Steel Authority of India Ltd., Dhanbad for their help and cooperation during field survey and ground data collection.
## References
* [PERSON] et al. (2014) [PERSON], [PERSON], [PERSON], [PERSON], 2014. Multi-Temporal Analysis of Land Subsidence in Toluca Valley (Mexico) through a combination of Persistent Scatterer Interferometry (PSI) and Historical Piezo metric Data, Advances in Remote Sensing, 3, 49-60.
* [PERSON] et al. (2012) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], 2012. Settlement Risk Zone Recognition Using High Resolution Satellite Data in Jharia Coal Field, Dhanbad, India, Life Science Journal ; 9(1S), ISSN :1097-8135).
* [PERSON] et al. (2015) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], 2015. Detecting, mapping and monitoring of land subsidence in Jharia Coalfield, Jharkhand, India by spaceborne differential interferometric SAR, GPS and precision levelling techniques. Journal of Earth System Science 124, No. 6, pp. 1359-1376.
* [PERSON] (2003) [PERSON], 2003. Differential SAR Interferometry for Crustal Deformation Study, M.Sc., International Institute for Geo-Information and Earth Observation, the Netherlands.
* [PERSON] et al. (2010) [PERSON], [PERSON], [PERSON], [PERSON],[PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], 2010. Clusterization of mines for Obtaining Comprehensive Environmental Clearance: A Case Study of BCCL Lease Hold Areas, Journal of Indian School of Mines, Special Volume 2010, pp. 13-20.
Figure 3: The fringe obtained from D-InSAR overlaid by PS-InSAR result
isprs | LAND SUBSIDENCE MONITORING USING PS-InSAR TECHNIQUE FOR L-BAND SAR DATA | S. Thapa, R. S. Chatterjee, K. B. Singh, D. Kumar | https://doi.org/10.5194/isprs-archives-xli-b7-995-2016 | 2016 | CC-BY
# Comprehensive evaluation and analysis of China's mainstream online map service websites
[PERSON]
National Geomatics Center of China, Lianhuachixilu 28, Haidian, Beijing, China, 100830
[PERSON]
China University of Mining and Technology, Xueyuanlu Ding 11, Haidian,Beijing, China, 100830
[PERSON] Wei
National Geomatics Center of China, Lianhuachixilu 28, Haidian, Beijing, China, 100830
[PERSON]
National Geomatics Center of China, Lianhuachixilu 28, Haidian, Beijing, China, 100830
[PERSON]
China University of Mining and Technology, Xueyuanlu Ding 11, Haidian,Beijing, China, 100830
**Keywords:** online map service, fuzzy evaluation mathematical model, evaluation
## 1 Introduction
In recent years, with the rapid development of 3S and Internet technologies and the growing popularity of geographic information services for the public, online map services, with their abundant data, friendly interfaces, convenient functions, and strong interactivity, have gradually become an indispensable tool in daily life. They play an increasingly important role in all aspects of economic and social life, promote the development of the geographic information industry, and bring great economic and social benefits.
To promote the healthy development of the online map service industry, encourage and guide the progress of individual websites, and provide effective quality assurance for the service, an evaluation system reflecting each site's characteristics should be established. Online map service websites have both common and individual traits, and the existing evaluation methods (user evaluation and traffic statistics) are too subjective and lack definite standards, so their results may be unreliable [1, 2, 3]. Because the evaluation factors of online map service websites are fuzzy, the fuzzy mathematics method was used to make up for these shortcomings to a certain extent: it starts from a qualitative analysis and yields a quantitative result by studying the roles of the various factors.
This paper aims to solve the difficulty of evaluating websites quantitatively and to provide development suggestions based on the evaluation results.
## 2 The status of China's online map service websites
According to the iResearch Consulting Company's research results (Figure 1), China's online map service market was initially worth less than 1 billion yuan. Growing application demand from all trades and walks of life brought high-speed growth, and the market showed a vigorous development trend, reaching 3 billion yuan in 2008. At present, with the development of China's geographic information industry and the growth of public applications and interactive functions, the coverage of online map services will expand further and the market will develop more steadily; it is expected to reach 14 billion yuan.
Google searches for "online map service" and "internet map" return about 721,000,000 and 164,000,000 results respectively, evidence that online map services are becoming a hot spot of the digital age. Google Map and Baidu Map occupy the larger market shares thanks to their influence in the search area. Based on professional advantages and characteristic applications, other websites such as MapBar, MapABC, 51ditu, and Sogou also hold parts of the market. As soon as Map World, constructed by the National Mapping Bureau, came out, it attracted worldwide attention because of its fresh and authoritative data. The group of portal websites led by Tencent and Alibaba also entered the area, attracting many young users through quick response and excellent interaction based on their technical advantages.
## 3 Fuzzy comprehensive evaluation
In order to select representative websites from among the many online map services, basic selection standards were defined based on the characteristics of online map service websites:
(1) up-to-date and strong map data;
(2) quick response;
(3) friendly interface and interaction;
(4) practical functions or unique function;
(5) the secondary developing function;
(6) wide consumer base.
According to the above standards, the 10 websites in Table 1 were selected as evaluation objects through investigation and testing.
### The establishment of evaluation index system
Each website was analysed systematically to find every important factor, including factors with its own character, while some secondary factors should
| Name | Address |
|---|---|
| Google Map | http://maps.google.com/ |
| Baidu Map | http://map.baidu.com/ |
| MapBar | http://www.mapbar.com/ |
| MapABC | http://www.mapabc.com/ |
| 51ditu | http://www.51ditu.com/ |
| Sogou Map | http://map.sogou.com/ |
| Map World | http://www.tianditu.com/ |
| SoSo Map | http://map.soso.com/ |
| Aliyun | http://ditu.aliyun.com/ |
| Bing Map | http://cn.bing.com/ditu/ |

Table 1: The selected websites
Figure 1: 2006-2012 China online map service market scale
be ignored as appropriate. A three-grade evaluation index system was established for value assignment and computation. The first-grade indexes comprised four aspects; the fuzzy evaluation model with the improved generalized fuzzy operator was used in this paper.
#### 3.2.1 Evaluation indexes
An evaluation factor set is obtained from the evaluation index system, as shown in Equation (1):

\[U=\{U_{1},U_{2},U_{3},U_{4}\} \tag{1}\]

where \(U\) represents the set of first-grade evaluation factors.

The subset for \(U_{i}\) (\(i=1,2,3,4\)) is

\[U_{i}=\{U_{i1},U_{i2},U_{i3},U_{i4},U_{i5}\} \tag{2}\]

where \(U_{i}\) represents the \(i\)-th set of second-grade evaluation factors.
| 1st-grade index (weight) | 2nd-grade indexes (weights) |
|---|---|
| Map data (0.30) | accuracy (0.25), authority (0.20), variety (0.15), integrity (0.10), date (0.30) |
| Function and interaction (0.20) | colour (0.15), interface design (0.15), function richness (0.20), function accessibility (0.20), friendly interaction (0.30) |
| Response (0.30) | load (0.20), tile load (0.40), search (0.40) |
| Influence (0.20) | page view (0.30), average time on site (0.20), citation (0.30), API (0.20) |

Each index is rated on five grades: excellent, good, middle, qualified, unqualified.

Table 2: Comprehensive evaluation index system
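The two-level synthesis described above can be sketched as follows. The first-grade weights come from Table 2, but the weighted-average operator and the toy membership matrix `R` are illustrative stand-ins, since the paper does not publish its survey matrices or the exact form of its improved generalized operator.

```python
import numpy as np

# first-grade weights from Table 2: map data, function/interaction,
# response, influence
W1 = np.array([0.30, 0.20, 0.30, 0.20])

def fuzzy_evaluate(second_grade):
    """second_grade: list of (weights, R) per first-grade factor, where R
    has one row per second-grade index and one column per rating grade
    (excellent .. unqualified). Uses the weighted-average operator
    M(*, +) as a stand-in for the paper's improved generalized operator."""
    B = np.vstack([w @ R for w, R in second_grade])   # (4, n_grades)
    return W1 @ B                                     # overall membership vector

# toy memberships for a single website over the 5 grades
R = np.array([[0.6, 0.3, 0.1, 0.0, 0.0],
              [0.5, 0.4, 0.1, 0.0, 0.0]])
second = [(np.array([0.5, 0.5]), R)] * 4

b = fuzzy_evaluate(second)
max_membership = b.max()    # the "max membership degree" used to rank sites
grade = int(b.argmax())     # 0 == "excellent"
```

The maximum component of `b` is the membership degree reported per website in Table 3, and its index gives the assigned grade.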
According to the evaluation results, two websites, Baidu Map and Google Map, had a degree of membership above 0.90. MapBar, SoSo Map, Sogou Map, and MapABC fell between 0.80 and 0.90, followed by 51ditu, Aliyun, Bing Map, and Map World. The evaluation shows that the online map service market in China has reached a certain scale, and that the domestic websites are not weaker than foreign ones, even leading in some areas. Baidu Map owns the largest user group thanks to its brand influence in China's search market and its own technical advantages; its interaction also best matches users' habits of thought. Google Map has advantages in data, technology, and interface, especially its image data and street view, but its service is unstable at times, losing users because of slow responses. As traditional online map service providers in China, MapBar, MapABC, and Sogou Map integrate tourism, dining, and entertainment, staying close to people's daily lives. Based on its technical advantages in response and interaction, SoSo Map, a rising star of the market, has a bright future. Websites in the third grade are still in the development stage; Map World, for example, combines data authority and diversity, but its search experience performs poorly. Its vitality relies on applications, and with the development of the civil high-definition survey satellite "ZY-3", the frequency of data updates and data quality will be further improved.
## 4 Conclusion and Discussion
The nature of an online map is to digitalize the real world and to supply even richer serialized information. The online map service industry is moving steadily along this route and has made great achievements. For the comprehensive evaluation of online map services, a multi-grade fuzzy comprehensive evaluation method was used; it handles the fuzziness between factors objectively and decreases the subjectivity and blindness of the procedure, ensuring the reliability and accuracy of the evaluation results. The purpose of the evaluation is not to rank all the websites, but to analyse their problems through the results and to point out a clearer direction for the next development step, which provides practical and theoretical value to this competitive industry.
**References**:
[1] [PERSON], 2007. Fuzzy Sets Theory[M], Beijing Normal University, Beijing.
[2] [PERSON]. Fuzzy Mathematical Method and Application [M], Seismological Press, Beijing.
[3] [PERSON], [PERSON], 2000. Comprehensive Evaluation Method[M], Science Press, Beijing.
[4] [PERSON], 2004. The introduction and evaluation of Chinese internet map websites, Hebei Normal University.
[5] iDataCenter. 2008 China online map service industry development report.
[6] [PERSON]. Discussion on OnlineMap Public Service in Internet Era [M], Henan Provincial Cartographic Institute, Zhengzhou.
| Name | Max membership degree | Rank |
|---|---|---|
| Google Map | 0.90 | 2 |
| Baidu Map | 0.94 | 1 |
| MapBar | 0.89 | 3 |
| MapABC | 0.81 | 6 |
| 51ditu | 0.75 | 7 |
| Sogou Map | 0.84 | 5 |
| Map World | 0.63 | 10 |
| SoSo Map | 0.87 | 4 |
| Aliyun | 0.73 | 8 |
| Bing Map | 0.70 | 9 |

Table 3: Evaluation results
isprs | COMPREHENSIVE EVALUATION AND ANALYSIS OF CHINA'S MAINSTREAM ONLINE MAP SERVICE WEBSITES | H. Zhang, J. Jiang, W. Huang, Q. Wang, X. Gu | https://doi.org/10.5194/isprsarchives-xxxix-b4-449-2012 | 2012 | CC-BY
# Seamline optimization for UAV image mosaicking using geometry of triangulated irregular network
[PERSON] and [PERSON]
Corresponding author
###### Abstract
For efficient UAV (Unmanned Aerial Vehicle) image monitoring, it is essential to mosaic multiple UAV images into one seamless image. In a mosaicked image, relief displacement of the terrain is a major source of error. It is difficult to form seamlines that avoid all areas of relief displacement, and a seamline determination method alone is limited in reducing the mismatch error in the mosaicked image. In this study, we constructed a TIN (Triangulated Irregular Network) using tiepoints generated by rigorous bundle adjustment and detected the regions of relief displacement using the slope of the TIN facets. The errors in the mosaicked image were found to lie mostly on TIN facets with high slopes. Our method generated a mosaicked image after eliminating these error-prone regions and showed that the distortions were effectively removed. This study shows that the proposed method can produce mosaicked images of stable quality using the geometric clues of the TIN. We expect that our method can make UAV image mosaicking robust to mismatching factors.
**Keywords:** Image Mosaicking, Triangulated Irregular Network, UAV Block Adjustment, Relief Displacement, Error-prone Region
## 1 Introduction
UAV (Unmanned Aerial Vehicle) image mosaicking combines multiple UAV images into one seamless image. In this process, relief displacement of the terrain is a major source of error. Typical mosaicking techniques assign pixel values to a mosaicked image according to a continuous terrain model such as a DSM (Digital Surface Model) ([PERSON] et al., 2023). As a result, errors due to relief displacement appear as distortions in the mosaicked image, and it is very difficult to prepare a perfect DSM free of such displacement errors.
In contrast, mosaicking techniques that do not utilize DSMs stitch images onto a reference plane using only the EOPs (Exterior Orientation Parameters) of the images. Errors then appear as mismatches along the seamlines between images, so it is important to determine optimal seamlines that minimize relief displacement. Related research has mainly focused on brightness patterns in UAV images, trying algorithms such as optical flow ([PERSON] et al., 2018) and super-pixels ([PERSON] et al., 2020). There have also been studies applying deep learning to pixel values for seamline optimization ([PERSON] et al., 2017; [PERSON] et al., 2020). However, seamlines still passed through relief displacement regions such as building rooftops, and severe mismatches occurred. These studies indicate that it is difficult to form seamlines that avoid all areas of relief displacement, and that seamline determination using only pixel values is limited in reducing the mismatch error in the mosaicked image.
In this study, we constructed a TIN (Triangulated Irregular Network) using tiepoints generated by rigorous bundle adjustment ([PERSON] and [PERSON], 2022). We detected the regions where relief displacement occurred using the slope of the TIN facets and removed them to determine optimal seamlines for image mosaicking.
| Specification | Dataset 1 | Dataset 2 |
|---|---|---|
| UAV name | KD-2 Mapper | SmartOne |
| Manufacturer | Keva Drone | Smartplanes |
| Flight type | fixed wing | fixed wing |
| Positioning sensor | DGPS | DGPS |
| Image size | 7952 × 3264 | 4928 × 3264 |
| Number of images | 60 | 58 |
| Endlap (%) | 70 | 70 |
| Sidelap (%) | 80 | 80 |
| Height of flight (m) | 180 | 150 |
| GSD (m) | 0.0242 | 0.0389 |

Table 1: Descriptions of the dataset information
Figure 1: Appearances of the UAVs used

For our experiments, we used the two datasets described in Table 1. Dataset 1 was acquired with the KD-2 Mapper UAV shown in Figure 1. The UAV is fixed-wing and uses DGPS (Differential Global Positioning System) to obtain position information with typical accuracy. Dataset 1 is dominated by a plain with a few buildings. In this area the UAV flew at a height of 180 meters, the GSD (Ground Sample Distance) of the images was 2.4 centimeters, and 60 images were acquired. Dataset 2 was acquired with a SmartOne UAV, which is likewise fixed-wing and uses DGPS for positioning. Dataset 2 is also dominated by a plain with a few buildings. There the UAV flew at a height of 150 meters, the GSD of the images was 3.9 centimeters, and 58 images were acquired. Using these two datasets, we compared the mosaicking results in the plain and building areas: the results in the plain area show the overall performance of the proposed algorithm, while the results in the building area describe the extent to which the error is reduced.
Figure 2 shows the flowchart of the proposed method. First, tiepoints are generated for bundle adjustment: they are determined between each image pair and extracted with the SURF (Speeded Up Robust Features) algorithm. Then the EOPs of the images are corrected and the model coordinates of each tiepoint are calculated through block adjustment. Next, a TIN is constructed using these tiepoints. The TIN is assigned to the coverage of each image, and the slope of each facet is calculated. By thresholding the slopes, the facets with higher slopes are extracted and selected as the final error-prone regions. Image mosaicking is then performed using the inlier facets. The details are described in the following subsections.
### Rigorous Bundle Adjustment
Block adjustment is a technique that simultaneously corrects the EOPs of all acquired UAV images and the ground coordinates of the tiepoints, as shown in Figure 3; the quality of the tiepoints is therefore important. Before the block adjustment is performed, outliers among the tiepoints are removed using the RANSAC (Random Sample Consensus) algorithm. A coplanarity model was applied in RANSAC, and its operation was repeated until the model accuracy was within 3 pixels. The three-dimensional ground coordinates of the tiepoints were then estimated from the coplanarity model. By re-projecting these ground coordinates onto another image, the distance between the original and projected image coordinates was determined, as shown in Figure 4. The projection from ground coordinates to image coordinates follows the collinearity model of Equation (1). In this study, tiepoints with a reprojection error of more than 3 pixels were removed.
\[\begin{split} x_{n}&=-f\,\frac{r_{11}(X_{n}-T_{X})+r_{12}(Y_{n}-T_{Y})+r_{13}(Z_{n}-T_{Z})}{r_{31}(X_{n}-T_{X})+r_{32}(Y_{n}-T_{Y})+r_{33}(Z_{n}-T_{Z})}\\ y_{n}&=-f\,\frac{r_{21}(X_{n}-T_{X})+r_{22}(Y_{n}-T_{Y})+r_{23}(Z_{n}-T_{Z})}{r_{31}(X_{n}-T_{X})+r_{32}(Y_{n}-T_{Y})+r_{33}(Z_{n}-T_{Z})}\end{split} \tag{1}\]

where \(X_{n}\), \(Y_{n}\), \(Z_{n}\) = \(n\)th object coordinates in the model coordinate system

\(x_{n}\), \(y_{n}\) = \(n\)th object coordinates in the image coordinate system

\(r_{11\,to\,33}\) = rotation elements of the EOP

\(T_{X}\), \(T_{Y}\), \(T_{Z}\) = translation elements of the EOP

\(f\) = focal length

\(n\) = 1 to the number of features
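A minimal sketch of the Equation (1) projection and the 3-pixel reprojection check follows. The rotation matrix `R` and translation `T` stand for the EOP elements; the nadir-looking camera and focal length below are purely illustrative.

```python
import numpy as np

def project(point, R, T, f):
    """Collinearity projection of Equation (1): rotate and translate the
    ground point into the camera frame, then divide by the third row."""
    d = R @ (np.asarray(point, float) - T)   # rows give the three numerators / the denominator
    return np.array([-f * d[0] / d[2], -f * d[1] / d[2]])

def reprojection_error(obs_xy, point, R, T, f):
    """Pixel distance between the observed and re-projected tiepoint;
    the paper discards tiepoints whose error exceeds 3 pixels."""
    return float(np.linalg.norm(project(point, R, T, f) - np.asarray(obs_xy, float)))

# illustrative nadir camera: identity rotation, 10 m above the ground point
R = np.eye(3)
T = np.array([0.0, 0.0, 10.0])
f = 1000.0  # focal length in pixels (assumed)

xy = project([1.0, 2.0, 0.0], R, T, f)
err = reprojection_error(xy, [1.0, 2.0, 0.0], R, T, f)
keep = err <= 3.0
```

With `d = (1, 2, -10)`, the projection gives `(100, 200)` pixels, and re-projecting the same point yields zero error, so the tiepoint is kept.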
Figure 4: reprojection error verification on our study
Figure 3: UAV photogrammetric block adjustment
Figure 2: Flowchart of the proposed method
The collinearity condition of Equation (1) was adopted as the block adjustment model in this study. The block adjustment was performed by recursive LSE (Least Squares Estimation) with weights and constraints, constructed as shown in Equation (2). In this experiment, the initial weights were set as shown in Table 2, considering the measurement error range. As the iterative block adjustment proceeded, the covariance matrix was calculated from the residuals, and the weights were updated automatically based on it.

\[WB\Delta=WC+V\]

\[\begin{pmatrix}W&0&0\\ 0&\dot{W}&0\\ 0&0&\ddot{W}\end{pmatrix}\begin{pmatrix}\dot{B}&\ddot{B}\\ \bar{I}&0\\ 0&\bar{I}\end{pmatrix}\begin{pmatrix}\dot{\Delta}\\ \ddot{\Delta}\end{pmatrix}=\begin{pmatrix}W&0&0\\ 0&\dot{W}&0\\ 0&0&\ddot{W}\end{pmatrix}\begin{pmatrix}\bar{e}\\ \dot{C}\\ \ddot{C}\end{pmatrix}+\begin{pmatrix}\bar{v}\\ \dot{V}\\ \ddot{V}\end{pmatrix} \tag{2}\]

where \(W\), \(\dot{W}\), \(\ddot{W}\) = weights for the collinearity observations and for the increments of the EOPs and the ground coordinates of the tiepoints

\(\dot{B}\), \(\ddot{B}\) = coefficients of the partial derivatives of the collinearity conditions with respect to the EOPs and the ground coordinates of the tiepoints

\(\bar{I}\) = identity matrix

\(\dot{\Delta}\), \(\ddot{\Delta}\) = increments of the EOPs and the ground coordinates of the tiepoints

\(\bar{e}\) = differences between observed and initial values for the collinearity equations

\(\dot{C}\), \(\ddot{C}\) = differences between observed and initial values for the EOPs and the ground coordinates of the tiepoints

\(\bar{v}\), \(\dot{V}\), \(\ddot{V}\) = residuals for the collinearity equations, the EOPs, and the ground coordinates of the tiepoints
| Parameter | Initial weight |
|---|---|
| Model (pixel) | 1.0 |
| EOP rotation (degree) | 10.0 |
| EOP translation (m) | 1.0 |

Table 2: Initial weights for block adjustment
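Dropping the constraint bookkeeping of Equation (2), each increment of the weighted least-squares estimation reduces to solving the weighted normal equations. The one-parameter toy problem below is illustrative only, not the paper's bundle block.

```python
import numpy as np

def weighted_lse_step(B, c, w):
    """One increment of the weighted LSE behind Equation (2): solve the
    normal equations (B^T W B) d = B^T W c, with W diagonal in the
    observation weights, for the parameter update d."""
    W = np.diag(w)
    N = B.T @ W @ B           # normal matrix
    return np.linalg.solve(N, B.T @ W @ c)

# toy design: two equally weighted observations of a single parameter x
B = np.array([[1.0], [1.0]])
c = np.array([2.1, 1.9])      # observed-minus-initial differences
d = weighted_lse_step(B, c, np.array([1.0, 1.0]))
```

With equal weights the update is simply the mean of the two observations; in the paper this solve is iterated while the weights are refreshed from the residual covariance.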
### TIN Generation and Assignment
After bundle adjustment, the tiepoints have corrected ground coordinates. In this paper, these points are defined as a rapid point cloud. The rapid point cloud describes the terrain of the target area and is therefore the key to mosaic generation in the proposed method. In most cases, the number of tiepoints in this point cloud is excessive, and they need to be sampled to reduce the computational complexity. In this study, the rapid point cloud was sampled at 5-meter intervals in the model space.
A TIN in the model space is formed by the rapid point clouds, as shown in Figure 5. It is based on the Delaunay triangulation algorithm, which is available in the OpenCV library. The TINs are then projected onto a reference plane to generate a mosaicked image, and the range of the TINs is formed as the range of the mosaicked image.
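The 5-meter sampling step can be sketched with a simple grid-cell filter; the paper does not specify its sampling rule, so keeping the first point per cell is an assumption. The thinned points would then be handed to a Delaunay triangulator such as the OpenCV one the paper mentions.

```python
import numpy as np

def sample_point_cloud(points, interval=5.0):
    """Thin the rapid point cloud to roughly one point per `interval`
    metres by keeping the first point that falls in each planar grid
    cell (assumed rule), prior to Delaunay triangulation."""
    kept, seen = [], set()
    for p in points:
        cell = (int(p[0] // interval), int(p[1] // interval))
        if cell not in seen:
            seen.add(cell)
            kept.append(p)
    return np.array(kept)

# three illustrative (x, y, z) model-space points; the first two share a cell
pts = np.array([[0.1, 0.2, 10.0],
                [1.0, 1.0, 11.0],
                [6.0, 0.5, 9.0]])
thinned = sample_point_cloud(pts)
```

The first two points fall in the same 5 m cell, so only two points survive the sampling.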
### TIN Facet Mosaicking
As noted in the introduction, relief displacement is a major source of error in mosaicked images. This study detects such errors from the slope of each TIN facet, computed as shown in Equations (3) and (4).

\[\vec{n}=\begin{bmatrix}n_{1}\\ n_{2}\\ n_{3}\end{bmatrix}=\overrightarrow{P_{1}P_{2}}\times\overrightarrow{P_{1}P_{3}} \tag{3}\]

\[\theta=\frac{\pi}{2}-\cos^{-1}\left(\frac{\sqrt{n_{1}^{2}+n_{2}^{2}}}{\sqrt{n_{1}^{2}+n_{2}^{2}+n_{3}^{2}}}\right) \tag{4}\]

where \(n_{1}\), \(n_{2}\), \(n_{3}\) = components of the normal vector

\(P_{1}\), \(P_{2}\), \(P_{3}\) = the three vertices of the facet

\(\theta\) = slope of the plane through \(P_{1}\), \(P_{2}\), \(P_{3}\)
The slope becomes larger in areas with tall objects such as buildings, where relief displacements are likely to occur. Therefore, facets with slopes above a certain angle can be defined as error-prone regions. In this study, we checked the error-prone region extraction results for three slope thresholds: 30, 40, and 60 degrees.
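Equations (3) and (4) translate directly into a facet-slope test. The example triangles below are illustrative, and the 45-degree cut is the threshold the paper ultimately selects in the results section.

```python
import numpy as np

def facet_slope_deg(p1, p2, p3):
    """Slope of a TIN facet per Equations (3)-(4): cross the two edge
    vectors for the normal, then take the angle between the facet plane
    and the horizontal."""
    p1 = np.asarray(p1, float)
    n = np.cross(np.asarray(p2, float) - p1, np.asarray(p3, float) - p1)
    horiz = np.hypot(n[0], n[1])                       # sqrt(n1^2 + n2^2)
    theta = np.pi / 2 - np.arccos(horiz / np.linalg.norm(n))
    return np.degrees(theta)

flat = facet_slope_deg([0, 0, 0], [1, 0, 0], [0, 1, 0])    # horizontal facet
steep = facet_slope_deg([0, 0, 0], [1, 0, 1], [0, 1, 1])   # tilted facet
error_prone = steep > 45.0   # threshold chosen in the experiments
```

A horizontal facet yields 0 degrees, while the tilted facet (rising 1 m over each unit step) comes out near 55 degrees and is flagged as error-prone.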
Figure 5: Concept of TIN generation and assignment
Figure 6: Concept of TIN facet’s slope
Figure 7: Concept of mosaicking using TIN facet
\[\begin{bmatrix}x^{\prime}\\ y^{\prime}\\ 1\end{bmatrix}=\begin{bmatrix}r_{1}&r_{2}&t_{1}\\ r_{3}&r_{4}&t_{2}\\ 0&0&1\end{bmatrix}\begin{bmatrix}x\\ y\\ 1\end{bmatrix} \tag{5}\]

where \(x\), \(y\) = image coordinates of an original point

\(x^{\prime}\), \(y^{\prime}\) = image coordinates of the transformed point

\(r_{i}\) = rotation coefficients of the affine model

\(t_{j}\) = translation coefficients of the affine model
After the transformation relationship is estimated, each image facet is stitched into the mosaic image through image warping. The process is repeated for all facets of the TIN, generating the mosaicked image. Finally, whole image patches are stitched over the error-prone regions of the mosaic image.
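Equation (5)'s six affine coefficients can be recovered from three (or more, in least squares) tiepoint correspondences by a linear solve; the correspondences below are synthetic, not measured ones.

```python
import numpy as np

def estimate_affine(src, dst):
    """Solve Equation (5) for the 2x3 affine parameter matrix M from
    point correspondences: each row [x y 1] maps to (x', y')."""
    src = np.asarray(src, float)
    dst = np.asarray(dst, float)
    A = np.hstack([src, np.ones((len(src), 1))])       # homogeneous [x y 1]
    M, *_ = np.linalg.lstsq(A, dst, rcond=None)        # exact for 3 points
    return M.T                                         # rows give x', y'

def apply_affine(M, pts):
    pts = np.asarray(pts, float)
    return pts @ M[:, :2].T + M[:, 2]

# synthetic mapping: scale x by 2, y by 3, then translate by (10, 20)
src = [[0, 0], [1, 0], [0, 1]]
dst = [[10, 20], [12, 20], [10, 23]]
M = estimate_affine(src, dst)
out = apply_affine(M, [[1, 1]])
```

Warping a facet then means sampling the source image through this per-facet transform, which is what the paper does for every TIN triangle.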
## 3 Results and Discussion
Tables 3 and 4 describe the block adjustment results for Datasets 1 and 2. For Dataset 1, 240794 tiepoints were extracted and used for block adjustment. In the estimation, the sigma naught of the EOP was about 0.001, which means that the estimated model was stable. After correcting the EOPs, the model error was 1.2974 pixels and the Y-parallax was 1.0165 pixels; these accuracies confirm that the relative error in the model space is small. For Dataset 2, 160949 tiepoints were used for block adjustment. Similar to Dataset 1, the sigma naught of the EOP was around 0.001; the model error was 1.2558 pixels, and the Y-parallax was 0.9263 pixels.
Tables 5 and 6 show the results of TIN facet mosaicking. For Dataset 1, 85181 rapid point cloud points were extracted from the 240794 tie points by re-projection error verification. Then, through sampling at 5 m intervals, 6655 points were selected for mosaicking. The TIN generated from this final rapid point cloud consisted of 13272 facets, as shown in Figure 8. For Dataset 2, 39984 rapid point cloud points were extracted, and 4044 points were selected for mosaicking. The generated TIN consisted of 7688 facets, as shown in Figure 9.
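The reduction from tie points to a sampled rapid point cloud can be approximated as below. This is a hedged sketch: the reprojection-error threshold is an assumed value (the paper does not state it), and only the 5 m grid spacing comes from the text.

```python
import numpy as np

def thin_points(points, errors, max_err=1.0, spacing=5.0):
    """Keep ground points whose reprojection error is below max_err,
    then retain one point per spacing-by-spacing grid cell (5 m here).

    `points` is Nx2 (X, Y in metres); `errors` holds N reprojection
    errors in pixels. max_err is an illustrative threshold, not taken
    from the paper.
    """
    points = np.asarray(points, float)
    good = points[np.asarray(errors) < max_err]
    kept, seen = [], set()
    for p in good:
        cell = (int(p[0] // spacing), int(p[1] // spacing))
        if cell not in seen:  # first point wins within each cell
            seen.add(cell)
            kept.append(p)
    return np.array(kept)
```

The surviving points would then be triangulated (e.g. with a Delaunay routine) to build the TIN whose facets drive the mosaicking.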
Figure 12: Relief displacement regions detected for each slope threshold from 30 to 60 degrees for dataset 1
Figure 10: Initial mosaicked image for dataset 1
Figure 13: Final mosaicked image of proposed method for dataset 1
Figure 16: Relief displacement regions detected for each slope threshold from 30 to 60 degrees for dataset 2
Figure 17: Final mosaicked image of proposed method for dataset 2
Figure 14: Initial mosaicked image for dataset 2
Figure 15: Slopes map for the TIN for dataset 2
Figure 10 shows the initial mosaicked image and slope map for Dataset 1. The facets with high slopes were located on trees, buildings, etc., and the mosaic image was distorted within those facets. The results for Dataset 2 in Figure 14 are similar to those for Dataset 1. The results of thresholding the slopes are shown in Figures 12 and 16. As we increased the threshold value from 30 to 60 degrees, only facets around relatively high building areas were extracted. Therefore, by comparing the initial mosaic results with the extracted error-prone regions, we determined the optimal slope threshold value; in this experiment, it was 45 degrees. Figures 13 and 17 show the final mosaic results after removing the error-prone regions. The error-prone regions in the initial mosaic image were successfully removed in the final mosaic image.
## 4 Conclusions
In this study, we utilized TINs for UAV image mosaicking. We aimed to verify that a TIN can be used to mosaic UAV images without a DSM when the TIN is constructed from multiple image points generated through rigorous bundle adjustment. We also tried to reduce the mismatching errors at junction areas caused by relief displacements. An area with several buildings on a flat surface was selected as the target area, and the images were taken with a fixed-wing UAV.
We first generated a mosaicked image using the initial seamline constructed from a TIN. Errors caused by relief displacement appeared in the form of mismatches at the seamlines. When compared with the slopes of the TIN facets, severe distortions occurred mostly on facets with high slopes. The relief displacement regions detected for various slope thresholds showed that buildings of various sizes could be detected. Finally, the mosaicked image generated after elimination of the error-prone regions showed that seamline mismatches due to buildings were removed.
This study showed that the proposed method can produce mosaicked images of stable quality using the geometric clues of a TIN. We expect that our method can be used for UAV image mosaicking robust to mismatching factors.
## Acknowledgements
This study was carried out with the support of the "Cooperative Research Program for Agriculture Science and Technology Development (Project No. PJ0162332022)", Rural Development Administration, Republic of Korea.
isprs | SEAMLINE OPTIMIZATION FOR UAV IMAGE MOSAICKING USING GEOMETRY OF TRIANGULATED IRREGULAR NETWORK | S. Yoon, T. Kim | https://doi.org/10.5194/isprs-archives-xlviii-1-w2-2023-1915-2023 | 2023 | CC-BY
# The European Research Infrastructure for Heritage Science (E-RIHS)
[PERSON]
[PERSON]
1 National Institute of Optics, National Research Council, Largo Fermi 6, 50125 Florence, Italy
[EMAIL_ADDRESS] [EMAIL_ADDRESS]
###### Abstract
The European Research Infrastructure for Heritage Science (E-RIHS) entered the European strategic roadmap for research infrastructures (ESFRI Roadmap [1]) in 2016 as one of its six new projects. E-RIHS supports research on heritage interpretation, preservation, documentation and management. Both cultural and natural heritage are addressed: collections, artworks, buildings, monuments and archaeological sites. E-RIHS aims to become a distributed research infrastructure with a multi-level star structure: facilities from single countries will be organized in national nodes, coordinated by National Hubs. The E-RIHS Central Hub will provide the unique access point to all E-RIHS services through coordination of the National Hubs. E-RIHS activities have already started in some of its national nodes. In Italy, access to some E-RIHS services started in 2015. A case study concerning the diagnostics of a hypogean cave is presented.
Research Informatics, Heritage Science, MOLAB +
Footnote †: Corresponding author
## 1 Introduction
Tangible cultural and natural heritage are key components of the European identity. The study and preservation of cultural and natural heritage is a global challenge for science and for European society. The European Research Infrastructure for Heritage Science (E-RIHS) supports research on heritage interpretation, preservation, documentation and management. Cross-disciplinary groups of researchers will provide state-of-the-art tools and services to cross-disciplinary users and scientific communities working to advance knowledge about heritage and to devise innovative strategies for its preservation. E-RIHS connects researchers in the humanities and natural sciences and fosters a trans-disciplinary culture of exchange and cooperation. E-RIHS pursues the integration of excellent European facilities to create a cohesive entity playing a connecting role in the global community of heritage science.
### Access platforms
E-RIHS will provide state-of-the-art tools and services to cross-disciplinary research communities of users through its four access platforms:
1. MOLAB: access to advanced mobile analytical instrumentation for diagnostics of heritage objects, archaeological sites and historical monuments. The MObile LABoratories will allow users to carry out complex multi-technique diagnostic projects, enabling effective in situ investigation.
2. FIXLAB: access to large-scale and specific facilities with unique expertise in heritage science, for cutting-edge scientific investigation on samples or whole objects, revealing micro-structures and chemical composition, giving essential and invaluable insights into historical technologies, materials and species, their context, chronologies, alteration and degradation phenomena.
3. ARCHLAB: physical access to archives and collections of prestigious European museums, galleries, research institutions and universities containing non-digital samples and specimens and organized scientific information.
4. DIGILAB: virtual access to tools and data hubs for heritage research - including measurement results, analytical data and documentation - from large academic as well as research and heritage institutions.
## 2 E-RIHS Preparatory Phase

E-RIHS is focused on the preservation of the world's heritage by enabling cutting-edge research in heritage science, liaising with governments and heritage institutions to promote its constant development and, finally, raising the appreciation of the general public for cultural and natural heritage and the recognition of its historic, social and economic significance. The goal of E-RIHS is also the promotion and harmonization of joint research activities in heritage science (HS), the provision of advanced training to HS researchers and students, the dissemination and exploitation of research results, and the contribution to knowledge transfer and researcher mobility.

The proposal for establishing a GRI (Global Research Infrastructure) based on the E-RIHS partnership was submitted to the GSO [2] by Italy in 2014. An international initiative will be carried on in parallel with the preparatory phase of E-RIHS for connecting and including partners and facilities outside the EU, gradually reaching the status of a global distributed research infrastructure of which E-RIHS could be the leading scientific partner. E-RIHS is collaborating on this roadmap with the intergovernmental organization ICCROM [3].

In February 2017, E-RIHS started its preparatory phase, supported by the EU project E-RIHS PP (H2020-INFRADEV-02-2016). E-RIHS is a pan-European distributed infrastructure supported by 15 Member States plus Israel, with the participation of six more EU and associated countries.

The E-RIHS star-design structure has its Central Hub and headquarters in Florence (IT) and comprises National Hubs - possibly organised in Regional Hubs in some countries - encompassing specialised knowledge, fixed and mobile national facilities of recognized excellence, physically accessible collections/archives and remotely accessible heritage data. Representatives of twenty-one countries plus three international organizations are now working together to prepare E-RIHS to be launched as a standalone European Research Infrastructure Consortium (ERIC) in 2021.
## 3 E-RIHS Italian Node
### National activities
The Italian hub, E-RIHS.it, is currently structuring the governance and coordination of the E-RIHS national node. Up to now, the E-RIHS access services have been operated at the national level through the E-RIHS.it project under the coordination of the National Research Council (CNR), receiving on average €400,000 of annual funding from the Ministry of Education, University and Research (MIUR). The first pilot call for access to the mobile national laboratories of E-RIHS.it was launched in 2015. Following an international peer review process, seven access projects were selected and carried out.
### Project DiaCavHE
In the competitive selection procedure, the project DiaCavHE (Save Rocks and Colours: A Code of Conduct for a "Context Diagnostic" of Cave Heritage), led by [PERSON] (University of Salento), was approved. The ultimate goal of this research project was to define a model diagnostic approach for hypogean churches, which are rather common in certain areas of southern Italy. The cave church of Sant'Angelo in Casalrotto (Mottola, Taranto, Italy), with its extensive wall paintings, was the object of the study [4-7]. The MObile LABoratory (MOLAB) was called in to characterize the painting technique, the pigment composition and the state of conservation of the cave church and, finally, to produce its 3D model. The church of Sant'Angelo (Fig. 1) is a unique object, being the only rock church in the whole national territory structured on two levels (Fig. 2), with the upper church serving the liturgy and the lower one used for funerary functions, as evidenced by the presence of singly deposited medieval tombs (Fig. 3). From a planimetric point of view, the lower church, smaller than the upper one, is more organic, conceived according to a unitary architectural criterion and within a narrower time span, with three naves divided by two monolithic pillars. The space was probably also used for worship, but mainly served as a funerary crypt, as supported by the indisputable testimony of the tombs dug and lined up in the floor.
In this hypogean context, several Institutes of the National Research Council were involved to intervene with state-of-the-art MOLAB techniques: X-ray fluorescence (Institute of Molecular Science and Technologies, ISTM) for identification of pigments; Ion Chromatography (Institute of Archaeological and Monumental Heritage, IBAM) for analysis of soluble salts and portable microscope for ultraviolet-induced multispectral fluorescence (developed by National Institute of Optics, INO) to map presence of patinas of organic origin (lichens and fungi) (Fig. 4).
Time-of-flight laser scanning (Institute of Information Science and Technologies, ISTI) served for the reconstruction of the 3D model (Fig. 5). To scan the two levels of the church, 16 scans were acquired on the upper floor and 9 scans on the lower floor, each single scan containing approximately 6.5 million 3D points. It was necessary to survey the structure from different observation points in order to completely sample the inner surface. The high number of shooting positions is due to the particular architectural structure of the church, where the presence of columns, pillars and arches inside spaces and niches made it extremely difficult to select an optimal set of shooting positions. In addition, 4 scans were made of the outside of the entrance in order to have data on the external entry area of the artefact.
The model thus obtained has been simplified (controlled reduction of geometric complexity) to produce a version of 20 million triangles, on which the colour obtained from 170 photos (each 24 Mpix) was mapped (Fig. 7). Among the frescoes present, three were selected, for which high resolution images were used.
The phenomenon of rock-cut settlement assumed, in the medieval era, a role of great relevance in the artistic expression of southern Italy. The results provided by access to the E-RIHS.it MOLAB facilities lead to the conclusion that the hypogeum presents two different conservation profiles, relating to the architectural structure and to the parietal decorations. The first concerns widespread shallow cracking of the rock, which however does not currently affect the statics of the building, nor are there signs of possible rock detachments that would compromise the usability of the site. The performed analyses contributed to the understanding and, mainly, the mapping of some degradation phenomena and mechanisms, providing at the same time 3D models, which will be made available on the web through the 3DHOP tool developed by the Visual Computing Lab of ISTI for the remote fruition of the artefact.
## 4 Conclusion
E-RIHS access to mobile instrumentation through its MOLAB platform, currently accessible to researchers by applying to the EU integrating activity project IPERION CH (www.iperionch.eu), provides unique tools for gathering new knowledge where inaccessible or immovable heritage is to be studied. Along with the analytical measurements from the FIXLAB platform, the acquired new knowledge will be made usable in DIGILAB, where it will be stored for documentation and for future research needs.
## Acknowledgements
We thank [PERSON] for allowing the use of the iconographic materials of project DiaCavHe. Thanks to the Visual Computing Group of ISTI-CNR, lead by [PERSON], for the use of the images of the 3D model.
Figure 4: Mapping of organic contaminants with the portable UV multispectral fluorescence microscope.

Figure 5: The global 3D model obtained after the fusion phase of single shots (complexity of 50 M triangles, model density about 1 point per 5 mm).

Figure 6: 3D model with mapped colour for the fruition of the church interior.

## References

* [1] The ESFRI Roadmap (European Strategy Forum on Research Infrastructures) is a long-term plan adopted by the European Commission listing the research infrastructures considered strategic for the development of European research in all scientific domains.
* [2] The GSO (Group of Senior Officials) was instituted by the Carnegie Group of G7 Science Advisory. In 2013 the GSO proposed a Framework for Global Research Infrastructures. It is now composed of government representatives from the G7+5 countries.
* [3] The International Centre for the Study of the Preservation and Restoration of Cultural Property, ICCROM (http://www.iccrom.org), is an intergovernmental organization created by UNESCO in 1956. ICCROM headquarters are in Rome.
* [4] Le aree rupestri dell'Italia centro-meridionale nell'ambito delle civiltà italiche: conoscenza, salvaguardia, tutela, Atti del IV Convegno internazionale sulla civiltà rupestre (Savelletri di Fasano, 2009), Spoleto, 2011.
* [5] [PERSON], Analisi del degrado delle pitture rupestri in grotta, in Atti del Convegno internazionale sulla civiltà rupestre "Quando abitavamo in grotta" (Savelletri di Fasano, 2003), Spoleto, 2004, pp. 61-82.
* [6] [PERSON], [PERSON], Le chiese rupestri di Puglia e Basilicata, Bari, 1998, p. 222.
* [7] [PERSON], Pittura monumentale bizantina in Puglia, Milano, 1991.
isprs | THE EUROPEAN RESEARCH INFRASTRUCTURE FOR HERITAGE SCIENCE (E-RIHS) | J. Striova, L. Pezzati | https://doi.org/10.5194/isprs-archives-xlii-2-w5-661-2017 | 2017 | CC-BY
|
# Production of Landsat 7 Orthoimages Covering the Canadian Landmass
David Belanger
Centre for Topographic Information in Sherbrooke
2144 King Street West
Sherbrooke
Canada
###### Abstract
Despite the significant amount of geospatial data across the country, spatially integrating it can prove difficult due to the range of accuracies. Currently, there is no single source that offers accurate, consistent data for Canada as a whole. A project is now underway, however, that could provide elements of a solution to this integration issue. Production of Landsat 7 orthoimages started in fall 2000 and is expected to yield complete coverage of the country by 2004.
The Centre for Topographic Information in Sherbrooke (CTIS), part of Natural Resources Canada, is responsible for management, development, and production of the project. In order to reduce costs and use the most accurate control points possible, CTIS entered into a number of partnerships with various public-sector agencies in the field of geomatics. In order to obtain the highest image accuracy, a rigorous mathematical model and uniform methodology in selecting the control points are used. Contracts for delimiting entities to serve as control points for the images were awarded to geomatics firms.
DEM, Landsat 7 satellite, orthoimage, projection, geospatial data, aerotriangulation, partnership, ground control points
## 1 Introduction
Sources for updating and acquiring new topographic data have always been very expensive and hard to find. Moreover, given the broad diversity of geospatial data in the country and the concomitant differences in accuracy, vertical integration of such data has always created headaches for users. The launch of the Landsat 7 satellite in 1999 gave CTIS an opportunity to remedy these problems in a consistent, economical manner.
Image resolution (15 m for the panchromatic band), cost, and flexibility with respect to user and distribution rights make Landsat 7 a very attractive data source for a number of producers and users of geospatial data. As a result, CTIS signed partnership agreements with the main stakeholders in the field of geomatics in Canada (federal, provincial, and territorial levels) to produce Landsat 7 orthoimage coverage of the entire Canadian landmass.
Production of the orthoimages started in fall 2000 and is scheduled to run for a period of three to four years, depending on the weather, since the images used must have the least amount of cloud coverage possible. This article describes the responsibilities of the partners in the project, the process developed to implement national production of orthoimages, the accuracy achieved for the images already produced, and the Geomatics Canada products derived from the orthoimages.
## 2 Partners in a Win-Win Project
In 1999, a cooperation agreement between the various departments of the federal and provincial governments was signed in order to produce Canadian Landsat 7 orthoimage coverage that is accurate and reasonable in cost. This agreement provides access to the country's most accurate topographic data that can be used to carry out geometric correction of the images.
(Symposium on Geospatial Theory, Processing and Applications / Symposium sur la théorie, les traitements et les applications des données géospatiales, Ottawa 2002)

Most partners contribute financially to acquiring and orthorectifying the images. To date, 14 federal-government organisations as well as all the provinces and territories of Canada have joined in this partnership agreement.
As a result of this agreement, it was agreed that accurate provincial and/or federal data (National Topographic Data Base (NTDB) and road network GPS) would serve as ground control points for the south of the country, while federal-government aerotriangulation blocks would be used for the north. The digital elevation model (DEM) used in correcting the images can comprise a combination of provincial data (depending on availability) completed with Canadian Digital Elevation Data (CDED, digital elevation data derived from 50K and 250K contours).
While the Canada Centre for Remote Sensing provides management for the stations receiving the raw images, the responsibility for development, production management, and even product delivery has been given to CTIS.
The GeoConnections program, through its framework data component, contributes to project funding. Indeed, one of the program objectives is to fund part of the projects making it possible to establish a national infrastructure, which would facilitate data integration. This integration will provide for better analysis of the information and simplify development of new products. Since the orthoimages produced will cover all of Canada, they could be used as a common base for many partners, which would facilitate integration of geospatial data.
The orthoimages produced in partnership comprise 9 spectral bands: a panchromatic band with a pixel size of 15 metres, 6 multispectral bands with a pixel size of 30 metres, and 2 thermal bands with a pixel size of 60 metres. The orthoimages are based on the NAD83 coordinate reference system and are provided in several projections depending on partner needs. The projections include the universal transverse Mercator (UTM), the Lambert conformal conic projection (LCC), and the Albers equal area map projection. Images produced with the UTM projection that overlap two 6-degree UTM zones are generated in both zones.
## 3 A National Context for Producing Orthoimages
Accuracy is one of the unavoidable criteria for this production project. In order to obtain the highest image accuracy, a rigorous mathematical model and uniform methodology in selecting the control points are used, in addition to accurate control data. Given the number of images involved (700) and the length of the project (5 years), several steps must be automated in order to reduce quality-control costs. This section discusses the main features of the process put in place by CTIS to ensure the highest accuracy for the national orthoimages. It presents, among other things, the sources used, the mathematical model, the methodology underlying the selection of control points, outsourcing, the quality-control process, and the orthoimage generation process.
### Sources of Control Data
A variety of sources are used for image correction; the most accurate are given priority. The usual priority ranking is: the GPS source for the National Road Network, provincial vector data, accurate NTDB data (10 m at 90%), federal-government aerotriangulation data, and other sources.
### Mathematical Model
The mathematical model used to orthorectify Landsat 7 images was developed by Dr. [PERSON] of the Canada Centre for Remote Sensing. It was implemented and marketed by PCI Geomatics. The decision to use this mathematical model was based on the fact that it provides a rigorous geometric correction method that takes all distortions into account. The model is based on principles from orbitography, photogrammetry, geodesy, and cartography ([PERSON] et al., 2000).
### Methodology for Selecting Ground Control Points
Given the national scope of this project and Canada's geographic situation, a rigorous methodology that could be used for the entire country had to be developed. Consequently, tests were carried out on images covering different types of relief before work began on developing the inspection and image correction processes. Establishing uniform selection criteria for the entire country made it possible to analyse the entity types visible in the images and available in the sources. Indeed, these tests made it possible to assess the number of control sectors per image, the number of ground control points required per sector, the accuracy of the types of entities selected, and the impact of ground control points located in the sectors of minimum and maximum elevation. The rules needed to ensure good accuracy when creating image mosaics were also verified.
Among other things, testing confirmed that ground control points evenly distributed in the six base sectors around the perimeter of the image as well as in the sectors of minimum and maximum elevation yielded the highest accuracy when correcting the full satellite scene ([PERSON] et al., 1996). A sector covers approximately one National Topographic System (NTS) tile at the 1:50 000 scale or a photogrammetric model when the source is federal-government aerotriangulation data. Since the various Landsat 7 satellite scenes are superimposed (30% to 80% in Canada), the theoretical superimposition of adjacent images is taken into consideration. Therefore, when an image is processed, the base sectors of adjacent images located in the superimposition zone are added. These additional sectors do not improve accuracy but ensure that the same ground control points are used in the overlapped areas. This makes it possible to maintain coherency in correcting adjacent images and improves the quality when creating image mosaics. The method requires three points per sector in order to ensure an excess of points for final selection.
Testing also demonstrated that the centres of mass of lakes/islands and road intersections are the two types of control points that yield the best results. In order to prevent inaccuracies, these ground control points must comply with certain well-defined rules. For example, entities corresponding to lakes and islands must not be selected if the expanse of water is not stable. Indeed, these entities must not be selected if a dam is located nearby or if there are adjacent wetlands or sand. Since the water level can vary over time, these situations can affect the quality of control point accuracy. The entities are selected for stability over time so that they can be reused for geometric correction of images in the future.
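The centre of mass of a lake or island outline mentioned above can be computed with the standard shoelace (polygon centroid) formula. This is a minimal sketch, not CTIS's implementation:

```python
def polygon_centroid(ring):
    """Centre of mass of a simple polygon (e.g. a lake outline) via the
    shoelace formula; such centroids serve as stable ground control points."""
    a = cx = cy = 0.0
    n = len(ring)
    for i in range(n):
        x0, y0 = ring[i]
        x1, y1 = ring[(i + 1) % n]  # wrap around to close the ring
        cross = x0 * y1 - x1 * y0
        a += cross
        cx += (x0 + x1) * cross
        cy += (y0 + y1) * cross
    a *= 0.5  # signed polygon area
    return cx / (6.0 * a), cy / (6.0 * a)
```

For a unit square `[(0, 0), (1, 0), (1, 1), (0, 1)]` the centroid is (0.5, 0.5); the formula is insensitive to vertex ordering direction, which suits digitized lake outlines from mixed sources.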
The outcome of these tests made it possible to establish criteria for ground control point selection. These criteria served as the basis for outsourcing specifications and for developing a uniform methodology for producing orthoimages for the entire country.
### Awarding of Contracts to Geomatics Firms
The work of delimiting the entities used for image orthorectification is contracted out to Canadian geomatics firms. Approximately 700 Landsat 7 images are needed to provide coverage of Canada. Blocks of 20 to 40 images are contracted out through requests for proposals (RFP). Bidders must demonstrate that their processes are able to meet the technical specifications developed and issued by CTIS. Planning the areas for collecting control entities; selecting or capturing control entities and image entities, such as lakes or road intersections; collecting metadata; and selecting the most accurate sources are examples of what is required.
Two types of contracts are open to firms depending on whether the control-data source is vector or federal-government aerotriangulation data. When vector data sources are used, it is only necessary to capture the control points on the raw image. On the other hand, when federal-government aerotriangulation data is used, the contractor must extract the same entities from both the raw image and, using the photogrammetric process, from the aerial photographs used in producing the federal-government aerotriangulation data.
CTIS, for its part, is responsible for controlling data capture quality, calculating road intersections and the centres of mass of lakes and islands, final selection of ground control points, and geometric correction of the images.
### Quality Control
In order to minimise the time and resources associated with quality control, CTIS has put in place a computerised quality-control process that consists of automated and semi-automated data processing (see Figure 1). The automatic quality-control process verifies the structure of the delivered entities and their compliance with the related metadata. Contractors deliver the data directly to an FTP site. During delivery, automated inspection is carried out, including computation of a mathematical correction model. The process sends an acceptance or rejection report to the contractor by e-mail with no manual intervention. A bonus/penalty system gives contractors an incentive to strive for quality. Once the structure has been accepted, the semi-automated process is initiated. This process verifies that the choice of entities and the data-capture quality with respect to the source comply with specifications. A three-person team carries out the quality control of 250 orthoimages per year. Given the excess of points required in each area, bad points can be eliminated, reducing returns to the contractor.
### Generating Orthoimages
After quality control, another completely automated process is executed. This process calculates the final mathematical model for geometric correction, generates a digital elevation model (a merge of various DEMs) for the complete satellite scene, generates the orthoimage metadata, and produces the orthoimages in the projections requested by partners (see Figure 1). The burning of CDs is then carried out under contract. The control entities and image entities used to correct the images are stored in a spatial database (Oracle). This database makes it possible to reuse the entities when correcting more recent images of the same area.
Many images from across Canada have already been processed in this way and delivered to partners (see Figure 2). In addition, many contracts are now underway. We hope to have complete coverage of Canada with Landsat 7 orthoimages by 2004.
## 4 Orthoimage Accuracy
A number of parameters are analysed in order to establish orthoimage accuracy with an acceptable level of confidence. These parameters include the mean square error (MSE) along the x-axis and y-axis of the ground control points after the correction model has been generated, the accuracy of the image control data, and the altimetric and planimetric accuracy of the digital elevation models (DEMs) covering the satellite scene. In order to determine the impact of the DEM on orthoimage accuracy, a slope analysis is carried out for the entire satellite scene using Canadian Digital Elevation Data (CDED) at the 1:250 000 scale. This assessment takes into account the maximum slope of the terrain and the maximum angle of incidence of the sensor. Once all the errors have been combined, the accuracy is calculated at a 90% level of confidence (Circular Map Accuracy Standard (CMAS); CCSM, 1984).
More than 93% of the orthoimages generated to date have an accuracy of less than 20 m (see Figure 3 and Table 1) with a level of confidence of 90% (Circular Map Accuracy Standard (CMAS)). The production of Landsat 7 orthoimages will make it possible to have the most accurate and consistent source for complete coverage of the country.
The mean square error (MSE) combined in X and Y corresponds to an accuracy with a level of confidence of 63.21%. The CMAS is more commonly used in cartography; it corresponds to the circular MSE multiplied by a factor of 1.5174 (CCSM, 1984). This
\begin{table}
\begin{tabular}{|l|l|l|l|} \hline
**Province or Territory** & **Number of** & **Average MSE** & **Average CMAS (metres)** \\ & **Images** & **(metres)** & \\ \hline Newfoundland & 0 & -- & -- \\ \hline Nova Scotia & 1 & 12 & 18 \\ \hline Prince Edward Island & 1 & 12 & 18 \\ \hline New Brunswick & 2 & 14 & 21 \\ \hline Quebec & 5 & 9 & 13 \\ \hline Ontario & 41 & 10 & 15 \\ \hline Manitoba & 22 & 11 & 16 \\ \hline Saskatchewan & 26 & 10 & 15 \\ \hline Alberta & 25 & 10 & 15 \\ \hline British Columbia & 3 & 14 & 21 \\ \hline Yukon & 0 & -- & -- \\ \hline Northwest Territories & 22 & 9 & 14 \\ \hline Nunavut & 43 & 9 & 14 \\ \hline \end{tabular}
\end{table}
Table 1: Accuracy to the nearest metre of orthoimages by province or territory according to level of confidence as of March 26, 2002.
Figure 3: Position and accuracy (to 90%) of images orthorectified as of March 26, 2002. Figure 3 illustrates the accuracy of corrected images with an interval of 5 m, each corresponding to a different color (hue).
represents a circular accuracy with a level of confidence of 90% and is used in assessing the orthoimage accuracy. An image may be located in more than one province or territory.
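The MSE-to-CMAS relation stated above can be captured in a short helper. The 1.5174 factor is from the text (CCSM, 1984); the root-sum-square combination rule in `combined_mse` is an assumed convention, not spelled out in the source.

```python
import math

CMAS_FACTOR = 1.5174  # circular MSE -> 90% confidence (CCSM, 1984)

def cmas(circular_mse):
    """CMAS (90% confidence) from the circular MSE, as stated in the text."""
    return CMAS_FACTOR * circular_mse

def combined_mse(mse_x, mse_y):
    # Assumed combination rule (not spelled out in the text):
    # root-sum-square of the per-axis errors.
    return math.hypot(mse_x, mse_y)
```

For example, Ontario's average MSE of 10 m gives cmas(10) ≈ 15.2 m, consistent with the 15 m average CMAS reported in Table 1.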
## 5 New Products from Orthoimages
Each of the partners can create their own products derived from the Landsat 7 orthoimages. Geomatics Canada intends to use orthoimages for many different applications; a number of products have already been developed.
### _Geospatial Database_
CTIS uses the Landsat 7 orthoimages for updating vector data themes such as the hydrographic network, built-up areas, vegetation, and designated areas. Other themes that cannot be updated by the Landsat 7 source can come from partners (e.g. rail network, airport) or by contracting out (e.g. GPS road network). These will enable CTIS to offer customers a new range of digital topographic products, which will be more accurate and up-to-date than the existing National Topographic Data Base (NTDB).
### _CanImage_
A new Landsat 7 orthoimage product based on National Topographic System (NTS) divisions at the 1:50 000 scale, called CanImage, has been available from CTIS since March 2002 for just $25.00. It is distributed through the CTIS online purchasing and subscription site at http://www.ctis.nrcan.gc.ca. CanImage, offered in the GeoTIFF (RGB) format, enables users to select a combination of three bands, an enhancement, and a coordinate system (UTM or geographic) for the orthoimage. The product is also available without enhancement for more specific needs. Since each band has 8-bit radiometry (values from 0 to 255), the combination of the three bands yields a 24-bit RGB (red, green, and blue) color composition.
All spectral bands available for CanImage have a pixel size of 15 metres. The high-resolution panchromatic band (15 metres) can be merged with Landsat 7's multispectral bands (originally 30 metres). As a result of this technique, integrated into PCI Geomatics software, the resolution of multispectral bands 1, 2, 3, 4, 5, and 7 changes from 30 metres to 15 metres. The technique enhances image detail for viewing with very little modification of the distinctive spectral characteristics of each band. When a single orthoimage cannot entirely cover a data set, a mosaic of orthoimages is produced by adjusting the frequency histograms.
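The pan/multispectral merge can be illustrated with a simple Brovey-style transform. This is a generic sketch of the idea, not the proprietary PCI Geomatics algorithm the text refers to; the nearest-neighbour upsampling is likewise an assumed choice for the example.

```python
import numpy as np

def upsample2x(band):
    # Nearest-neighbour 30 m -> 15 m upsampling (assumed resampling choice).
    return np.repeat(np.repeat(band, 2, axis=0), 2, axis=1)

def brovey_pansharpen(ms, pan):
    """ms: multispectral bands already on the pan grid, shape (bands, H, W);
    pan: panchromatic band, shape (H, W).
    Each band is scaled by pan / mean(ms), so spatial detail comes from
    the pan band while band ratios (spectral character) are preserved."""
    ms = ms.astype(float)
    intensity = ms.mean(axis=0)
    ratio = pan / np.maximum(intensity, 1e-9)  # avoid division by zero
    return ms * ratio
```

Because every band at a pixel is multiplied by the same ratio, the between-band ratios are unchanged, which is the sense in which spectral characteristics are "little modified".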
CanImage GeoTIFF files can be opened in different applications in response to specific needs: geographic information systems (GIS) for land-management applications, mapping software to visually represent a region, remote-sensing and digital image processing software, and many others.
### _Toporama_
Toporama, in existence since 1999, is an Internet window on the different types of topographic information (products) available from CTIS. It offers free data as raster images (GIF files) and gives a multitude of users from across the world, from computer novices to experts in geomatics and cartography, access to the site and its images. A Toporama orthoimage is a low-resolution raster representation of the CanImage product. Orthoimages are made available by NTS number on the Toporama site according to the CTIS production program. Toporama orthoimages combine three spectral bands in the visible wavelengths (red, green, blue) into a 256-color composition (GIF file). This combination of spectral bands makes it possible to view features on the images in colors similar to what the human eye sees (vegetation in green, water in dark blue). Anyone who wants to consult topographic information for Canada can visit Toporama at http://toporama.cits.nrcan.gc.ca.
### GeoConnections Discovery Portal
Landsat 7 orthoimages are available at no charge from the GeoConnections Discovery Portal. The site offers orthoimages based on the following specifications: UTM or LCC projection, complete scenes by band, color composite of bands 7, 4, and 3 in GeoTIFF format, and half or quarter images. To find out more about this portal, visit http://geoconnections.org.
## 6 Conclusion
It is certain that this project for producing Canadian coverage with Landsat 7 orthoimages stands out as a distinctive example of partnerships between different public-sector organisations and the various levels of government. Through its agreements, this project has brought together the resources and data from 27 partners in the federal, provincial, and territorial governments.
Orthoimages represent a major source of data for all stakeholders in geomatics because they provide complete coverage of the country (south of the 81\({}^{\text{st}}\) parallel) at an accuracy that is better than acceptable. The creation of national coverage with Landsat 7 orthoimages makes it possible to implement an infrastructure shared by many partners and users of geospatial data. In addition to being a very valuable source of raster information, this orthoimage coverage also provides for integrating data from different sources, updating existing data, and creating new geospatial data in order to respond to user needs.
## References
CCSM (1984) National standards for the exchange of digital topographic data, Data Classification, Quality Evaluation and EDP File Format, Volume 1, E.M.R. Canada: 153,157
[PERSON], [PERSON], [PERSON] (2000) Unlocking the Potential for Landsat 7 Data, EOM, 28-31.
Natural Resources Canada (1996) Standards and Specifications of the National Topographic Data Base, Minister of Supply and Services Canada, Catalogue No. M52-70/1996E.
[PERSON], [PERSON], [PERSON], [PERSON] (1994) La correction géométrique d'images satellitaires pour la base nationale de données topographiques, Geomatica, 48:193-207.
[PERSON], [PERSON] (1992) La création d'ortho-images avec MNA : description d'un nouveau système, Journal canadien de télédétection, 18:136-141.
[PERSON] (1995) Intégration de données multi-sources : comparaison de méthodes géométriques et radiométriques, International Journal of Remote Sensing, 16:2795-2811.
[PERSON] (1996) La correction géométrique rigoureuse : un mal nécessaire pour la santé de vos résultats, Journal canadien de télédétection, 22:184-189.
J Cihlar, L St-Laurent, M D'Iorio, D Mullins (1994). Remote sensing/geographic information system database for monitoring Canadian landmass. ISPRS. https://doi.org/10.4095/193932. CC-BY.
# Visibility Analysis of Huge Outdoor Advertisements along Guadalupe Bridge in EDSA Highway from Structure-from-Motion Photogrammetry
[PERSON]
[PERSON]
[PERSON]
Department of Geodetic Engineering, University of the Philippines, Diliman, Quezon City, Philippines [EMAIL_ADDRESS], [EMAIL_ADDRESS], [EMAIL_ADDRESS]
###### Abstract
When it comes to business and marketing, huge outdoor advertising is considered one of the best channels, contributing largely to disseminating information about a product or service, or to raising awareness. With commuters and people riding in moving cars as its target audience, the placement of advertising materials is crucial, since an advertisement must be visible and must deliver its message in a short span of time. This study tests a methodology for gathering data using action and DSLR cameras mounted on or held inside a moving vehicle and for applying structure-from-motion techniques to extract the geometry of billboards from the resulting point cloud, so that the billboards can be represented in three-dimensional space. These extracted geometries were used for visibility analysis from a passenger's point of view by assessing the percentage of visible content and logos of each billboard from each observation point along the path of a moving vehicle. The results of this study are nine sets of mean percent visibilities and raster representations that show the mean percent visibility of the billboards as viewed from the road of interest. To assess the product-placement effectiveness of the billboards, the visibility percentage of the product logos contained in the nine billboards was also obtained.
Keywords: Structure from Motion, Close-Range Photogrammetry, Visibility Analysis, 3D GIS, Outdoor Advertising (2019)
## 1 Introduction
### Background of the Study
In advertising, there are numerous means for communicators to reach their target audiences. A communication tool that has been growing in popularity is outdoor advertising. These advertisements have the unique ability to display messages 24 hours a day, seven days a week. Drivers and commuters pass by the same messages numerous times, which makes this kind of media effective [12]. Various kinds of these huge advertisements are commonly placed in urban areas and cities.
Here in the Philippines, in Metro Manila, the center of urbanization, these kinds of advertisements are seen everywhere, especially along busy main thoroughfares. Like any other city, Metro Manila is subject to various elements that could affect the effectiveness of these huge advertisements. Given the billboards' proximity to fixed structures, the physical placement of the billboards is one of the first considerations. The placement of advertising materials has been treated as one of the main factors in the effectiveness of advertisements [16]. To assess this, the visibility of these billboards to a moving-vehicle passenger or a passerby is considered: if the billboards are not effectively seen by commuters, they do not serve their purpose. Moreover, this medium primarily targets commuters and people riding in cars. Therefore, billboards are considered effectively placed if the target audience can easily see or grasp the message of the advertisements in a short span of time [12].
The placement of logos and catchy phrases also contributes to the effectiveness of these advertisements. If a driver or passenger notices a huge billboard but cannot remember the logo or brand name of the product or service it conveys, the billboard may be rendered useless. Logos and phrases create the identity of a product and thus should be as visible as possible [1].
Several studies have ventured into the analysis of huge outdoor advertisements through photogrammetry and GIS [1, 18, 19]. However, few to none have ventured the use of structure from motion photogrammetry as a tool for visibility analysis.
From multiple overlapping images taken from different views, together with camera parameters, Structure-from-Motion (SfM) algorithms reconstruct 3D positions of points and camera poses in a common coordinate system [10]. Through structure-from-motion photogrammetry, the 3D geometry of billboards can be estimated to assess their visibility along a road of interest. In this study, the effectiveness of billboards, as assessed through their visibility along Guadalupe Bridge in EDSA Highway, is analyzed.
### Study Area
EDSA, or Epifanio de los Santos Avenue, is a 24-kilometer road that passes through six of the 16 cities of Metro Manila, namely Caloocan City, Quezon City, San Juan City, Mandaluyong City, Makati City, and Pasay City. It connects the northern and southern halves of Metro and Greater Manila. It passes through Makati, dubbed the "Financial Center of the Philippines", where the Philippine Stock Exchange is centered and where most of the country's top corporations and institutions have their main offices. The majority of the working class travel along it from north to south and vice versa.
More specifically, since this research is devoted to assessing the visibility effectiveness of the billboards along EDSA Highway, the study area has been narrowed down to Guadalupe Bridge, where billboards are densely concentrated and steadily rising in number. The bridge connects the cities of Makati and Mandaluyong, with a length of approximately 114.44 meters and a width of approximately 18.70 meters, measuring 3.35 meters per lane. Although there are 5 lanes on the bridge, only the 2-lane part is considered, so the effective width of the road is narrowed down to 6.70 meters.
The road on Guadalupe Bridge has 5 lanes, partitioned into a 3-lane and a 2-lane section, as shown in the figure above, captured from the acquired data. The 2-lane section, at the rightmost part, is the part of the bridge where the observer's (passenger's) point of view is defined.
### Objectives
This study aims to test a methodology for assessing the effectiveness of huge outdoor advertisements from structure-from-motion photogrammetry outputs. The geometries derived from the generated point cloud are used to obtain the visibility of billboard advertisements from viewpoints along the northbound side of Guadalupe Bridge in EDSA Highway, and to assess the visibility percentage of the logos contained in the billboard advertisements.
### Scope and Limitations
Data acquisition was done on a Sunday morning, approximately 9:00 to 10:00 am, when traffic is less congested than on a weekday, to avoid being stuck in traffic and to be able to acquire data under a consistent set of circumstances.
A sedan-type vehicle was used for the data acquisition, setting a standard passenger eye level of 1.08 m above the road. Furthermore, this study only considered the northbound section of EDSA along Guadalupe Bridge, specifically along the MRT railway. Since the contents of the billboards change over time, especially for LED-type billboards, only the contents displayed at the time of acquisition were considered for assessment. For the determination of visibility, only the sets of point clouds generated from the processing were used as the basis for generating the road, billboard, and obstruction geometries.
## 2 Methodology
### General Workflow
The figure below describes the general steps done in conducting this research.
### Data Acquisition
The video recordings were obtained using (a, b) two DSLR cameras held inside the car at the same height but oriented in two different directions, and (c) a GoPro Hero 3+ camera mounted on the front hood of the vehicle via a car suction mount. Two sets of videos were recorded with the GoPro: one angled such that only the road was recorded, and one angled such that both the road and the billboards were captured.
Nine billboards were identified and considered for analysis. The figure below shows the billboards and are named as follows.
### Data Pre-processing
#### 2.3.1 Point Cloud Generation
The first step in pre-processing is the extraction of images: a third-party software tool, Free Video to JPG Converter, was used to extract still images from the videos. The number of photos extracted per camera is shown below:
These images were processed in Agisoft Photoscan using the default settings to generate three sets of point clouds--one from each camera source.
#### 2.3.2 Point Cloud Processing
The three sets of point clouds generated separately were aligned, georeferenced, and cleaned using CloudCompare. CloudCompare is an open-source 3D point cloud editing and processing software originally designed for dense 3D point
\begin{table}
\begin{tabular}{|c|c|} \hline Camera & Number of images extracted \\ \hline a & 457 \\ \hline b & 342 \\ \hline c & 532 \\ \hline \end{tabular}
\end{table}
Table 1: Images extracted
Figure 4: Billboards considered
Figure 3: Video acquisition: (a, b) DSLR Camera held inside the car; and (c) GoPro camera mounted at hood of car
Figure 2: General workflow
cloud comparison but has been extended with various point cloud processing algorithms (CloudCompare Project Team, 2015). This software made it possible to merge and clean the three sets of point clouds. The merging was done with the Align tool by picking (at least 4) equivalent point pairs, while the cleaning was done with the Segment tool. Cleaning the point clouds of unwanted noise was necessary to clearly define the geometries of the billboards and the obstructions. For more convenient processing and analysis later on, the cleaned point cloud was reoriented so that the road directly faces north, and the properly merged and scaled final point cloud was split and exported into three LAS files: one each for the billboards, the road, and the obstructions. Moreover, for the analysis of logo placement, the logos on each billboard were segmented from the billboards LAS file into a separate LAS file.
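Aligning one point cloud onto another from picked point pairs, including scale, has a standard closed-form solution (Umeyama/Kabsch via SVD). The sketch below is a generic illustration of that idea under exact correspondences, not CloudCompare's internal implementation.

```python
import numpy as np

def align_point_pairs(src, dst, with_scale=True):
    """Estimate rotation R, scale s, translation t so that s * R @ p + t
    maps src points onto dst points in the least-squares sense.
    Closed-form Umeyama solution via SVD; needs >= 3 non-degenerate
    pairs (the text recommends picking at least 4)."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    n, dim = src.shape
    mu_s, mu_d = src.mean(0), dst.mean(0)
    S, D = src - mu_s, dst - mu_d
    cov = D.T @ S / n
    U, sig, Vt = np.linalg.svd(cov)
    d = np.sign(np.linalg.det(U) * np.linalg.det(Vt))  # guard vs. reflections
    diag = np.ones(dim); diag[-1] = d
    R = U @ np.diag(diag) @ Vt
    var_s = (S ** 2).sum() / n
    s = float(sig @ diag) / var_s if with_scale else 1.0
    t = mu_d - s * R @ mu_s
    return R, s, t
```

With exact (noise-free) pairs the similarity transform is recovered exactly; with noisy picked pairs it minimizes the sum of squared residuals, which is why picking more than the minimum number of pairs helps.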
#### 2.3.3 Billboard-road-obstruction geometry
From these sets of point clouds, a billboard-road-obstruction vector-based environment was generated from the final point cloud. To achieve this, the three LAS files were digitized according to three categorical features: billboards, road, and obstructions. In ArcScene, the LAS files were converted into multipoint features, and the extents of the road, billboards, and obstructions were manually digitized to produce three vector files.
#### 2.3.4 Observer and Target Points
After the geometry of the billboards was extracted by digitizing, and before proceeding with the visibility analysis, observer points were generated by dividing the road polygon into 100 equal portions using a fishnet (2 columns representing the 2 lanes of EDSA and 50 rows along the length of the road) and creating a point at each cell's centroid. The elevation of each point was raised by 1.08 m to simulate visibility from a passenger's point of view. Apart from the 100 observer points, 100 equally spaced target points positioned on the surface of each digitized billboard were also created, for a total of 900 target points per observer point on the road. To define the target points for the logos, the extents of the logos were intersected with the 100 target points on each billboard to select the n target points per billboard.
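The observer-point construction can be sketched as below, assuming the road is an axis-aligned rectangle after the point cloud was rotated to face north (dimensions from the text: roughly 114.44 m by 6.70 m; eye height 1.08 m).

```python
# Fishnet-centroid sketch of the observer points described above.
EYE_HEIGHT = 1.08  # passenger eye level above the road (from the text)

def observer_points(length=114.44, width=6.70, rows=50, cols=2, road_z=0.0):
    """Centroids of a rows x cols fishnet over the road, raised to eye level."""
    cell_l, cell_w = length / rows, width / cols
    return [((c + 0.5) * cell_w,   # across-road coordinate (lane)
             (r + 0.5) * cell_l,   # along-road coordinate
             road_z + EYE_HEIGHT)
            for r in range(rows) for c in range(cols)]
```

With the defaults this yields the 100 observer points (2 lanes x 50 rows) used in the analysis.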
### Analysis
To define the connection between the observer points and the target points, the Construct Sightline tool was used in ArcScene. It requires the observer points, the target features, the observer height, and the target height; the observer and target heights were computed by adding Z information to every point. The tool also adds a direction attribute for the sightline from each observer point to each target point.
To determine which target points on the billboards are visible from the observer points, the Intervisibility tool was used, with the constructed sightlines and the obstruction features as inputs. The obstructions comprise the billboard features (except the billboard being observed) and the obstruction polygon initially created from the point clouds. The Intervisibility tool adds a new field to the sightlines' attribute table with only two values, 0 and 1, for not visible and visible, respectively. To sort the visibility at every observer point, the field calculator was used to keep only the visible sightlines (field value = 1) that fall within the limits of a human eye along the road of interest, i.e. a vertical angle from -70 deg to 60 deg and an azimuth from 0 deg to 90 deg (the left side of the field of view is not considered).
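The angular filter applied to the visible sightlines can be sketched as follows. The azimuth convention here (measured clockwise from the road direction, +y, so that 0-90 deg covers straight ahead to the right) is an assumption chosen to match "left side of field of view not considered".

```python
import math

def within_field_of_view(observer, target):
    """Keep a sightline only if its vertical angle lies in [-70, 60] deg
    and its azimuth lies in [0, 90] deg (assumed convention: azimuth
    measured clockwise from the along-road +y axis)."""
    dx, dy, dz = (t - o for t, o in zip(target, observer))
    horiz = math.hypot(dx, dy)
    vertical = math.degrees(math.atan2(dz, horiz))
    azimuth = math.degrees(math.atan2(dx, dy)) % 360.0
    return -70.0 <= vertical <= 60.0 and 0.0 <= azimuth <= 90.0
```

A target ahead and to the right at moderate elevation passes the filter; targets to the left of the road or nearly overhead are rejected.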
### Assessment
#### 2.5.1 Mean Percent Visibility (MPV)
The overall visibility of the billboards along the road is tested and measured here. The MPV or Mean Percent Visibility of each billboard was computed based on Equation 1 below:
\[MPV=\frac{S_{\text{vis}}}{S_{\text{tot}}}\times 100\% \tag{1}\]
Where \(S_{\text{vis}}=\text{total number of visible targets}\) and \(S_{\text{tot}}=\text{total number of possible targets}\).
For each billboard, there are a total of 10,000 possible targets (100 observer points x 100 target points), and the total number of visible targets was determined by summing the visible targets over all observer points. The MPV was determined for the left lane, the right lane, and the whole extent of the road.
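Equation 1 reduces to a one-line computation over the 0/1 Intervisibility output:

```python
def mean_percent_visibility(visible_flags, n_possible=10000):
    """Equation 1: MPV = (visible / possible) * 100. visible_flags is the
    0/1 Intervisibility output over all observer-target sightlines of one
    billboard (100 observers x 100 targets = 10,000 by default)."""
    return 100.0 * sum(visible_flags) / n_possible
```

For instance, 9,661 visible sightlines out of 10,000 gives 96.61%, the both-lanes value reported for Billboard 1 in Table 2.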
#### 2.5.2 Visibility Raster of the Billboards
For each of the 100 observer points along the road, the number of visible target points was determined for each billboard; a maximum value of 100 corresponds to perfect visibility. With a percent visibility assigned to each observer point, visibility raster representations were produced by using the observer points as sample points and interpolating the visibility surface along the road with IDW. This yields nine (9) visibility rasters, represented with a color ramp from red (0% visibility) to green (100% visibility). Ten classes were used to normalize the percent visibility into breaks at 10% increments, making it possible to compare the visibility of the billboards at given parts of the road. Comparing the 9 visibility raster representations gives an idea of where along the road some or all of the billboards' contents are visible.
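The IDW interpolation used to build the visibility surface can be sketched as a weighted average of the sampled observer values, with weights falling off as an inverse power of distance (power 2 is an assumed default; the text does not state the exponent used):

```python
import math

def idw(samples, x, y, power=2.0, eps=1e-12):
    """Inverse-distance-weighted estimate at (x, y) from
    samples = [(sx, sy, value), ...], e.g. the 100 per-observer
    percent-visibility values along the road."""
    num = den = 0.0
    for sx, sy, v in samples:
        d = math.hypot(x - sx, y - sy)
        if d < eps:
            return float(v)  # exactly on a sample point
        w = d ** -power
        num += w * v
        den += w
    return num / den
```

Evaluating `idw` at every cell of a grid over the road polygon produces one visibility raster per billboard.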
#### 2.5.3 Mean Logo Percent Visibility (MLPV)
To assess the effectiveness of the placement of the product logos contained in the billboards, the Mean Logo Percent Visibility was obtained. It was computed using the same Equation 1; in this case, however, the total number of possible targets differs, since only the target points covered by the extent of a logo were considered for visibility analysis.
Figure 5: Rough diagram of observer and target points: (a) Observer points along left and right lane of road; and (b) Target points per billboard (blue and orange); Target points of logos (orange)
#### 2.5.4 Validation
For validation, the researchers conducted a survey using Google Forms. At the beginning of the survey, three videos were attached, and respondents were asked to watch them before proceeding. The numbering of the billboards was also illustrated in the survey. The videos consisted of sample recordings obtained during the researchers' data acquisition. Videos 1 and 2 were captured with a DSLR Canon 1100D to represent the standard view of a person seated in the passenger seat of a moving sedan-type vehicle: the first was captured from a vehicle in the right lane of the road, and the second from a vehicle in the left lane.
The first part of the survey concerns Percent Visibility. Respondents were asked what percentage of the time each billboard was visible throughout the video (while passing through Guadalupe Bridge); the choices ranged from 0% to 100% in increments of 10. The next part is the Logo Placement Assessment, in which each billboard was divided into 9 sections labeled A to I, as shown in the next figure. Respondents were then asked in which box they remembered seeing the logos. This was done for all nine billboards, even though 2 of them are LED boards.
## 3 Results and Discussion
### Point cloud
From the images extracted from the video acquisition, sets of point clouds were generated, as shown in Figure 7. From these, the aligned and cleaned point cloud was produced, as shown below. This final point cloud was the basis for generating the geometries of the billboards, road, and obstructions used in the visibility analysis.
### Billboard-road-obstruction geometry
The next figure shows the billboard-road-obstruction vector-based environment produced after digitizing the various feature sets. All of the billboards were digitized as rectangles, the road is represented by the gray rectangle, and the obstructions were digitized as green irregularly shaped figures. Figure 9 shows the digitized road, billboards, and obstructions, together with the corresponding 100 target points on each billboard and the 100 observer points along the road. Figure 10 shows the same environment with the target points for the logos.
### Assessment
#### 3.3.1 Mean Percent Visibility
Using equation 1, the Mean Percent Visibility for left, right, and both lanes were calculated. The table below summarizes the Mean Percent Visibility of each billboard as assessed along
Figure 10: Road and billboards polygons with the varying number of target points on each billboard
Figure 6: Validation surveys. (a) Percent Visibility Survey; (b) Logo Placement Assessment
Figure 7: Generated point clouds of (a) billboards; (b) road; and (c) obstructions
Figure 8: Final point cloud
Figure 9: Billboard-road-obstruction environment with observer and target pointseach lane and along the whole width of the two-lane road from the sets of observer points defined. The farther billboards relative to the road (Billboards 7-9), have lower MPV than billboards 1, 2, and 4 which are placed near the road. Moreover, these billboards have low MPV since their orientation is not directly facing the road of interest even though their size is bigger than the other billboards. In the case of billboards 3 and 5, even though they are placed relatively near the road, they attained a low MPV also because they are placed at a lower height as compared to other billboards; hence, they were completely blocked by the obstructions on the side of the road.
#### 3.3.2 Visibility Raster of the Billboards
Shown above are the road-visibility raster representations of the nine billboards, placed side by side for comparison. In general, the billboards nearer the road are more visible than those located farther away. Although Billboards 6, 7, 8, and 9 are bigger than Billboards 1, 2, 3, and 4, they are placed at a more distant location and are thus blocked by obstructions along the road: plants, bridge railings, and other, nearer billboards.
The visibility raster representations also show that obstructions are concentrated at the beginning and at the end of the road, while the middle part has relatively few to none. This matches the actual situation on the northbound section of Guadalupe Bridge. Another factor is the placement of the billboards relative to one another, since billboards may obstruct each other. This is evident in the visibility rasters of Billboards 6, 7, 8, and 9, which are placed behind Billboards 1, 2, 3, 4, and 5 and are blocked when passing through the middle portion of the road.
#### 3.3.3 Mean Logo Percent Visibility (MLPV)
The logo is the most important part of an advertisement: besides informing consumers of which brand is being endorsed, it also gives branding to the product. Therefore, the placement, sizing, and format of a logo should be planned well for it to serve its purpose effectively and efficiently.
As seen in Table 3, Billboards 1 and 2 attained 100% logo percent visibility, while Billboards 6, 8, and 9 attained logo percent visibilities lower than 30%. The latter are among the larger billboards. A possible explanation is that, despite their size, the logos of Billboards 6, 8, and 9 occupy only a small space on the billboards and are too small to be seen from the observer points, and therefore by the target market, the passengers in a moving vehicle.
#### 3.3.4 Validation survey results
The validation survey had 40 respondents, and the mean percent visibility was obtained by averaging their answers. Based on the survey results, shown in the table below, Billboards 2, 4, 5, 6, and 7 attained high visibility percentages, in the range of 71% to 79%, while Billboards 1, 3, 8, and 9 attained low visibility percentages, in the range of 39% to 52%. Even though these percent visibilities from the survey are subject to personal factors, their proportionality with the computed results validates the MPV determined from the visibility analysis, with the exception of Billboard 1. Given the limitation that survey respondents were asked to watch a video of traversing the road instead of passing through the road themselves, the disagreement for Billboard 1 is most likely due to the angle of the camera during the acquisition of Videos 1 and 2.
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline Billboard & Left Lane & Right Lane & Both Lanes \\ \hline
1 & 97.32 & 95.90 & 96.61 \\ \hline
2 & 99.42 & 94.00 & 96.71 \\ \hline
3 & 30.00 & 52.08 & 41.04 \\ \hline
4 & 97.10 & 79.14 & 88.12 \\ \hline
5 & 57.66 & 38.86 & 48.26 \\ \hline
6 & 66.44 & 41.08 & 53.76 \\ \hline
7 & 5.86 & 74.04 & 39.95 \\ \hline
8 & 32.64 & 24.82 & 28.73 \\ \hline
9 & 34.74 & 25.22 & 28.98 \\ \hline \end{tabular}
\end{table}
Table 2: Mean Percent Visibility
Figure 11: Visibility raster of each billboard along the road
\begin{table}
\begin{tabular}{|c|c|} \hline Billboards & MLPV \\ \hline
1 & 100.00 \\ \hline
2 & 100.00 \\ \hline
3 & 35.53 \\ \hline
4 & 90.11 \\ \hline
5 & 86.50 \\ \hline
6 & 28.33 \\ \hline
7 & 60.00 \\ \hline
8 & 22.67 \\ \hline
9 & 24.00 \\ \hline \end{tabular}
\end{table}
Table 3: Mean Logo Percent Visibility (MLPV)

A similar comparison was performed to validate the MLPV. In this case, however, instead of asking respondents for percent visibilities, they were asked to identify the location of the logos from defined area divisions on a billboard. Table 5 clearly shows that the survey results contradicted the results of the visibility analysis. Even when logos were small and poorly placed on a billboard, viewers still recognized them and identified their locations. This can be accounted for by the respondents' familiarity with the products. Since the product displayed on billboards 5-9 was a popular one in the country, respondents were able to remember its logo location, in contrast to other billboards whose logos were appropriately sized and spaced but advertised less popular and less striking products. Nonetheless, this study focuses only on the visibility of the billboards and their logos; cognitive analysis of the advertisements was not considered here.
## 4 Conclusions and Recommendations
### Conclusions
This research aimed to test the application of structure-from-motion photogrammetry to assess the effectiveness of huge outdoor advertisements through visibility analysis. From the analyses performed, it was determined that the mean percent visibilities of the nine billboards as seen along Guadalupe Bridge on EDSA Highway ranged from 28.73% to 96.71%. Moreover, the mean percent visibilities of the product logos on each billboard were also determined, to assess whether the logos were properly placed on the billboard. Four of the nine billboards had a mean logo percent visibility of less than 50%, denoting ineffective and poor logo placement in terms of visibility. These percent visibilities were also visualized through visibility raster representations per billboard.
A validation survey was conducted to confirm the analysis performed. For the most part, the validation confirmed the visibility analysis; however, due to limitations in the conduct of the survey itself, as well as the respondents' familiarity with the area not being accounted for, the results varied for some billboards. In terms of the analysis of the logos, the validation survey did not confirm the results of the visibility analysis at all. This may be due to the respondents' prior knowledge of and familiarity with the area, and to the popularity and design of the product logos observed.
The placement and orientation of the billboards, as well as the placement of their logos, are vital to the billboards' effectiveness for advertising purposes. Through structure-from-motion photogrammetry, visibility analysis can be performed on large outdoor advertisements, allowing advertisers to strategically identify prime locations for advertisements as well as the design and placement of their contents.
### Recommendations
This research relied solely on data acquired from handheld cameras. The researchers recommend additional acquisition methods, such as UAVs, to produce a denser point cloud of the environment for better visibility analysis. They also recommend using other types of vehicles, which may result in different standard passenger eye-level heights. Moreover, since this study considered only the two rightmost lanes of the 5-lane road on the bridge, the researchers recommend obtaining the visibility of each billboard from the remaining, leftmost lanes as well. Also, since there are other roads in Guadalupe, e.g. oriented perpendicular to the northbound lanes of EDSA, these could also be considered to compare billboard visibilities, because the billboards can be seen from those points of view as well.
This research could be further improved by looking at the valuation aspect of the billboards. In assessing billboard effectiveness, the cost of ad placement should be considered not only in terms of size but also with placement and orientation as factors.
## References
isprs | VISIBILITY ANALYSIS OF HUGE OUTDOOR ADVERTISEMENTS ALONG GUADALUPE BRIDGE IN EDSA HIGHWAY FROM STRUCTURE-FROM-MOTION PHOTOGRAMMETRY | M. N. Manansala, R. M. Ong, K. A. Vergara | https://doi.org/10.5194/isprs-archives-xlii-4-w16-385-2019 | 2019 | CC-BY

isprs/48261e50_050c_4970_b8a2_507729ddabca.md
Scaling up Sagebrush Chemistry with Near-Infrared Spectroscopy and UAS-acquired Hyperspectral Imagery
[PERSON],*
[PERSON]
[PERSON]
[PERSON]
[PERSON]
[PERSON]
[PERSON]
[PERSON]
[PERSON]
[PERSON]
[PERSON]
###### Abstract
Sagebrush ecosystems (_Artemisia_ spp.) face many threats including large wildfires and conversion to invasive annuals, and thus are the focus of intense restoration efforts across the western United States. Specific attention has been given to restoration of sagebrush systems for threatened herbivores, such as Greater Sage-Grouse (_Centrocercus urophasianus_) and pygmy rabbits (_Brachylagus idahoensis_), reliant on sagebrush as forage. Despite this, plant chemistry (e.g., crude protein, monoterpenes and phenolics) is rarely considered during reseeding efforts or when deciding which areas to conserve. Near-infrared spectroscopy (NIRS) has proven effective in predicting plant chemistry under laboratory conditions in a variety of ecosystems, including the sagebrush steppe. Our objectives were to demonstrate the scalability of these models from the laboratory to the field, and in the air with a hyperspectral sensor on an unoccupied aerial system (UAS). Sagebrush leaf samples were collected at a study site in eastern Idaho, USA. Plants were scanned with an ASD FieldSpec 4 spectroradiometer in the field and laboratory, and a subset of the same plants were imaged with a SteadiDrone Hexacopter UAS equipped with a Rikola hyperspectral sensor (HSI). All three sensors generated spectral patterns that were distinct among species and morphotypes of sagebrush at specific wavelengths. Lab-based NIRS was accurate for predicting crude protein and total monoterpenes (R\({}^{2}\) = 0.7-0.8), but the same NIRS sensor in the field was unable to predict either crude protein or total monoterpenes (R\({}^{2}\) \(<\) 0.1). The hyperspectral sensor on the UAS was unable to predict most chemicals (R\({}^{2}\) \(<\) 0.2), likely due to a combination of too few bands in the Rikola HSI camera (16 bands), the range of wavelengths (500-900 nm), and the small sample size of overlapping plants (n = 28-60).
These results show both the potential for scaling NIRS from the lab to the field and the challenges in predicting complex plant chemistry with hyperspectral UAS. We conclude with recommendations for next steps in applying UAS to sagebrush ecosystems with a variety of new sensors.
## 1 Introduction
Sagebrushes (_Artemisia_ spp.) are the dominant vegetation covering over 40 million ha of the western United States ([PERSON] et al., 2018), but have declined due to increased wildfires, conversion to cheatgrass (_Bromus tectorum_), and juniper (_Juniperus_ spp.) encroachment. Sagebrush are an important source of food and cover for wildlife and livestock. For example, Greater Sage-Grouse (_Centrocercus urophasianus_) and pygmy rabbits (_Brachylagus idahoensis_) specialize on sagebrush, which comprises as much as 99% of their winter diet ([PERSON] and [PERSON], 1975; [PERSON] and [PERSON], 1980). Sagebrush leaves contain a complex mixture of plant chemicals to protect against herbivory, including volatile monoterpenes and phenolics, but are also a good source of crude protein. This chemistry is highly variable among and within sites ([PERSON], 2020; [PERSON] et al., 2020) and influences diet and habitat selection by wild herbivores at varying spatial scales ([PERSON] et al., 2013; [PERSON] et al., 2014; [PERSON] et al., 2020).
To better understand plant-herbivore interactions, we need to map this plant chemistry across the landscape. The broad distribution of sagebrush across the western United States has been coarsely mapped (e.g., LANDFIRE, GAP, NLCD), but these maps are at 30-m to 500-m spatial resolution and do not track finer-scale patterns in distinct species with phytochemical traits that matter to herbivores ([PERSON] et al., 2021). Several remote sensing techniques show promise in filling the gap between broad-scale distribution maps and plant-scale chemistry. One sensor technology for predicting plant- and leaf-scale chemistry is near-infrared spectroscopy (NIRS). The spectral signatures measured with NIRS depend on the number and type of C--H, N--H and O--H chemical bonds, and can be related to plant defensive and nutritional chemistry ([PERSON] et al., 1998; [PERSON] et al., 2010; [PERSON], 2020).
Unoccupied aerial systems (UAS) have emerged as a viable option for habitat mapping of vegetation and chemical traits at moderately large extents ([PERSON] and [PERSON], 2013; [PERSON] et al., 2018). Additionally, UAS can mount a variety of sensors such as multispectral, thermal, and hyperspectral cameras ([PERSON] et al., 2017; [PERSON] et al., 2019; [PERSON] and [PERSON], 2020), and are flexible, cheap, and mobile to deploy across the landscape ([PERSON] and [PERSON], 2012). Previous work has shown UAS-based sensors can map shrub structure ([PERSON] et al., 2016; [PERSON] et al., 2018), but relatively little work has been done to predict phytochemicals in sagebrush. Recent attempts at landscape mapping of diet quality involved classifying sagebrush structural morphotypes with unique chemical profiles, but relied on regression kriging ([PERSON] et al., 2020), a type of spatial interpolation requiring a large amount of leaf sampling and laboratory analysis that does not directly predict plant chemical concentrations. NIRS ([PERSON] et al., 2016; [PERSON], 2020) and airborne hyperspectral sensors ([PERSON] et al., 2012b) have potential to link near- and short-wavelength infrared signals to plant chemistry. Recent technological advances have miniaturized hyperspectral sensors and allowed for UAS platforms to capture high-resolution imagery at these longer wavelengths.
In this study, our objective was to evaluate NIRS and hyperspectral UAS for predicting plant chemistry in sagebrush and classifying sagebrush species and morphotypes. To accomplish this, we generated equations for plant chemistry with near-infrared spectroscopy collected in both the lab and the field. Next, we tested whether a UAS-based hyperspectral sensor could predict those same plant chemicals across landscapes.
## 2 Methods
### Study Site
We conducted research at the "Cedar Gulch" study site (lat 44°41'57"N, long 113°17'12"W, elevation 1885-1925 m), a ~155 ha area near Leadore, Idaho, in Lemhi County (Figure 1). Average temperatures were -6.9 °C in January and 14.9 °C in June, and the site received 32.8 cm of precipitation annually (WRCC, 2016). The dominant vegetation at Cedar Gulch was Wyoming big sagebrush (_A. t. wyomingensis_), which occurs both on mounds with relatively deeper soils (on-mound), where individual plants are large, and in short-statured "dwarf" patches of sagebrush in the matrix between mounds, where the soil is shallower. The dwarf patches were primarily low-growing Wyoming big sagebrush (dwarf Wyoming) mixed with black sagebrush (_A. nova_). These morphotypes differed in structural characteristics ([PERSON] et al., 2018), thermal properties ([PERSON] et al., 2018), and forage quality ([PERSON] et al., 2020).
### Near-infrared Reflectance Spectroscopy
The ASD FieldSpec 4 spectroradiometer was used to measure continuous near infrared wavelength reflectance from 350 nm to 2500 nm in all the sagebrush samples under both laboratory and field conditions. In the lab, each ground dried sagebrush sample was placed in a sealed clear plastic bag and spread homogeneously on a black countertop with no countertop surface visible through the biomass. After calibrating and optimizing the ASD FieldSpec 4 to a pure white reflectance spectralon plate according to standard protocol in the user manual, it was then used to measure the reflectance of each sagebrush sample. Thirty replicate scans were collected for each sample. The instrument was recalibrated and optimized every 15 samples. In the field, we used an 8-degree FOV attachment held 0.5 m above the plant during each scan leading to a footprint of approximately 7 cm with white reflectance calibration every 5 scans or after every other scan if light conditions were changing.
### Unoccupied Aerial System Flights
A portion of the study area was flown using a SteadiDrone Hexacopter UAS (SteadiDrone, Cape Town, South Africa) in June 2016. Four flights were conducted at a flight height of 25 m for approximately 20 minutes each, covering 0.36-0.45 ha (Figure 1). We collected hyperspectral imagery of each flight area using the Rikola HSI (Senop Oy, Oulu, Finland) hyperspectral camera. The Rikola HSI camera collects spectra for each pixel within the range of 500-900 nm, with 16 programmable bands at any increment within that range ([PERSON] et al., 2018a). For this study, we used a band combination from 550-849 nm (~20 nm increments).
Images acquired from the flights were pre-processed using the camera manufacturer software. Noise and vignetting were removed for image clarity, and digital number values (DN) were converted to radiance (W/(m\({}^{2}\) x sr x nm)) ([PERSON] et al., 2017; [PERSON] et al., 2018a). The Rikola HSI software aligns each image, but we found that the imagery had too much shift between bands for the images to align properly ([PERSON] et al., 2018b). This shift was caused by the movement of the drone and an approximately 10 ms delay between the camera shooting each band. Therefore, we photogrammetrically processed each band individually by flight using Agisoft Metashape (Agisoft LLC, St. Petersburg, Russia). Processing for each flight included ground control placement, image mosaicking, point cloud generation, digital surface model generation, and aligning chunks to create a 16-band orthomosaic.
After the hyperspectral orthomosaics were created, we used GPS points acquired from previous field surveys to identify individual plants and species from the images. Only plants with associated chemical data were used in the analysis. We extracted pixels representing unmixed spectral signatures of leaves and averaged by plant. After spectra were extracted, the values were standardized. Minimum values for each flight were calculated using the values closest to the lower 0.05%, and maximum values were calculated using the values closest to the upper 0.05% of the range of values. After maximum and minimum values were determined for each flight, the spectra were standardized with (x-min)/(max-min), where \(x\) is the value of the spectra.
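The per-flight spectral standardization described above can be sketched as follows. This is a minimal NumPy illustration; the function name `standardize_flight` and the percentile-based approximation of the "closest to the lower/upper 0.05% of the range" rule are ours, not from the paper:

```python
import numpy as np

def standardize_flight(spectra):
    """Min-max standardize UAS spectra from a single flight.

    `spectra` is an (n_pixels, n_bands) array. Following the text, the
    minimum is taken near the lower 0.05% and the maximum near the upper
    0.05% of the range of values (approximated here with percentiles),
    then every value is rescaled with (x - min) / (max - min).
    """
    lo = np.percentile(spectra, 0.05)
    hi = np.percentile(spectra, 99.95)
    return (spectra - lo) / (hi - lo)
```

Using extreme percentiles rather than the raw minimum and maximum makes the standardization robust to a handful of outlier pixels in each flight mosaic.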
### Lab Chemistry
After field NIRS scans and UAS flights were completed, we clipped leaf samples from each plant and kept the samples on ice until stored at -20 °C in the lab for later analysis. Leaves and stems were ground in liquid nitrogen (~2 mm) and immediately subsampled for crude protein and monoterpene analysis. For crude protein, a subset of 1-2 g of ground sample was dried at 64 °C to a constant dry weight (at least 48 h) and analysed for total nitrogen content at Dairy One Forage Labs (Ithaca, NY). Total nitrogen (%) values were converted to crude protein by multiplying each value by 6.25 ([PERSON], 1983). For monoterpenes, a subset of 100 mg of sample was transferred to a headspace vial and analysed using headspace gas chromatography (Agilent 7694 Headspace Sampler, Agilent 6890 Series GC). See [PERSON] (2020) and [PERSON] et al. (2020) for more details on chemical analysis.
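The nitrogen-to-protein conversion above is a single multiplication; as a trivial sketch (the function name is ours):

```python
def crude_protein(total_nitrogen_pct):
    """Convert total nitrogen (%) to crude protein (%) using the
    standard 6.25 conversion factor (Robbins, 1983)."""
    return total_nitrogen_pct * 6.25
```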
### Statistical Analyses
We performed all statistical analyses with Camo Analytics Unscrambler chemometric software (Montclair, NJ, USA). For the laboratory and field collected NIRS datasets, the thirty replicate reflectance scans were checked for outliers with Unscrambler's outlier detection algorithm and averaged to one spectral profile per sample. For laboratory samples, we converted each spectrum to absorbance values using a log\({}_{10}\)(1/\(R\)) transformation, where \(R\) is reflectance. Spectral absorbance values were transformed by taking a 1\({}^{\mathrm{st}}\) gap derivative every 1 nm. Laboratory spectra were truncated from 450 nm to 2350 nm. The distributions of response variables were checked for normality for all field and laboratory ASD samples. Unscrambler was then used to analyse spectra using partial least squares regressions (PLSR) between NIR spectral values (i.e., predictor variables) and plant chemistry (i.e., response variables) to produce NIRS-predicted chemistry. Each model was independently calibrated and validated using 20-fold cross-validation and results were downweighted to prevent overfitting of the models. The UAS samples were not downweighted, and leave-one-out cross-validation was utilized for the PLSR validation instead of 20-fold cross-validation.

Figure 1: Flight footprints with inset map showing location in Idaho, USA.
## 3 Results
Overall, lab-based NIRS predicted plant chemistry more accurately than field-based NIRS. Crude protein was predicted best with lab-based NIRS (r\({}^{2}\) = 0.79), but poorly with field-based NIRS (r\({}^{2}\) = 0.03) and UAS-based hyperspectral (r\({}^{2}\) = 0.00). For both lab-based NIRS and UAS-based hyperspectral, total monoterpenes were predicted better than individual monoterpenes (Table 1, Figure 2).
Hyperspectral UAS showed promise in differentiating species (black sagebrush from Wyoming big sagebrush) and morphotypes within a species (dwarf Wyoming big sagebrush from large Wyoming big sagebrush) (Figure 3). There was consistent distinction between morphotypes within a species (large and dwarf Wyoming) and between species (Wyoming and black) at specific wavelengths (Figure 3f, R1-R3). The two Wyoming morphotypes were more spectrally similar to each other than to black sagebrush, and species with similar morphotypes in a shared spatial context (between mounds) were consistently differentiated.
have more success predicting crude protein and other plant chemistry across the landscape.
Total monoterpenes were predicted well by lab-based NIRS (r\({}^{2}\) = 0.69), and the UAS hyperspectral camera (r\({}^{2}\) = 0.24) performed better than regression kriging at Cedar Gulch reported by [PERSON] et al. (2020) (r\({}^{2}\) < 0.1). [PERSON] and [PERSON] (2015) detected an absorption feature at 1.63 um and attributed it to C--H bonds on phenols and aromatics such as terpenoids, suggesting that a camera with SWIR capabilities could better detect and predict sagebrush plant chemistry, and explaining the better prediction with lab-based NIRS. The signal was weaker in wet leaves compared to dry leaves ([PERSON] and [PERSON], 2015), matching the results seen here with lab- versus field-based NIRS and previous work in the lab by [PERSON] et al. (2016).
Despite our finding that the Rikola HSI camera was unable to predict crude protein or total monoterpenes, it showed potential for classifying sagebrush species and morphotypes. These sagebrush species may be hard to distinguish based on structure from the ground or in the air and are often misclassified in land cover maps from satellite images ([PERSON] et al., 2021); however, these species have important differences in phytochemicals and potential use by herbivores ([PERSON] et al., 2013).
Next steps involve testing a hyperspectral sensor with 274 bands over a similar wavelength range (Headwall NIR + LiDAR), and another sensor with bands into the SWIR (Headwall NIR + SWIR). Future work should take advantage of larger sample sizes, sites with more chemical diversity, and a better balance between species and chemotypes. Additionally, the continued success of NIRS in lab environments shows potential for scaling to the field to classify sagebrush and predict chemistry ([PERSON], 2020). We recommend using existing datasets (either NIRS or lab-based chemistry) to determine which bands are most important for the goal at hand (i.e., differentiation between chemotypes or predicting a chemical of interest) and to select the sensor best suited to that purpose. Alternatively, UAS could be used by managers in exploratory work to determine what is differentiable from the air and to decide what should be sampled on the ground to test whether these spectral differences are chemical or physical (e.g., soil related), or whether sites may contain hybrid zones. In this way, UAS could iteratively serve as a tool for adaptive management in a changing world.
## Acknowledgements
Thanks to [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON] for data collection and processing. Funding support from NSF (EPSCoR OIA-175324, OIA-1826801 and DEB-1146194 to JSF, DEB-1146368 to LAS, DEB-114616 to JLR), BLM (L16 AC00137 to JSF), USDA NIFA (Hatch Project 1005876 to LAS). AmericaView, U.S. Geological Survey under Grant/Cooperative Agreement No. G18 AP00077 to DMD.
## References
* [PERSON] et al. (2017) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], 2017. Hyperspectral imaging: A review on UAV-based sensors, data processing and applications for agriculture and forestry. _Remote Sensing_, 9(11), 1110. doi.org/10.3390/rs9111110
* [PERSON] and [PERSON] (2013) [PERSON], [PERSON], 2013. Lightweight unmanned aerial vehicles will revolutionize spatial ecology. _Frontiers in Ecology and the Environment_, 11(3), 138-146. doi.org/10.1890/120150
* [PERSON] et al. (2016) [PERSON], [PERSON], [PERSON], 2016. Ultra-fine grain landscape-scale quantification of dryland vegetation structure with drone-acquired structure-from-motion photogrammetry. _Remote Sensing of Environment_, 183, 129-143. doi.org/10.1016/j.rse.2016.05.019
* [PERSON] et al. (1998) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], 1998. Ecological applications of near infrared reflectance spectroscopy--a tool for rapid, cost-effective prediction of the composition of plant and animal tissues and aspects of animal performance. _Oecologia_, 116(3), 293-305.
* [PERSON] et al. (2020) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], 2020. Winter foraging ecology of Greater Sage-Grouse in a post-fire landscape. _Journal of Arid Environments_, 178, 104154. doi.org/10.1016/j.jaridenv.2020.104154
* [PERSON] et al. (2021) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], 2021. Assessing accuracy of GAP and LANDFIRE land cover datasets in winter habitats used by greater sage-grouse in Idaho and Wyoming, USA. _Journal of Environmental Management_, 280, 111720. doi.org/10.1016/j.jenvman.2020.111720
* [PERSON] et al. (2013) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], 2013. Phytochemistry predicts habitat selection by an avian herbivore at multiple spatial scales. _Ecology_, 94(2), 308-314. doi.org/10.1890/12-1313.1
* [PERSON] et al. (2019) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], 2019. Challenges and future perspectives of multi-/Hyperspectral thermal infrared remote sensing for crop water-stress detection: A review. _Remote Sensing_, 11(10), 1240. doi.org/10.3390/rs11101240
* [PERSON] and [PERSON] (1980) [PERSON], [PERSON], 1980. Habitat and dietary relationships of the pygmy rabbit. _Journal of Range Management_, 33(2), 136-142. doi.org/10.2307/3898429
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|} \hline & & & & \multicolumn{3}{c|}{Calibration} & \multicolumn{3}{c|}{Cross-validation} \\ \cline{4-12} \multicolumn{1}{|c|}{Phytochemical} & Instrument & n & RMSE\({}_{\text{C}}\) & SEC & r\({}^{2}\) & B\({}_{1}\) & B\({}_{0}\) & RMSE\({}_{\text{C}}\) & SEC\({}_{\text{V}}\) & r\({}^{2}\) \\ \hline Crude protein (\%) & Lab NIRS & 236 & 0.96 & 0.96 & 0.82 & 0.80 & 2.7 & 1.03 & 1.04 & 0.79 \\ & Field NIRS & 40 & 2.16 & 2.19 & 0.12 & 0.14 & 11.0 & 2.44 & 2.47 & 0.03 \\ & UAS & 28 & 2.39 & 2.44 & 0.16 & -0.01 & 13.1 & 3.47 & 3.54 & 0.00 \\ \hline Total monoterpenes & Lab NIRS & 234 & 110 & 110 & 0.68 & 0.61 & 160 & 125 & 126 & 0.59 \\ (AUC/mg DW) & Field NIRS & 43 & 193 & 196 & 0.20 & 0.12 & 298 & 214 & 217 & 0.06 \\ & UAS & 31 & 166 & 169 & 0.44 & 0.38 & 273 & 203 & 206 & 0.24 \\ \hline \end{tabular}
\end{table}
Table 1: Calibration and validation statistics for near-infrared reflectance spectroscopy (NIRS) prediction of sagebrush phytochemistry at the Cedar Gulch study site in Idaho, USA.
[PERSON], [PERSON], [PERSON], 2017. The need for accurate geometric and radiometric corrections of drone-borne hyperspectral data for mineral exploration: Mephysto--A toolbox for pre-processing drone-borne hyperspectral data. _Remote Sensing_, 9(1), 88. doi.org/10.3390/rs9010088
* [PERSON] and [PERSON] (2012) [PERSON], Wich, S.A. 2012. Dawn of drone ecology: low-cost autonomous aerial vehicles for conservation. _Tropical Conservation Science_, 5(2), 121-132. doi.org/10.1177/194008291200500202
* [PERSON] (2015) [PERSON], Skidmore, A.K., 2015. Plant phenolics and absorption features in vegetation reflectance spectra near 1.66 µm. _International Journal of Applied Earth Observation and Geoinformation_, 43, 55-83. doi.org/10.1016/j.jag.2015.01.010
* [PERSON] et al. (2018) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], 2018. On the use of unmanned aerial systems for environmental monitoring. _Remote Sensing_, 10(4), 641. doi.org/10.3390/rs10040641
* Messina and Modica (2020) Messina, G., Modica, G., 2020. Applications of UAV thermal imagery in precision agriculture: State of the art and future research outlook. _Remote Sensing_, 12(9), 1491. doi.org/10.3390/rs12091491
* [PERSON] et al. (2018) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], 2018. Habitat structure modifies microclimate: an approach for mapping fine-scale thermal refuge. _Methods in Ecology and Evolution_, 9(6), 1648-1657. doi.org/10.1111/2041-210X.13008
* [PERSON] et al. (2012a) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], 2012a. Remote sensing of sagebrush canopy nitrogen. _Remote Sensing of Environment_, 124, 217-223. doi.org/10.1016/j.rse.2012.05.002
* [PERSON] et al. (2012b) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], 2012b. Spectroscopic detection of nitrogen concentrations in sagebrush. _Remote Sensing Letters_, 3(4), 285-294. doi.org/10.1080/01431161.2011.580017
* [PERSON] et al. (2010) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], 2010. Palatability mapping: a koala's eye view of spatial variation in habitat quality. _Ecology_, 91(11), 3165-3176. doi.org/10.1890/09-1714.1
* [PERSON] et al. (2018a) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], 2018a. Ultra-light aircraft-based hyperspectral and colour-infrared imaging to identify deciduous tree species in an urban environment. _Remote Sensing_, 10(10), 1668. doi.org/10.3390/rs10101668
* [PERSON] et al. (2018b) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], 2018b. Imaging from manned ultra-light and unmanned aerial vehicles for estimating properties of spring wheat. _Precision Agriculture_, 19(5), 876-894. doi.org/10.1007/s1119-018-9562-9
* [PERSON] et al. (2016) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], 2016. Nutritional analysis of sagebrush by near-infrared reflectance spectroscopy. _Journal of Arid Environments_, 134, 125-131. doi.org/10.1016/j.jaridenv.2016.07.003
* [PERSON] et al. (2018) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], 2018. Unmanned aerial systems measure structural habitat features for wildlife across multiple scales. _Methods in Ecology and Evolution_, 9(3), 594-604. doi.org/10.1111/2041-210X.12919
* [PERSON] et al. (2020) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], 2020. Mapping foodscapes and sagebrush morphotypes with unmanned aerial systems for multiple herbivores. _Landscape Ecology_, 35, 921-936. doi.org/10.1007/s10980-020-00990-1
* [PERSON] et al. (2018) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], 2018. Multi-model comparison highlights consistency in predicted effect of warming on a semi-arid shrub. _Global Change Biology_, 24, 424- 438. doi.org/10.1111/gcb.13900
* [PERSON] (2020) [PERSON], B.C., 2020. Thesis. Spectral fingerprints predict functional phenotypes of a native shrub. doi.org/10.18122/d/1715/boisestate
* [PERSON] (1983) [PERSON], 1983. Wildlife feeding and nutrition. Academic Press Inc., New York.
* [PERSON] et al. (2014) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], 2014. Plant protein and secondary metabolites influence diet selection in a mammalian specialist herbivore. _Journal of Mammalogy_, 95(4), 834-842. doi.org/10.1644/1-MAMAM-A-025
* [PERSON] and Eng (1975) [PERSON], [PERSON], 1975. Foods of adult sage grouse in central Montana. _The Journal of Wildlife Management_, 39(3), 628-630. doi.org/10.2307/3800409
* Western Regional Climate Center (WRCC) (2016) Western Regional Climate Center (WRCC), (2016). Cooperative climatological data summaries. wrcc.dri.edu/summary/Climmsmsid.html (21 February 2018).
# Mapping Disturbance Dynamics in Wet Sclerophyll Forests Using Time Series Landsat
[PERSON], [PERSON], [PERSON]
EU REDD Facility, European Forest Institute, Asia Regional Office, c/o Embassy of Finland, 5th Floor, Wisma Chinese Chamber, 258 Jalan Ampang, Kuala Lumpur 50450, Malaysia - [EMAIL_ADDRESS]
1 Laboratory of Geo-information Science and Remote Sensing, Wageningen University, P.O. Box 47, 6700 AA Wageningen, Netherlands - [EMAIL_ADDRESS]
2 Department of Forest Ecosystem Science, The University of Melbourne, 4 Water Street, Creswick, VIC 3363, Australia [EMAIL_ADDRESS]
###### Abstract
In this study, we characterised the temporal-spectral patterns associated with identifying acute-severity disturbances and low-severity disturbances between 1985 and 2011, with the objective of testing whether different disturbance agents within these categories can be identified with annual Landsat time series data. We analysed a representative State forest within the Central Highlands which has been exposed to a range of disturbances over the last 30 years, including timber harvesting (clearfall, selective and thinning) and fire (wildfire and prescribed burning). We fitted spectral time series models to annual normalised burn ratio (NBR) and Tasseled Cap Indices (TCI), from which we extracted a range of disturbance and recovery metrics. With these metrics, three hierarchical random forest models were trained to 1) distinguish acute-severity disturbances from low-severity disturbances; 2a) attribute the most likely disturbance agents within the acute-severity class; and 2b) attribute the most likely disturbance agents within the low-severity class. Disturbance types (acute severity and low-severity) were successfully mapped with an overall accuracy of 72.9%, and the individual disturbance types were successfully attributed with overall accuracies ranging from 53.2% to 64.3%. Low-severity disturbance agents were successfully mapped with an overall accuracy of 80.2%, and individual agents were successfully attributed with overall accuracies ranging from 25.5% to 95.1%. Acute-severity disturbance agents were successfully mapped with an overall accuracy of 95.4%, and individual agents were successfully attributed with overall accuracies ranging from 94.2% to 95.2%. Spectral metrics describing the disturbance magnitude were more important for distinguishing the disturbance agents than the post-disturbance response slope. Spectral changes associated with planned burning disturbances generally had lower magnitudes than those associated with selective harvesting.
This study demonstrates the potential of Landsat time series mapping for fire and timber harvesting disturbances at the agent level and highlights the need for distinguishing between agents to fully capture their impacts on ecosystem processes.
Footnote †: Corresponding author
Disturbance records for the region are held in the State Fire History Database (SFHD) (Department of Environment Land Water and Planning, 2015a) and the State Logging History Database (SLHD) (Department of Environment Land Water and Planning, 2015b). These databases have employed a range of methods to document and map fire and logging disturbances. Unfortunately, both of these databases have significant documented positional and attributional limitations (Department of Sustainability and Environment, 2009a; GHD, 2012; [PERSON] and [PERSON], 2015). These limitations may be overcome by mapping disturbances using Landsat time-series data. It is hoped that this will increase the knowledge and understanding of the landscape-scale causes and consequences of both natural and anthropogenic disturbances within these forests and better inform the debate.
Previous studies have shown that Landsat's spectral bands can be used to discriminate fire severity and logging intensity in wet sclerophyll forests in South-East Australia. Victorian studies utilising Landsat for fire severity mapping (Department of Sustainability and Environment, 2009b, 2007, 2003; [PERSON] and [PERSON], 2009) or timber harvesting ([PERSON] et al., 2013; [PERSON] et al., 1994; [PERSON] and [PERSON], 1988) have used spectral information from one or two images. However, approaches based on single years or binary maps are often restricted in their ability to characterise the complex dynamics between wildfire, climate change and timber harvesting. Thus a more comprehensive mapping approach utilising longer time series and characterising the disturbance magnitude and duration would be beneficial.
Following the opening of the United States Geological Survey (USGS) Landsat archive and the related increase in capacity to produce time series ([PERSON] et al., 2012), Landsat time series have increasingly been used at the regional scale to map a range of disturbances (timber harvesting, wildfires and insect outbreaks) using pixel-based time-series methods ([PERSON] et al., 2014). The adoption of these techniques by Australian forest agencies has been limited. This has been partly due to the computational complexity of some of the procedures, use of proprietary software and the empirical nature of the customised requirements such as the trial and error basis for determining the optimal parameterization for the segmentation of the pixel time series (e.g. ([PERSON] et al., 2007)). Nevertheless, with increasing availability of Landsat imagery and cloud computing ([PERSON] and [PERSON], 2014), coupled with diminishing availability of skilled photo interpreters ([PERSON] and [PERSON], 2011), there is increasing interest in South-East Australia for analysing pixel time series to better understand the ecological dynamics of fire, logging and their interactions within wet sclerophyll forests.
As significant research remains to be done before fully automated landscape level forest disturbance mapping can be achieved, the general approach adopted here has been to develop a semi-automated pixel-time-series-based method which is as practical as possible. Thus the interim goal - rather than trying to replace existing databases and associated methods - should be to support them in generating more timely, consistent (temporal and spatial) and accurate products. New and/or better tools are required to produce incremental improvements in these areas. It is not necessary for these tools to provide final solutions or 100% correct results; they simply need to be tools that are useful and that can be easily corrected when things go awry. They should be simple to apply, not require expensive equipment, not substantially alter the existing mapping workflow, nor involve inordinate fine-tuning by the interpreter.
Although there are a number of existing pixel-level disturbance mapping tools available in the literature for identifying forest dynamics ([PERSON] et al., 2004; [PERSON] et al., 2010; [PERSON], 2011; [PERSON] et al., 2010), they have all been developed overseas for non-eucalypt forests and significant effort is required to become familiar with these algorithms and associated proprietary software modules. As a consequence, an alternative approach that develops an integrated workflow process utilising standard (maintained) open-source software and packages was applied to this study. To ease the computational burden and storage requirements it was decided to limit the approach to utilize annual Landsat time series. The overall goal was to determine the capacity of readily available open-source tools to model spectral-temporal pixel time-series from annual Landsat time series to map fire and timber disturbance dynamics within wet sclerophyll forests in South-East Australia.
Specific objectives were to:
1. test how well fire and timber harvesting disturbances can be distinguished with annual Landsat time series and open-source software;
2. characterise the spectral-temporal pixel-time-series of fire and timber harvesting disturbances with respect to severity magnitude and spectral recovery; and
3. map the spatial and temporal pattern of fire and timber harvesting disturbances using open-source software.
## 2 Open-source software
By adopting an open-source approach for spatial data management, processing and analysis, users such as forest management agencies can benefit from freely available software products and access to source code through which new algorithms can be integrated and manipulated. The key open-source software utilised within this study are outlined below.
### Grass
The Geographical Resources Analysis Support System (GRASS) platform (GRASS Development Team, 2012) was chosen due to its popularity within the open-source community and because it fully integrates with the open-source statistical software package, R (R Development Core Team, 2012), along with the python scripting language ([PERSON], 1995). It is an open-source geographical information system (GIS) capable of handling raster, topological vector, image processing and graphic data. Released under the GNU General Public License (GPL), GRASS is developed by a multi-national group of developers and is one of the eight initial software projects of the Open Source Geospatial Foundation. GRASS has a modular structure into which may be plugged new routines programmed in a variety of languages (e.g., Python, C, shell), and there are over 300 modules and more than 100 addon modules for the creation, manipulation and visualisation of both raster and vector data. The GRASS modules are designed under the UNIX philosophy (i.e., that programs work together and handle text streams) and can be combined using scripting to create more complex or specialized modules by a user. GRASS supports an extensive range of raster and vector formats through GDAL/OGR libraries, including OGC-conformal (Open Geospatial Consortium) Simple Features for interoperability with other GIS.
R is an open-source language and software environment commonly used in research fields for statistical computing and graphics. One of the main advantages of R is its object-orientated approach, which allows results of statistical procedures to be stored as objects and used as input in further computations. R is a simple and effective formal complete programming language, and the R environment is, therefore, highly extensible. GRASS and R software can be integrated through the R package, spgrass ([PERSON], 2007), an interface allowing GRASS functions to be implemented within R code and data to be easily exchanged between the two software packages. In addition, R package, raster ([PERSON] and [PERSON], 2012), has functions for creating, reading, manipulating, and writing raster data. The package also implements raster algebra and most functions for raster data manipulations that are common in GIS.
### Python
Python is an object-orientated high-level programming language that is widely used as a scripting language in the spatial analysis environment. Python's popularity has led to the creation of many useful libraries, increasing its flexibility and interoperability, and it has well developed modules for linking with GRASS and R.
## 3 Study Area
### Geographic and biophysical characteristics
Our study area is the Toolangi State Forest and surrounding area. This forest is located approximately 80 km north-east of Melbourne, in the Victorian Central Highlands, South-East Australia. The total area of the Toolangi State Forest is approximately 40,000 hectares, while the total area of the study area is 180,000 hectares. A high proportion of this mountainous area supports wet sclerophyll forests, dominated by _Eucalyptus regnans_ (Mountain Ash). The area was selected to represent a variety of ash-forest types, forest conditions and disturbances. As mentioned previously, these forests are currently at the centre of a debate on whether timber harvesting in the region increases fire risk and severity.
The area experiences a cool temperate climate, with mild summers and cool winters. Average annual rainfall exceeds 1200 mm over most of the area. Soils tend to be free draining, friable, brown gradational, have high water holding capacities, and have developed on a variety of volcanic parent rock materials (Department of Natural Resources and Environment, 1988).
### Natural and anthropogenic disturbances
Wildfire is the major natural disturbance associated with the study area. Several fires have occurred within the study area over the past 150 years, the most extensive being in 1926, 1939 and the recent extreme fire event of 2009 ([PERSON] and [PERSON], 2012).
The study area is also subject to intensive hardwood timber harvesting. Large-scale timber cutting, generally selective harvesting and sawmilling, occurred in these forests in the latter part of the nineteenth and early twentieth centuries. Large-scale salvage operations followed major wildfires, particularly the extensive 1939 fires. Since the 1960s, clearfelling has been the major silvicultural system practised ([PERSON] et al., 1991).
## 4 Data and Methods
A general overview of the methods used in this study is shown in Figure 2. Each of the steps taken in the study are described in detail below.
### Forest population mask
Similar to [PERSON] et al. (2013) we used a forest/non-forest mask to avoid confusion between forest disturbances and other land cover dynamics. The forest/non-forest mask used was that created by [PERSON] et al. (2013).
### Landsat data and pre-processing
We downloaded all level-1 terrain-corrected (L1T) Landsat data acquired between 1 January 1984 to 28 February 2011 with cloud cover \(<90\%\) from the USGS archive for path 92 and row 086. Each image was first screened for cloud and cloud shadow using Fmask ([PERSON] and [PERSON], 2012) and converted to surface reflectance using LEDAPS for the 23 year time period ([PERSON] et al., 2006).
To minimise the effect of phenology and data gaps caused by atmospheric interference, we constructed annual anniversary-date, best observation composites using all cloud free observations within a pre-defined seasonal window, following the method of [PERSON] et al. (2010). For building the best observation composites, we defined the seasonal window as \(\pm\) 60 days around February 15.
Using the outlined selection criteria we had 100% coverage for the annual composite stack. All pre-processing steps were conducted in GRASS using a range of standard and custom modules.
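The compositing rule above is straightforward to express per pixel. As an illustrative sketch (in Python, whereas the study implements this in GRASS), the function below picks the cloud-free observation whose acquisition date is closest to the 15 February target within the ±60-day window; the tuple layout of the observations is an assumption made for this example.

```python
TARGET_DOY = 46          # 15 February as day-of-year, per the seasonal window used here
WINDOW = 60              # +/- 60 day seasonal window around the target date

def best_observation(observations):
    """Pick the cloud-free observation closest to the target date.

    `observations` is a list of (doy, reflectance, is_cloudy) tuples for one
    pixel and one year; returns the chosen reflectance, or None when no
    usable observation falls inside the seasonal window.
    """
    usable = [(abs(doy - TARGET_DOY), value)
              for doy, value, cloudy in observations
              if not cloudy and abs(doy - TARGET_DOY) <= WINDOW]
    if not usable:
        return None
    # min() compares the date offsets first, so the nearest-in-time value wins.
    return min(usable)[1]
```

Pixels with no usable observation would be left as gaps; with the selection criteria above, the study obtained full coverage for the annual composite stack.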
### Landsat vegetation indices
In this study we utilised four indices which are responsive to different vegetation cover/disturbance properties including vegetation greenness, moisture content, canopy structure, and exposed soil signal. We generated Landsat time series stacks using the Normalised Burn Ratio (NBR; Key and Benson, 2005), Tasseled Cap Wetness (TCW; Crist [PERSON], 1984), Tasseled Cap Brightness (TCB; [PERSON] and Cicone, 1984) and Tasseled Cap Angle (TCA; [PERSON] et al., 2010). The creation of the vegetation indices was conducted in GRASS using the _mapcalc_ function.

Figure 1: Study area located in Central Highlands of Victoria, Australia.

Figure 2: Flowchart outlining the main steps implemented in this study.
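For Landsat TM, the NBR is the normalised difference of the near-infrared and shortwave-infrared bands (Key and Benson, 2005). A minimal per-pixel sketch in Python (the study computes its indices with the GRASS _mapcalc_ function rather than in Python):

```python
def nbr(nir, swir):
    """Normalised Burn Ratio from NIR (TM band 4) and SWIR (TM band 7)
    surface reflectance; returns 0.0 when the pixel is entirely dark to
    avoid dividing by zero."""
    denom = nir + swir
    return (nir - swir) / denom if denom else 0.0
```

An equivalent _mapcalc_ expression would be along the lines of `nbr = float(b4 - b7) / (b4 + b7)`, with the same zero-denominator guard applied.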
### Landsat time series analysis
We carried out the time series analysis using a number of standard R packages. The time series analysis was conducted to: 1) extract spectral time series for each pixel; 2) statistically identify and fit structural breakpoints; and 3) extract summary information from trends.
#### 4.4.1 Extraction of time series for each pixel
Once the vegetation indices were calculated, the image stack was loaded into R using the _raster_ package ([PERSON] and [PERSON], 2012) using the _RasterStack_ function. Spectral time series were then extracted as a vector for processing using the _calc_ function within the _raster_ package. Spectral values for each year can be taken from any arbitrary window kernel centred on the pixel of interest; in this study, we chose to use the mean value in a 3 x 3 window as a compromise between spatial detail and robustness to pixel misregistration across images in the stack.
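The 3 x 3 kernel extraction can be sketched as follows (plain Python for illustration; the study uses the _calc_ function of the R _raster_ package). Clipping the window at the raster edges is an assumption made here, as edge handling is not specified in the text.

```python
def window_mean(raster, row, col, size=3):
    """Mean of a size x size kernel centred on (row, col), clipped at the
    raster edges; `raster` is a list of lists of floats (one band, one year)."""
    half = size // 2
    vals = [raster[r][c]
            for r in range(max(0, row - half), min(len(raster), row + half + 1))
            for c in range(max(0, col - half), min(len(raster[0]), col + half + 1))]
    return sum(vals) / len(vals)
```

Calling this once per annual composite yields the per-pixel spectral time series vector passed on to the trend fitting step.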
#### 4.4.2 Statistically identifying and fitting trends
Once a consistent spectral time series is extracted from the image stack, we used the _bfast_ package ([PERSON] et al., 2010) to identify and fit structural breakpoints within a linear regression model. Following previous studies using the _bfast_ package ([PERSON] et al., 2015; [PERSON] et al., 2012), we assigned a value of 0.25n to \(h\). A structural breakpoint is declared when the null hypothesis of structural stability (i.e. stability of the seasonality pattern) is rejected ([PERSON] et al., 2012; [PERSON] et al., 2005). The decision to reject this null hypothesis is based on a boundary condition which is set according to a 5% probability level following the Functional Central Limit Theorem (see [PERSON] et al. (2000) for more information on how this boundary function is computed).
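_bfast_ performs a formal structural-stability test; the toy sketch below illustrates only the underlying idea of segmented fitting, choosing the candidate break that minimises the combined least-squares error of the two segments, with the minimum segment length playing the role of the \(h\) parameter. It is not a substitute for the statistical test used in the study.

```python
def _sse(xs, ys):
    # Sum of squared errors of an ordinary least-squares line through (xs, ys).
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sxx if sxx else 0.0
    return sum((y - (my + slope * (x - mx))) ** 2 for x, y in zip(xs, ys))

def best_breakpoint(years, values, h=0.25):
    """Index of the candidate break minimising the two-segment SSE, with
    each segment at least h * n observations long (mirroring bfast's h)."""
    n = len(years)
    min_seg = max(2, int(h * n))
    candidates = range(min_seg, n - min_seg + 1)
    return min(candidates,
               key=lambda k: _sse(years[:k], values[:k]) + _sse(years[k:], values[k:]))
```

For a series that is flat before a disturbance and flat at a lower level afterwards, the minimiser falls exactly at the level shift, which is the behaviour the fitted trends rely on.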
#### 4.4.3 Extract summary information from trends
Once a trend was fitted to the vegetation indices time series we derived the following set of metrics:
1. For pixels without a breakpoint detected, the slope and intercept of the linear trend of the time series was extracted
2. For pixels with a breakpoint detected, the magnitude of the breakpoint was calculated, the date of the breakpoint, the slope and intercept for the line segments before and after the breakpoint were extracted.
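Given a detected breakpoint, the metrics listed above reduce to simple arithmetic on the two fitted segments. A sketch in Python (the dictionary field names are assumptions made for this example):

```python
def disturbance_metrics(years, values, k):
    """Summary metrics for a series with a break at index k: break year,
    magnitude (level change across the break) and the pre/post slopes."""
    def fit(xs, ys):
        # Ordinary least-squares slope and intercept for one segment.
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        sxx = sum((x - mx) ** 2 for x in xs)
        slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sxx if sxx else 0.0
        return slope, my - slope * mx
    s1, i1 = fit(years[:k], values[:k])
    s2, i2 = fit(years[k:], values[k:])
    t = years[k]
    return {"year": t,
            "magnitude": (s2 * t + i2) - (s1 * t + i1),   # level change at the break
            "slope_before": s1,
            "slope_after": s2}
```

A large negative magnitude corresponds to an abrupt loss of vegetation signal, while the post-break slope captures the spectral recovery used in the classification phase.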
### Forest disturbance mapping
We followed a two-phase classification approach based on [PERSON] et al. (2015) to map spatial and temporal patterns of natural and anthropogenic disturbance. First, we classified the Landsat time series disturbance and recovery metrics into three classes: 1) acute high severity disturbances; 2) low severity disturbances; and 3) undisturbed areas. We refer to this classification phase as disturbance type classification. Second, we assigned all pixels identified within the acute high severity class in the first classification phase a likelihood of being disturbed by either wildfire or clear-fell timber harvesting, and we assigned all pixels identified within the low severity disturbance class a likelihood of being disturbed by planned burning, selective timber harvesting, insects or drought. We refer to this classification phase as disturbance agent attribution.
#### 4.5.1 Phase one: disturbance type classification:
In the first classification phase, we used the Landsat time series disturbance and recovery metrics to classify forest changes into 1) acute high severity disturbances, 2) low severity disturbances, and 3) undisturbed forest. High severity disturbances (such as clear-fell timber harvest and wildfires) behave differently in spectral and temporal space than low severity disturbances (such as selective harvesting and planned burning), which makes them distinguishable with Landsat time series. While some of the low severity disturbances can eventually lead to complete stand mortality, spectral change magnitudes associated with clear-fell timber harvesting and wildfire disturbances are usually significantly higher ([PERSON] et al., 2011). As reference data, we randomly selected and labelled 500 pixels, closely following the approach of [PERSON] et al. (2010).
For identifying and labelling disturbances in the reference pixels, we used Landsat imagery, Landsat spectral time series plots, high resolution imagery (Rapideye or GoogleEarth Imagery), the SFHD and SLHD databases.
The SFHD (1903-2015) contains polygon-level data on fire perimeter, type (wildfire or planned burning), and for a limited number fires, mapping methodology and severity information. The SLHD (1879-2015) collects polygon-level data on extent, silvicultural operation, forest type, start/end dates of logging event and mapping methodology.
[PERSON] and [PERSON] (2015) found that almost 40% of the state fire history database contained missing or incorrect information regarding the date stamps on the fires. However, they did find that recent records (2006-2014) have a higher quality, with only 7% containing missing or incorrect data. To reduce uncertainties in our analysis we only utilised data in the state fire history database that could also be linked with entries in the state bushfire ignitions point database (Department of Environment Land Water and Planning, 2015c), the state planned burning ignitions database (Department of Environment Land Water and Planning, 2015d) or the Country Fire Incident Reporting System (Country Fire Authority, 2015). Some of the variability in quality within the database can be attributed to the wide diversity in base data and mapping methodologies utilised (on-screen digitising using aerial photography, field GPS data capture, ground observations, thermal line scanner mapping, automated image interpretation using Rapideye, Spot and Landsat imagery, transfer from hard copy maps). As the extreme wildfire event of 2009 covered in excess of 60% of the study area, this single disturbance event has been removed from the analysis.
The state logging history database also has a range of documented positional and attributional limitations (Department of Sustainability and Environment, 2009a; GHD, 2012). The data is subject to a certain observer bias, variations in base data, and interpreter/analyst experience. It has not been uncommon to find 5-10% of the records omitted or duplicated. For the last 15-20 years the state logging history database base data has been sourced from GPS ground survey of the logging boundary. The accuracy of the resultant mapped polygon has improved but is still reliant on several factors such as GPS unit specifications, satellite positions, atmospheric conditions and natural barriers to the signal. Prior to the use of GPS, the logging boundaries were estimated from sketch mapping on 1:10000 hard copy maps.
In total, 47 pixels were identified as acute high severity disturbance, 70 pixels were identified as low severity disturbances and 363 were identified as undisturbed. A small proportion (20 pixels) could not clearly be assigned to one of these categories.
Using the reference pixels, we trained a random forest classification model ([PERSON], 2001) provided by the _randomForest_ package ([PERSON] and [PERSON], 2002) within R. The random forest model was validated using the out-of-bag confusion matrix ([PERSON], 2001), from which we estimated overall, user's and producer's accuracies, as well as errors of omission and commission.
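The accuracy figures reported from the out-of-bag confusion matrix follow the standard definitions: overall accuracy is the diagonal total over the grand total, user's accuracy divides each diagonal cell by its row (map) total, and producer's accuracy divides it by its column (reference) total. A sketch of these computations (Python, with the matrix keyed as predicted class to reference-class counts, an assumed layout):

```python
def accuracies(matrix, labels):
    """Overall, user's and producer's accuracy from a confusion matrix
    given as {predicted_class: {reference_class: count}}."""
    total = sum(matrix[p][r] for p in labels for r in labels)
    correct = sum(matrix[c][c] for c in labels)
    users = {p: matrix[p][p] / sum(matrix[p].values()) for p in labels}
    producers = {r: matrix[r][r] / sum(matrix[p][r] for p in labels) for r in labels}
    return correct / total, users, producers
```

Errors of commission and omission are the complements of the user's and producer's accuracies, respectively.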
#### 4.5.2 Phase two: disturbance agent attribution:
Following the disturbance type mapping in phase 1 (Section 4.5.1), we estimated for:
1. Each acute high severity disturbance pixel the probability of being disturbed by wildfire or clear-fell timber harvesting, respectively; and
2. Each low severity disturbance pixel the probability of being disturbed by selective timber harvesting or low severity wildfire/planned burning (for the purposes of this paper low severity wildfire and planned burning were collapsed into a single class).
Creating a continuous probability of class presence can offer greater flexibility from a forest management perspective than discrete classes ([PERSON] et al., 2006). For this purpose we calibrated two additional random forest models with additional reference datasets from the state fire and logging history databases.
#### 4.5.2.1 Attribution of acute high severity disturbance
For the acute high severity disturbance attribution, we selected all pixels covered by either wildfire or clear-fell harvesting polygons from the SFHD and SLHD databases.
#### 4.5.2.2 Attribution of low severity disturbance
For the low severity disturbance attribution, we selected all pixels covered by either planned burning or selective harvesting polygons from the SFHD and SLHD databases.
## 5 Results
### Classification of disturbance types
The disturbance classification yielded an overall accuracy of 72.9% (Table 1), with the highest user's and producer's accuracies in the undisturbed class (92.7% and 77.1%, respectively), lower user's and producer's accuracies for the acute severity class (56.3% and 64.3%, respectively), and the lowest accuracies for the low severity class (25.5% and 53.2%, respectively). Class confusion was highest between low severity disturbance areas and undisturbed areas. In total, 14.6% of the forested area contained acute severity and 9.8% contained low severity disturbances. Most of the forested area in the study area (75.6%) was stable over the study period. The classification map (Figure 4) was used to identify acute severity and low severity areas for the following results.
The confusion matrix is derived from the out-of-bag sample of the random forest model.
Table 1: Confusion matrix for the disturbance type classification (acute severity, low severity, undisturbed).
### Disturbance agent attribution
The binary classification of low severity wildfire/planned burning and selective timber/thinning harvesting disturbances (using a probability threshold of \(p=0.5\)) achieved an overall accuracy of 80% (Table 2), indicating that the attribution of these two agents is much more difficult than that of the acute severity agents (Table 3). The user's accuracy for selective logging was quite low, which means this disturbance agent was overestimated in the final mapped product.
## 6 Discussion
This study demonstrates the feasibility of using an open-source framework for constructing and evaluating a spectral pixel time series model and its implementation to produce an accurate, operational forest disturbance map for a land management agency. The framework established successfully integrates freely available spatial data (pre-processed and collated in GRASS) into the R statistical analysis environment. After construction and validation of a spectral time series segmentation, the resulting model was implemented in GRASS using an R-GRASS interface package, _spgrass_ ([PERSON], 2007), before finally using GRASS to filter the forest prediction map and apply the minimum mapping unit of the adopted forest definition to the final forest extent spatial product.

Table 2: Confusion matrix for predicting disturbance agents in low severity disturbance classes

| Class | Low severity wildfire / planned burning (reference) | Selective logging (reference) | Total | User's accuracy (%) | Error of commission (%) |
|---|---|---|---|---|---|
| Low severity wildfire / planned burning | 5335 | 139 | 5474 | 97.5 | 2.5 |
| Selective logging | 1824 | 2702 | 4526 | 60.0 | 40.0 |
| Total | 7159 | 2841 | 10000 | | |
| Producer's accuracy (%) | 74.5 | 95.1 | | Overall: 80.4 | |
| Error of omission (%) | 25.5 | 4.9 | | | |

Figure 4: Map derived in the disturbance classification phase showing undisturbed areas, acute severity disturbances, and low severity disturbances.

Figure 5: Mapped probability of (a) selective harvesting and (b) low severity wildfire/planned burning.

Figure 6: Mapped probability of (a) clearfell and (b) wildfire.
## 7 Conclusion
In this study we characterised acute high severity and low severity disturbance in South-East Australia, using a well-established Landsat-based time series technique. From our results, we conclude that Landsat can be utilised to reliably distinguish between acute severity disturbance agents (clearfelling and wildfire) in our study region, using specific spectral time-series features. However, more research is needed on distinguishing between the low severity disturbance agents (low severity wildfire/planned burning and selective logging). The resulting maps and estimates offer a combined and detailed picture of disturbance dynamics in our study region through quantifying both the temporal and spatial dynamics. These otherwise unavailable spatially explicit and quality assured maps can help inform science and management needs.
## References
* [PERSON] (1994) [PERSON], 1994. Ecological disturbance and the conservative management of eucalypt forests in Australia. For. Ecol. Manage. 63, 301-346. doi:10.1016/0378-1127(94)90115-5
* [PERSON] and [PERSON] (2013) [PERSON], [PERSON], [PERSON], 2013. Mega-fires, inquiries and politics in the eucalypt forests of Victoria, south-eastern Australia. For. Ecol. Manage. 294, 45-53. doi:10.1016/j.foreco.2012.09.015
* [PERSON] et al. (2014) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], 2014. Timber Harvesting Does Not Increase Fire Risk and Severity in Wet Eucalypt Forests of Southern Australia. Conserv. Lett. 7, 341-354. doi:10.1111/conl.12062
* [PERSON] (2007) [PERSON], 2007. Using the R-Grass interface. OSGeo J. 1, 36-38.
* [PERSON] (2001) [PERSON], 2001. Random Forests. Mach. Learn. 45, 5-32. doi:10.1023/A:1010933404324
* [PERSON] et al. (2010) [PERSON], [PERSON], 2010. Detecting trends in forest disturbance and recovery using yearly Landsat time series: 2. TimeSync -- Tools for calibration and validation. Remote Sens. Environ. 114, 2911-2924. doi:10.1016/j.rse.2010.07.010
* Country Fire Authority (2015) Country Fire Authority, 2015. CFA Incident Responses [WWW Document]. URL https://www.data.vic.gov.au/data/dataset/cfa-incident-responses (accessed 11.28.15).
* [PERSON] (1984) [PERSON], [PERSON], 1984. A physically-based transformation of Thematic Mapper data.--The TM Tasseled Cap. Geosci: Remote Sensing, IEEE Trans. 256-263.
* Department of Environment Land Water and Planning (2015a) Department of Environment Land Water and Planning, 2015a. Fire history overlay of most recent fires [WWW Document]. URL https://www.data.vic.gov.au/catalogue/metadata?anzlclId-ANZV10803004774&publicId-guest&extractionProviderId=1 (accessed 11.28.15).
* Department of Environment Land Water and Planning (2015b) Department of Environment Land Water and Planning, 2015b. Logging history overlay of most recent harvesting activities [WWW Document].
* Department of Environment Land Water and Planning (2015c) Department of Environment Land Water and Planning, 2015c.
* Department of Environment Land Water and Planning (2015d) Department of Environment Land Water and Planning, 2015d.
* Department of Natural Resources and Environment (1988) Department of Natural Resources and Environment, 1988. Forest management plan for the Central Highlands. Melbourne, Victoria.
* State Forests 2008-09. Melbourne, Victoria.
* Department of Sustainability and Environment (2009b) Department of Sustainability and Environment, 2009b. Victorian Bushfires Severity Map 2009 (Polygons) [WWW Document]. URL http://services.land.vic.gov.au/catalogue/metadata?anzlclId-ANZV10803003677&publicId=guest&extractionProviderId=1 (accessed 11.28.15).
* Department of Sustainability and Environment (2007) Department of Sustainability and Environment, 2007. Fire severity classification Landsat data for the 2006-07 fire [WWW Document]. URL https://www.data.vic.gov.au/data/dataset/victoria-bushfires-severity-map-2007-polygons (accessed 11.28.15).
* Department of Sustainability and Environment (2003) Department of Sustainability and Environment, 2003. Fire Severity Classes (Landsat) for Alpine fires January/February 2003 [WWW Document].
Victoria, Australia, using Landsat TM and SPOT 4/5 Satellites., in: IUFRO Division 4.01 Conference Meeting Multiple Demands for Forest Information: New Technologies in Forest Data Gathering, 17 - 20 August 2009. Mt Gambier, South Australia.
* [PERSON] (2011) [PERSON], [PERSON], 2011. Semi-automating the Stand Delineation Process in Mapping Natural Eucalypt Forests. Aust. For. 74, 13.
* [PERSON] (2012) [PERSON], [PERSON], 2012. raster: Geographic Data Analysis and Modeling [WWW Document]. R Package, version 2.3-33. URL URL [[http://CRAN.R-project.org/package=raster](http://CRAN.R-project.org/package=raster)]([http://CRAN.R-project.org/package=raster](http://CRAN.R-project.org/package=raster)). (accessed 11.14.15).
* [PERSON] et al. (2004) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], 2004. VegMachine-Delivering monitoring information to northern Australia's pastoral industry, in: Proceedings 12 th Australasian Remote Sensing and Photogrammetry Conference, Frematule Western Australia, October 2004.
* [PERSON] et al. (2007) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], 2007. Trajectory-based change detection for automated characterization of forest disturbance dynamics. Remote Sens. Environ. 110, 370-386. doi:10.1016/j.rse.2007.03.010
* [PERSON] et al. (2010) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], 2010. Detecting trends in forest disturbance and recovery using yearly Landsat time series: 1. LandTrendr -- Temporal segmentation algorithms. Remote Sens. Environ. 114, 2897-2910. doi:10.1016/j.rse.2010.07.008
* [PERSON] and [PERSON] (2005) [PERSON], Benson, N.C., 2005. Landscape assessment: remote sensing of severity, the normalized burn ratio and ground measure of severity, the composite burn index. FIREMON Fire Eff. Monit. Invent. Syst. Ogden, Utah USDA For. Serv. Rocky Mt. Res. Sun.
* [PERSON] et al. (2013) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], 2013. Forest cover trends from time series Landsat data for the Australian continent. Int. J. Appl. Earth Obs. Geoinf. 21, 453-462. doi:10.1016/j.jag.2012.06.005
* [PERSON] and [PERSON] (2002) [PERSON], [PERSON], 2002. Classification and Regression by randomForest. R News 2, 18-22.
* [PERSON] (2010) [PERSON], 2010. Forest logging creates fire traps. Aust. Sci. 31, 38.
* [PERSON] et al. (2011) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], 2011. Newly discovered landscape traps produce regime shifts in wet forests. Proc. Natl. Acad. Sci. U. S. A. 108, 15887-91. doi:10.1073/pnas.1110245108
* [PERSON] et al. (2009) [PERSON], [PERSON], [PERSON], [PERSON], 2009. Effects of logging on fire regimes in moist forests. Conserv. Lett. 2, 271-277. doi:10.1111/j.1755-263X.2009.00080.x
* [PERSON] et al. (2006) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], 2006. A Landsat surface reflectance dataset for North America, 1990-2000. Trans. Geosci. Remote Sens. Lett. 3, 68-72.
* [PERSON] et al. (2013) [PERSON], [PERSON] [PERSON], [PERSON], [PERSON], 2013. The Performance of Random Forests in an Operational Setting for Large Area Sclerophyll Forest Classification. Remote Sens. 5, 2838-2856. doi:10.3390/rs5062838
* [PERSON] et al. (1994) [PERSON], [PERSON], [PERSON], 1994. An economic analysis of the use of satellite imagery in mapping tree cover across Victoria, in: 38 th Annual Conference of the Australian Agricultural Economics Society Wellington, New Zealand, 8-10 February 1994. Australian Agricultural and Resource Economics Society.
* [PERSON] and [PERSON] (2015) [PERSON], [PERSON] [PERSON], 2015. Bushfire Spatial Data Models and Ignition Data Project, in: AFAC: New Directions in Emergency Management: 1-3 September 2015. Adelaide.
* [PERSON] et al. (2010) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], 2010. Quantification of live aboveground forest biomass dynamics with Landsat time-series and field inventory data: A comparison of empirical modeling approaches. Remote Sens. Environ. 114, 1053-1068. doi:10.1016/j.rse.2009.12.018
* [PERSON] and Bradstock (2012) [PERSON], [PERSON], [PERSON], R.A., 2012. The efficacy of fuel treatment in mitigating property loss during wildfires: Insights from analysis of the severity of the catastrophic fires in 2009 in Victoria, Australia. J. Environ. Manage. 113, 146-57. doi:10.1016/j.jenvman.2012.08.041
* R Development Core Team (2012) R Development Core Team, 2012. R: A Language and Environment for Statistical Computing; R Foundation for Statistical Computing: Vienna, Austria [WWW Document].
* [PERSON] et al. (2011) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], 2011. Mapping wildfire and clearcut harvest disturbances in boreal forests with Landsat time series data. Remote Sens. Environ. 115, 1421-1433. doi:10.1016/j.rse.2011.01.022
* [PERSON] et al. (2015) [PERSON], [PERSON], [PERSON] [PERSON], [PERSON], 2015. Characterizing spectral-temporal patterns of defolator and bark beetle disturbances using Landsat time series. Remote Sens. Environ. 170, 166-177. doi:10.1016/j.rse.2015.09.019
* [PERSON] et al. (1991) [PERSON], [PERSON], [PERSON], [PERSON], 1991. The mountain ash forests of Victoria: ecology, silviculture and management for wood production, in: [PERSON], [PERSON], [PERSON]. (Eds.), Forest Management in Australia. Surrey Beatty and Sons, Chipping Norton, pp. 38-57.
* [PERSON] (2011) [PERSON], 2011. TimeStats: A Software Tool for the Retrieval of Temporal Patterns From Global Satellite Archives. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 4, 310-317. doi:10.1109/JSTARS.2010.2051942
* [PERSON] (1995) [PERSON], 1995. Python tutorial, Technical Report CS-R9526. Amsterdam.
* [PERSON] et al. (2010) [PERSON], [PERSON], [PERSON], [PERSON], 2010. Detecting trend and seasonal changes in satellite image time series. Remote Sens. Environ. 114, 106-115. doi:10.1016/j.rse.2009.08.014
* [PERSON] et al. (2014) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], 2014. Pixel-Based Image Compositing for Large-Area Dense Time Series Applicationsand Science. Can. J. Remote Sens. 40, 192-212. doi:10.1080/07038992.2014.945827
* [PERSON] (1988) [PERSON], [PERSON], 1988. Forest Cover Changes in Victoria, 1869-1987. Melbourne.
* [PERSON] and [PERSON] (2014) [PERSON], [PERSON], [PERSON], N.C., 2014. Satellites: Make Earth observations open access. Nature 513, 30-1. doi:10.1038/513030a
* [PERSON] et al. (2012) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], 2012. Opening the archive: How free data has enabled the science and monitoring promise of Landsat. Remote Sens. Environ. 122, 2-10. doi:10.1016/j.rse.2012.01.010
* [PERSON] et al. (2006) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], N.C., 2006. Estimating the probability of mountain pine beetle red-attack damage. Remote Sens. Environ. 101, 150-166. doi:10.1016/j.rse.2005.12.010
* [PERSON] and [PERSON] (2012) [PERSON], [PERSON], [PERSON], [PERSON], 2012. Object-based cloud and cloud shadow detection in Landsat imagery. Remote Sens. Environ. 118, 83-94. doi:10.1016/j.rse.2011.10.028
MAPPING DISTURBANCE DYNAMICS IN WET SCLEROPHYLL FORESTS USING TIME SERIES LANDSAT
A. Haywood, J. Verbesselt, P. J. Baker
isprs, 2016, CC-BY. https://doi.org/10.5194/isprs-archives-xli-b8-633-2016
Significance of Remote Sensing based Precipitation and Terrain Information for Improved Hydrological and Hydrodynamic Simulation in Parts of Himalayan River Basins
[PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON]
1 Indian Institute of Remote Sensing, ISRO, 248001, 4-Kalidas Road, Dehradun, Uttarakhand, India - (praveen, pdh, spa, bhaskarnikam, vaibhav, arpit, prakash)@jur.gov.hk
1997; [PERSON] et al., 2014; [PERSON] et al., 2014a; [PERSON] et al., 2015; [PERSON], 2017; [PERSON] and [PERSON], 2019; [PERSON] et al., 2019) after occurrence of each major flood event (figures 14, 19 and 20 in appendix-1).
### Objectives
Therefore, the present work has been undertaken as part of an ISRO-sponsored project on a flood early warning system for the North West Himalaya (NWH). The main objectives of the present work are to evaluate the Weather Research and Forecasting (WRF) model against India Meteorological Department (IMD) and satellite-based Tropical Rainfall Measuring Mission (TRMM) and Global Precipitation Measurement (GPM) gridded rainfall datasets, and to quantify their impact on hydrological models. Another objective is to evaluate various DEMs for watershed hydrology, flood-prone area mapping, and river hydraulics, especially before and after the major flood event of 2013. Lastly, glacier lakes are mapped for a few river basins of the Himalaya, and GLOF modeling is done for a few GLOF-susceptible lakes of the Eastern Himalaya.
## 2 Material and Methods
### Study area
The study area for this work comprises Himalayan river basins such as the Upper Ganga, Beas, Sutlej, Teesta, and Koshi (in Nepal). The NWH region and its selected river basins (figure 1) are used for extensive rainfall analysis, hydrological/hydrodynamic (HD) modeling, and DEM-based erosion/deposition study. The Koshi and Teesta river basins (figure 2) are mainly used for glacier lake related studies.
### Material and data used
The main data types used in this work are ground observations from hydro-meteorological stations, remote sensing based data on rainfall, terrain, and multi-temporal multi-spectral images, and weather forecast and climate data. Rainfall data come from rain gauges and automatic weather stations of IMD and IIRS (ISRO). Discharge data for the major rivers are provided by the Bhakra Beas Management Board (BBMB) for the Beas and Sutlej river basins and by the Central Water Commission (CWC) for the Ganga and Teesta river basins. Satellite-based rainfall data are primarily taken from the Hydro Estimator Model (HEM), Tropical Rainfall Measuring Mission (TRMM), and Global Precipitation Measurement (GPM) gridded rainfall products. Elevation data from CartoDEM v3 from NRSC, the Shuttle Radar Topography Mission (SRTM) 30 m DEM, the ASTER global 30 m DEM, the ALOS 30 m DSM, TanDEM-X 90 m, ALOS-PALSAR 12.5 m, and MERIT 90 m DEMs are used. High Mountain Asia (HMA) 8-meter Digital Elevation Models (DEMs) from NASA's National Snow and Ice Data Center Distributed Active Archive Center (NSIDC DAAC) were utilized for the pre- (20 March 2012) and post- (17 March 2015) June 2013 flood periods to quantify the flood simulation, river morphological, and topographical changes for the Alaknanda river downstream of Badrinath. Mapping and monitoring of glacier lakes is done using a time series of Landsat data from the United States Geological Survey (USGS) and Cartosat-1 data from the National Remote Sensing Centre (NRSC).
### Methodology
The methods used in this work are divided into three parts. First, the methods for rainfall data comparison and hydrological modeling are briefly given. Next, the terrain data analysis and hydrodynamic modeling methods are provided. Finally, the methods for glacier lake mapping, monitoring, and modeling are given.
#### 2.3.1 Methods for satellite based rainfall comparison and hydrological modeling
A comparative study is done to validate the forecasted data against different meteorological datasets. For this comparison, several verification parameters are calculated using the contingency table (table 1):
|  | Event observed: Yes | Event observed: No |
| --- | --- | --- |
| Event forecast: Yes | a (hits) | b (false alarms) |
| Event forecast: No | c (misses) | d (correct negatives) |

Table 1: Contingency table for rainfall statistical computation
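Typical verification scores derived from a 2x2 contingency table such as Table 1 include the probability of detection, false alarm ratio, critical success index, and accuracy; the exact set used in this study is not listed in the text, so this selection is an assumption. A minimal sketch:

```python
def skill_scores(a, b, c, d):
    """Forecast verification scores from a 2x2 contingency table.

    a = hits, b = false alarms, c = misses, d = correct negatives
    (the cell labels of Table 1).
    """
    pod = a / (a + c)                 # probability of detection (hit rate)
    far = b / (a + b)                 # false alarm ratio
    csi = a / (a + b + c)             # critical success index (threat score)
    acc = (a + d) / (a + b + c + d)   # overall accuracy
    return {"POD": pod, "FAR": far, "CSI": csi, "ACC": acc}

# hypothetical counts for one rain/no-rain threshold over a season
scores = skill_scores(80, 20, 20, 80)
```

A perfect forecast gives POD = 1, FAR = 0, CSI = 1; each product (GPM, TRMM, HEM, WRF) can be scored against the IMD grid cell-by-cell this way.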
Figure 1: Overall study area in the NWH with major river basins, with the Beas and Sutlej basins (having good ground data) highlighted.
Figure 2: Koshi river basin of East Nepal for the glacier lakes mapping and monitoring study, and Teesta river basin of Sikkim for the glacier lakes mapping, monitoring & modeling study.
watershed based Hydrological Modeling System (HMS) hydrological model (HEC-HMS, 2010) or at 2.5 km grid scale using the fully distributed variable infiltration capacity (VIC) model ([PERSON] et al., 1997). Basic maps such as land use land cover (LULC), soil, and DEM are also used in both hydrological models for creating the necessary input layers ([PERSON], 2018). Watershed and terrain processing is done using the HEC-GeoHMS tool in the ArcGIS environment for watershed delineation, stream network pattern mapping, and catchment characteristics estimation. Calibration and validation of both models is done for the Beas, Sutlej, Jhelum, and Upper Ganga basins.
#### 2.3.2 Methods for satellite based terrain data and hydrodynamic modeling
The elevation accuracy of the various DEMs is assessed using GNSS-based ground control points in parts of the Upper Ganga basin. The DEM-delineated watersheds and rivers are compared using Sentinel-2, Landsat-8, and Google Earth images. For the river morphology study, the post-event and pre-event DEMs were subtracted using the "Raster Calculator" in a GIS environment to quantify erosion/deposition before and after the 2013 flood event of Uttarakhand. The subtraction yields positive and negative values for each pixel (figure 3).
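The DEM-differencing step can be sketched with plain NumPy in place of the GIS Raster Calculator; the 0.5 m change threshold below is an illustrative assumption (to suppress DEM noise), not a value from the study:

```python
import numpy as np

def erosion_deposition(pre_dem, post_dem, threshold=0.5):
    """Difference two co-registered DEM arrays (post - pre).

    Positive dz -> deposition, negative dz -> erosion; |dz| below
    `threshold` (metres) is treated as no change. The threshold is
    illustrative only.
    """
    dz = post_dem.astype(float) - pre_dem.astype(float)
    classes = np.zeros(dz.shape, dtype=int)   # 0 = no change
    classes[dz >= threshold] = 1              # 1 = deposition
    classes[dz <= -threshold] = -1            # -1 = erosion
    return dz, classes

# toy 2x2 elevation tiles (metres)
pre = np.array([[100.0, 101.0], [102.0, 103.0]])
post = np.array([[99.0, 101.2], [104.0, 103.0]])
dz, cls = erosion_deposition(pre, post)
```

Summing the positive and negative dz cells (times pixel area) gives the deposition and erosion volumes reported per catchment.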
Stream parameters and river cross sections are derived from the ALOS 12.5 m and CartoDEMs in the HEC-GeoRAS tool of ArcGIS and the Mike Hydro tool of the Mike-11 HD model. The generated river network and cross-sections are used in the Mike-11 HD model, along with boundary conditions and HD parameter settings, to simulate river flows for heavy-rainfall flash floods (flood hydrographs from the hydrological model serve as the input boundary condition) and GLOF (glacier lake breach scenario) events ([PERSON] et al., 2016, 2017). Elevation data from the DEMs is also used in the Height Above Nearest Drainage (HAND) tool ([PERSON] et al., 2008) to identify flood-prone areas of the NWH region.
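The HAND descriptor can be sketched on a toy grid; the version below is a minimal D8 steepest-descent implementation with an assumed, pre-defined drainage mask, not the ArcGIS HAND tool used in the study (which derives the drainage network from flow accumulation first):

```python
import numpy as np

def hand(dem, drainage):
    """Height Above Nearest Drainage via D8 steepest descent.

    dem: 2-D elevation array; drainage: boolean array marking channel
    cells. Each cell's flow path is traced downslope to a drainage
    cell; HAND is the elevation drop along that path. Cells whose path
    ends in a pit or loop get NaN. Toy version of the HAND idea.
    """
    rows, cols = dem.shape
    out = np.full(dem.shape, np.nan)
    for r in range(rows):
        for c in range(cols):
            i, j, seen = r, c, set()
            while not drainage[i, j]:
                seen.add((i, j))
                # candidate D8 neighbours inside the grid
                nbrs = [(i + di, j + dj)
                        for di in (-1, 0, 1) for dj in (-1, 0, 1)
                        if (di or dj)
                        and 0 <= i + di < rows and 0 <= j + dj < cols]
                ni, nj = min(nbrs, key=lambda p: dem[p])
                if dem[ni, nj] >= dem[i, j] or (ni, nj) in seen:
                    break                  # pit or loop: HAND undefined
                i, j = ni, nj
            else:
                out[r, c] = dem[r, c] - dem[i, j]
    return out

dem = np.array([[3.0, 3.0, 3.0],
                [2.0, 2.0, 2.0],
                [1.0, 1.0, 1.0]])
drain = np.zeros(dem.shape, dtype=bool)
drain[2, :] = True                         # bottom row is the channel
h = hand(dem, drain)
```

Thresholding the HAND surface (e.g. cells within a few metres of the drainage) then flags the topographically flood-prone areas.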
#### 2.3.3 GLOF mapping, monitoring and modeling
Glacier lakes are mapped using the Normalized Difference Water Index (NDWI), which delineates open water features and enhances their presence in remotely sensed digital images such as Landsat, Resourcesat, and Sentinel-2. The NDWI uses reflected near-infrared radiation and visible green light to enhance the presence of water bodies ([PERSON], 1996). The NDWI map time series used to map glacier lakes is generated using Google Earth Engine ([PERSON] et al., 2017).
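The per-pixel NDWI computation can be sketched in NumPy; the toy band values and the zero threshold below are illustrative assumptions (real studies tune the water threshold per scene):

```python
import numpy as np

def ndwi(green, nir, eps=1e-9):
    """McFeeters (1996) NDWI = (Green - NIR) / (Green + NIR).

    Open water reflects green light and absorbs NIR, so water pixels
    tend toward +1 while vegetation and soil tend toward negative
    values. eps guards against division by zero.
    """
    green = green.astype(float)
    nir = nir.astype(float)
    return (green - nir) / (green + nir + eps)

# toy reflectances: first pixel water-like, second vegetation-like
green = np.array([0.30, 0.10])
nir = np.array([0.05, 0.40])
index = ndwi(green, nir)
water_mask = index > 0.0   # simple illustrative threshold
```

Applying the same formula to each image in a Landsat collection (as Earth Engine does server-side) yields the lake-extent time series.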
The susceptible glacier lakes are identified using multi-criteria methods ([PERSON] et al., 2017). The modeling of critical GLOF-susceptible lakes is accomplished using a one-dimensional (1-D) HD model ([PERSON] et al., 2016).
## 3 Results and discussion
The results and discussion are arranged in three sub-sections. In the first sub-section, an inter-comparison of forecasted, IMD, and satellite-based gridded rainfall products is given, along with its use and impact on hydrological simulations in selected NWH river basins. In the second part, results of DEM accuracy assessment, terrain and watershed parameters, erosion/deposition from flood events, and the impact on river cross-sections and profiles are presented. In the final part, glacier lake mapping and modeling results are presented.
### Results of satellite based rainfall and its impact on hydrological modeling
The satellite-based precipitation products, Tropical Rainfall Measuring Mission (TRMM) and Global Precipitation Measurement (GPM), are first compared with gridded rainfall data from IMD (figure 6), weather forecast models, and limited ground-based automatic rain gauge datasets, and are later used for generating flood hydrographs with hydrological models.
Figure 4: Glacier lake mapping using NDWI method and GEE.
Figure 3: DEM differencing method for river erosion/deposition study using pre-post flood event DEM data.
Figure 6: RMSE maps for the 2017 Monsoon for GPM, HEM and TRMM rainfall data as compared with IMD datasets.
The satellite-based rainfall products have mostly shown under-prediction in the study area, though a few places also show over-estimation of rainfall. The GPM and TRMM errors are comparable, showing large rainfall differences in the lower and middle elevation ranges of the NWH, whereas the HEM data show more mismatch in the Tibet region of the Sutlej and Indus basins (figure 6). Overall, at basin scale, comparative maps of the different parameters show that WRF data is well suited for rainfall forecasts, provided proper physics and elevation options are chosen (table 2 and figure 7; [PERSON] et al., 2018; [PERSON] and [PERSON], 2020).
When the rainfall data from satellites and the WRF model are used for hydrological simulation in the Upper Ganga basin, part of the Alaknanda basin showed over-estimation of rainfall, resulting in high runoff generation; for the Beas basin, the difference between satellite- and ground-based rainfall was smaller, which resulted in high accuracy of the simulated flood hydrographs. Hydrological modeling results were best for the Beas basin, followed by the Upper Ganga basin, and matched least for the Sutlej basin. The HMS model (with rainfall from TRMM and the ground-based AWS network) was used for simulating the 3-hourly flood events of August 2014 and 2015 in the Beas basin and gave high accuracy, with R\({}^{2}\) of 0.93 and 0.89, respectively.
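The hydrograph goodness-of-fit can be reproduced in form (not in data) with standard metrics; R\({}^{2}\) is read here as the squared Pearson correlation, with Nash-Sutcliffe efficiency as a common alternative. The paper does not state which definition it uses, and the discharge series below is synthetic:

```python
import numpy as np

def r_squared(observed, simulated):
    """Squared Pearson correlation between observed and simulated flows."""
    o = np.asarray(observed, float)
    s = np.asarray(simulated, float)
    r = np.corrcoef(o, s)[0, 1]
    return r ** 2

def nse(observed, simulated):
    """Nash-Sutcliffe efficiency: 1 - SSE / variance of observations."""
    o = np.asarray(observed, float)
    s = np.asarray(simulated, float)
    return 1.0 - np.sum((o - s) ** 2) / np.sum((o - np.mean(o)) ** 2)

# synthetic 3-hourly discharge values (m^3/s), not from the study
obs = [100.0, 250.0, 600.0, 400.0, 150.0]
sim = [110.0, 240.0, 580.0, 420.0, 160.0]
```

Both metrics approach 1 for a good fit; NSE additionally penalizes systematic bias, which pure correlation does not.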
### Results of satellite based terrain data and its impact on hydrodynamic modeling
Limited ground truthing using GNSS measurements showed that the ALOS-PALSAR 12.5 m DEM (mean bias of 0.285 m), followed by CartoDEM version 3.1 (mean bias of 1.65 m) and ALOS 30 m (mean bias of 2.086 m), are the most accurate compared to the other open DEMs such as the MERIT, SRTM, and ASTER DEMs. Watershed delineation with the ASTER 30 m (2011), TanDEM-X 90 m (2016), and SRTM 90 m (2000) DEMs for the Koshi basin yielded slightly different watershed areas (57680.8 km\({}^{2}\), 57639.90 km\({}^{2}\), and 57658.1 km\({}^{2}\), respectively). There was no change in the number of sub-watersheds and streams across the three DEMs, but river length varied in each DEM, mainly due to shifting and erosion/deposition of the river course and bed material between the different DEMs.
Similarly, major erosion and deposition was found in the catchments of the Bhagirathi, Alaknanda, Gori Ganga, and Yamuna rivers in Uttarakhand state and in the Beas and Sutlej river catchments in Himachal Pradesh using the HMA and other terrain datasets. The river cross-section and longitudinal profile data showed that river cross sections before and after the 2013 floods changed drastically in many river stretches of the Upper Ganga and parts of the Sutlej river basins. Similarly, the flood events of 2014, 2015, 2016, and 2018 have altered the cross sections and longitudinal profiles of the Beas River and its tributaries. The comparison of the HMA 8 m DEM (post-flood, 2015) and the ASTER 30 m DEM (pre-flood, 2011) over the river bed is shown in figure 9, showing areas of erosion and deposition. The sediment deposition area is much larger than the eroded area in this part of the river stretch.
The changed river bed profiles and cross sections have a significant impact on the HD model results in terms of changes in water level and discharge for a given peak flow. In some areas, the water-carrying capacity of the river channel has increased due to erosion, while in other places it has decreased significantly due to sediment deposition.
As the ALOS-PALSAR 12.5 m DEM was found to be the most accurate in the NWH region, this DEM was used for generating topography-based flood-prone areas using the HAND tool in ArcGIS. Initial results are shown in figure 10 for the NWH region and the Beas basin up to Pandoh dam.
### GLOF mapping, monitoring and modeling
The spatio-temporal variation and evolution of glacier lakes in the Upper Chenab, Upper Ganga, Upper Teesta, and Koshi river basins was analyzed using a time series of RS data from Landsat, Sentinel-1, and Google Earth images. The Koshi basin of Nepal, covering Everest and the surrounding region, has shown the largest increase in glacier lakes in the last 40 years, followed by the glacier lakes of the Upper Teesta, Chenab, and Ganga river basins. Some of these lakes are highly susceptible to glacier lake outburst floods (GLOF). GLOF modeling for various lake breach scenarios for some of the highly vulnerable and large glacier lakes was done using 1-D HD models, with inputs from the ALOS 12.5 m and CartoDEM v3.1 DEMs.
The 1-D HD model simulations (using the SRTM 30 m DEM and a scenario of a 20 m glacier lake breach) for the upper Thangu cascade lakes of North Sikkim showed a peak flow of 8548 m\({}^{3}\)/s at the lake site within 39 min, followed by peak flows of 7541 and 6147 m\({}^{3}\)/s downstream at the army base camp and Thangu village within 3 min and 12 min, respectively ([PERSON] et al., 2017). Similarly, GLOF simulations for the Chubda glacier lake of Bhutan and the Gapang glacier lake of Himachal Pradesh, India, were completed using the Mike-11 1-D HD model for various lake breach scenarios. Figure 13 shows the Chubda glacier lake, the river cross-sections, and the simulated flood inundation at a vulnerable road, human settlement, and airport site situated downstream of the lake on the left river bank (Jakar town). In both these cases, the SRTM 30 m DEM was used to create the river cross-sections. The simulated peak flow at the lake site varied from 17,329 cumecs to 34,658 cumecs for lake breach depths of 10 to 20 m (lake area of 1.38 km\({}^{2}\) in 2016 and volume of 5.54 Mm\({}^{3}\)), with a flood duration of 40 min at the lake site. The peak flow reduces further, to about 10,500 cumecs, if the flood duration at the lake site is increased to 2-3 hrs.
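The inverse relation between flood duration and peak flow can be illustrated with a simple mass-balance sketch: assuming, purely for illustration, that the breach releases the lake volume as a triangular hydrograph, the peak equals 2V/T. The absolute magnitudes differ from the Mike-11 breach simulations, which model breach geometry, ambient flow, and routing; the sketch only shows why stretching the release over 2-3 hours lowers the peak.

```python
def triangular_peak(volume_m3, duration_s):
    """Peak of a triangular hydrograph releasing volume_m3 in duration_s.

    Area under a triangle = 0.5 * base * height, so Qp = 2V / T.
    Illustrative only; a dam-break HD model accounts for breach
    geometry and flood routing, which this ignores.
    """
    return 2.0 * volume_m3 / duration_s

V = 5.54e6                                # lake volume from the text, m^3
qp_fast = triangular_peak(V, 40 * 60)     # 40-minute release
qp_slow = triangular_peak(V, 2.5 * 3600)  # 2.5-hour release
```

Even this crude balance reproduces the qualitative result that a slower breach sharply reduces the peak discharge at the lake site.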
## 4 Conclusions and Future Scope
The present work has highlighted the importance of satellite-derived precipitation and terrain information for mountainous catchments of the Himalaya. The GPM, HEM, and TRMM data can be used along with forecasted WRF data in the NWH region with a certain degree of confidence. The spatial pattern of satellite and forecasted rainfall data matched better than single-point AWS-based rainfall values. Further improvements specific to the Himalayan region are needed in both satellite and numerical weather prediction models to improve their utility in flood simulation and hydrological forecasting. Among the hydrological models, the event-based HMS model has given better results for simulating hourly flood peaks than the daily grid-based VIC model.
The terrain data given by the various global and regional DEMs is of good accuracy for operational watershed terrain parameter generation, hydrological simulations, and 1-D HD simulations. However, the same data may not be sufficient for temporal analysis of pre- and post-flood river morphological change. In that case, high resolution (HR) DEMs such as the HMA 8 m, ALOS 12.5 m, and Carto 10 m DEMs show better potential. In the future, drone-, ICESat-2-, and other HR optical and radar/LiDAR-based DEMs should be generated for critical rivers and glacier lakes of the Himalaya.
In the case of glacier lakes, regular monitoring of all susceptible lakes is most important, as many of these lakes have breached in the past ([PERSON] et al., 2018) and have the potential to breach again due to the dynamic hydro-glacio-climatic conditions and extreme weather events of the Himalaya. At the same time, satellite- or ground-based estimates of glacier lake depth or volume and high resolution terrain data are most critical for obtaining improved GLOF scenarios. In the future, the use of more open source 1-D and 2-D HD models can be explored in such high-relief mountain regions.
## Acknowledgements
The authors acknowledge the kind support of the Chairman, Indian Space Research Organization (ISRO), in completing this research work. This work was done as part of the ISRO-funded Disaster Management Support (DMS) Research and Development (R&D) project "Remote sensing, ground observations and integrated modelling based early warning system for climatic extremes of North-West Himalayan region". The satellite-based HMA DEM product was downloaded from the National Snow and Ice Data Center (NSIDC), the CartoDEM product was provided by NRSC, Landsat and Sentinel-1 data come from the USGS Earth Explorer, and the global rainfall products were taken from NASA and JAXA. Ground data on river flow was provided by BBMB, and gridded meteorological data were provided by IMD. AWS rainfall data were taken from ISRO's hydro-meteorological stations. The WRF model is provided by NCAR.
## Appendix - I
The appendix gives information about the locations of the IIRS-AWS sites in the NWH, the basic model configurations, outputs of model simulations, and a few field photographs.
## References
* [PERSON] et al. (2012) [PERSON] [PERSON], [PERSON] [PERSON], [PERSON], [PERSON], [PERSON] [PERSON], [PERSON], & [PERSON], 2012. Analyzing the operational performance of the hydrological models in an alpine flood forecasting system. _Journal of Hydrology_, 412-413, 90-100. doi:10.1016/j.jhydrol.2011.07.047.
* [PERSON] et al. (2017) [PERSON], [PERSON], [PERSON] and [PERSON], 2017. Inventory and recently increasing GLOF susceptibility of glacial lakes in Sikkim, Eastern Himalaya. _Geomorphology_, 295: 39-54.
* [PERSON] et al. (2020) [PERSON], [PERSON] and [PERSON], 2020. Rainfall over the Himalayan foot-hill region: Present and future. _J. Earth Syst. Sci._ 129(11). 1-16.
* [PERSON] et al. (2016) [PERSON], [PERSON], [PERSON] and [PERSON], 2016. Spatio-temporal characteristics of extreme rainfall events over the Northwest Himalaya using satellite data. _Int J Climatol._ 36, 3949-3962.
* [PERSON] and [PERSON] (2005) [PERSON] and [PERSON], 2005. Characteristics of Monsoon Rainfall around the Himalayas Revealed by TRMM Precipitation Radar. _Monthly weather review_, 133, 149-165.
* Bookhagen and Burbank (2006) [PERSON] and Burbank, D.W., 2006. Topography, relief, and TRMM-derived rainfall variations along the Himalaya. _Geophys. Res. Lett._, 33, L08405.
* Bookhagen and Burbank (2010) [PERSON] and Burbank, D.W., 2010. Toward a complete Himalayan hydrological budget: Spatiotemporal distribution of snowmelt and rainfall and their impact on river discharge. _J. Geophys. Res. Earth Surf._, 115, F03019.
* [PERSON] et al. (2012) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON] and [PERSON], 2012. The state and fate of Himalayan Glaciers. _Science_, 336, 310-314. doi:10.1126/science.1215828.
* [PERSON] et al. (2020) [PERSON], [PERSON], [PERSON], [PERSON] and [PERSON] (Eds.), 2020. _Himalayan Weather and Climate and their Impact on the Environment_. Springer Nature Switzerland AG, 1st ed., XIV, 577 pp. [[https://doi.org/10.1007/978-3-030-29684-1_](https://doi.org/10.1007/978-3-030-29684-1_)]([https://doi.org/10.1007/978-3-030-29684-1_](https://doi.org/10.1007/978-3-030-29684-1_)). ISBN 978-3-030-29683-4.
* [PERSON] et al. (2015) [PERSON], [PERSON], [PERSON] and [PERSON], 2015. Prediction of flash flood hazard impact from Himalayan river profiles. _Geophys. Res. Lett._, 42, 5888-5894.
* [PERSON] et al. (2018) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], and [PERSON], 2018. Experimental Flood Early Warning System in Parts of Beas Basin Using Integration of Weather Forecasting, Hydrological And Hydrodynamic Models. _Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci._, XLI-1, 221-225, [[https://doi.org/10.5194/ispsr-archives-XLI-5-221-2018](https://doi.org/10.5194/ispsr-archives-XLI-5-221-2018)]([https://doi.org/10.5194/ispsr-archives-XLI-5-221-2018](https://doi.org/10.5194/ispsr-archives-XLI-5-221-2018)).
* [PERSON] et al. (2019) [PERSON], [PERSON], [PERSON], [PERSON] and [PERSON], 2019. Flood simulation prediction for extreme flood events: a case study of Tirthan River, North West Himalaya. _Himalayan Geology_, 40(2), 128 140.
* [PERSON] et al. (2016) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON].,
Figure 19: Extensive river bed and channel erosion at downstream of Rambara, after June 2013 Kedarnath flood event (Picture by Dr [PERSON], Oct., 2013 flood survey).
Figure 20: Extensive river bed and channel erosion along with landslides at upstream of Rambara, after June 2013 Kedarnath flood event (Picture by Dr [PERSON], Oct., 2013 flood survey).
Figure 17: Hydrological and 1-D HD model setup and simulation results of a few sites in the Beas basin and Upper Ganga river basin.
* [PERSON] et al. (2007) [PERSON], [PERSON], [PERSON], 2007. The Shuttle Radar Topography Mission. _Rev. Geophys._, 45(2), 583-585.
isprs | Significance of Remote Sensing Based Precipitation and Terrain Information for Improved Hydrological and Hydrodynamic Simulation in Parts of Himalayan River Basins | P. K. Thakur, P. R. Dhote, A. Roy, S. P. Aggarwal, B. R. Nikam, V. Garg, A. Chouksey, N. Pokhriyal, M. Jani, V. Chauhan, N. Thakur, V. S. Dogra, G. S. Rao, P. Chauhan, A. S. Kumar | https://doi.org/10.5194/isprs-archives-xliii-b3-2020-911-2020 | 2020 | CC-BY
ON THE DETECTION OF GROSS AND SYSTEMATIC ERRORS IN COMBINED ADJUSTMENT OF TERRESTRIAL AND PHOTOGRAMMETRIC DATA
[PERSON]
National Research Council of Canada
Ottawa, Ontario, Canada K1A 0R6
Commission III
Abstract
A special bundle adjustment program which accepts terrestrial and
photogrammetric data has been developed with self-calibration capability and a
built-in gross-error detector with "data snooping". The program computes the
redundancy numbers as well as the external reliability factor for each
adjusted image point. Using actual and simulated data, in the form of
terrestrial observations between object points, the effect of additional
constraints on the ability of a photogrammetric system to detect gross and
systematic errors has been studied. In the combined adjustment, the detection
of gross errors was improved significantly, particularly in areas where the
intersection of rays is geometrically weak. The detection of systematic
errors did not improve, but their effect on the adjusted object coordinates
(external reliability) was greatly reduced.
Introduction
Simultaneous adjustment of terrestrial and photogrammetric observations has been explored for more than a decade already (e.g., [PERSON] and [PERSON], 1971; [PERSON] et al., 1978; and [PERSON] and [PERSON], 1981). The main purpose of these applications has been to allow a reduction in the number of control points, especially in areas where the available geodetic observations are insufficient for an adjustment of a complete geodetic network of control points for phototriangulation. Instead of using the usually required number of geodetically adjusted control points, only the available control points plus some terrestrial observations, which replace the remaining control points, are entered into a simultaneous adjustment with the photogrammetric measurements.
Another benefit from the combined adjustment, discussed in the present paper,
is an improvement in the ability of the photogrammetric system to detect gross
and systematic errors. The terrestrial observations enforce certain
relationships between the ground coordinates. Points connected by such
observations have less freedom to move. Thus, if an error exists in an image
coordinate it will appear, depending on the type of terrestrial observation,
mainly in the image residual rather than in the ground coordinates, which
means a higher reliability for these points. An earlier study [[PERSON],
1981b] showed that distance observations between points of low reliability,
such as edge points, increase the reliability substantially (redundancy
numbers for x increased from zero to about 0.8) when adjusted simultaneously
with the photogrammetric data. Only two distances at each point are needed.
The study is here expanded to include two types of systematic error: radial
lens distortion and affine film deformation. Also included, in addition to
spatial distances between points, are observed height differences as
terrestrial data. The program GEBAT ([PERSON] and [PERSON] 1981), used in the
following tests, has been extended to compute parameters such as redundancy
numbers and external reliability factors. Three different types of data have
been employed: a simulated block with relatively dense network of points and
regular flight arrangement, a large-scale actual block, and a small close-
range convergent photography block. The bulk of the research has been
performed on the simulated block since it provides more flexibility and
unlimited variation in its parameters. The two actual blocks have only been 152
used to confirm some findings. In all these studies, the effect of different types of error on the image residuals and the adjusted object coordinates has been computed for the case where (a) only photogrammetric data were used and for the case when (b) the combined adjustment was applied. Before presenting the test results some theoretical investigations are presented.
Error Distribution - Theoretical Study
Errors in the observations (vector L) affect both the adjusted unknowns (vector X) and the corrections to the observations, the residuals (vector V). The ratio by which an error affects each of these variables depends largely on the geometry of the system. This error distribution can be computed from the variance-covariance matrices of the adjusted observations and of the residuals.
After the adjustment, the weight-cofactor matrix of the observations can be computed by applying the covariance law on the function
L = F(X)   (1)
as follows:
\(Q_{\hat{L}} = \left[\frac{\partial F}{\partial X}\right] Q_X \left[\frac{\partial F}{\partial X}\right]^T\)   (2)

or

\(Q_{\hat{L}} = A\,N^{-1}A^T\)   (3)

where A is the design matrix and N is the matrix of the normal equations. Partitioning the unknowns into orientation parameters \(X_1\) and object coordinates \(X_2\), equation (3) becomes

\(Q_{\hat{L}} = [A_1\;A_2]\begin{bmatrix} Q_{X_1} & Q_{X_1X_2} \\ Q_{X_2X_1} & Q_{X_2} \end{bmatrix}[A_1\;A_2]^T\)

Here the redundancy number r of an image observation follows from the diagonal of the residual cofactor matrix \(Q_V = Q_L - Q_{\hat{L}}\), while the factor \(e_2\) expresses the fraction of an image error transferred to the adjusted object coordinates.
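As a numerical illustration of these cofactor relations, the following toy least-squares sketch (unit weights; not the GEBAT program itself) computes \(Q_{\hat{L}} = A N^{-1} A^T\), the residual cofactor matrix, and the redundancy numbers, whose sum always equals the total redundancy of the system:

```python
import numpy as np

# Toy least-squares setup: 6 observations of 2 unknowns, unit weights (P = I).
rng = np.random.default_rng(0)
A = rng.normal(size=(6, 2))            # design matrix
N = A.T @ A                            # normal-equation matrix
Q_L_hat = A @ np.linalg.inv(N) @ A.T   # cofactor matrix of adjusted observations
Q_V = np.eye(6) - Q_L_hat              # cofactor matrix of the residuals
r = np.diag(Q_V)                       # redundancy numbers, one per observation

# The redundancy numbers always sum to (observations - unknowns).
print(round(r.sum(), 6))  # -> 4.0
```

Each \(r_i\) lies between 0 and 1; a value near 0 means an error in that observation passes almost entirely into the adjusted coordinates.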
It is of course important to reduce the effect of an image error on the adjusted object coordinates (e2) and to increase its effect on the image residual (r) so that it can be easily detected. This can be achieved by improving the geometry, i.e., increasing the number of intersecting rays at object points. Table 1 gives the values of e2 and r (averaged over all non-control points) for points with different numbers of intersecting rays and for different blocks.

It is clear that improving the geometry, by increasing the number of intersecting rays, leads to the desired increase in r and decrease in e2 (see also figure 1). In fact, the average of e2(x) and e2(y) is always

\(\bar{e}_2 = 1.5/n\)   (9)

where n is the number of intersecting rays. In any block, the average of e2(x) and e2(y) for all the points, each appearing n times in the block, always follows equation (9). This is consistent with the fact that n = 1.5 rays (3 observations for the 3 unknown coordinates) results in zero redundancy, so that the error appears entirely in the adjusted coordinates (average of e2(x) and e2(y) = 1.0).
The above analysis applies when no additional constraints or conditions exist between the object coordinates. Now, is it possible to increase the redundancy number r and decrease the factor e2 through added constraints rather than improving the geometric strength of intersecting rays? This is the objective of the next sections.
Effect of Additional Constraints on Gross-Error Detection
The constraints used in this test are spatial distances and height differences. These are probably the most useful terrestrial data for inclusion in a combined adjustment and also the easiest to acquire in practice. It is expected, as mentioned in the previous section, that the combined adjustment will increase the effect of gross errors on the residuals while their effect on the adjusted object coordinates will decrease. This is demonstrated using combined adjustment with distances only and with distances and height differences together. The redundancy numbers are computed for different cases as shown in tables 2 and 4. An error of 100 um is introduced in each of these cases and its effect on the adjusted object coordinates is computed with and without terrestrial data (tables 3 and 5). Two blocks are used here, the simulated block and the close-range block. All the selected points, distances, and height differences were on the perimeter of the block (figures 2 and 3). This is of course the area where the geometric structure is the weakest, and thus improvement by additional constraints is most needed and more noticeable than anywhere else in the block.
Table 2 displays the changes in r1 for two different blocks and for different combinations of distances, for points with different numbers of intersecting rays. When two or more measured distances originate from a point, the redundancy number increases to the 0.50 to 0.9 range. One distance only does not improve the reliability (case D); likewise, if the distance is in the x direction, the increase in r1 is small (case B).
Table 3 shows the effect of a 100 um image error, for the cases of table 2, on the adjusted object coordinates, without and with distances. Except for case D (one distance only), the effect on the adjusted object coordinates is reduced substantially when distances are used. In cases E to H, the object coordinates are almost unaffected by the error. In cases A and B, where the distances are in the X-direction, the improvement is mainly in X, with moderate improvement in Y, and little or no improvement in Z. These two cases are repeated in the next test, where height differences and distances are used in the combined adjustment. Table 4 shows the effect of the combined adjustment on the redundancy numbers. There is an additional improvement in r(x) (about 25%) and no change in r(y). However, the improvement in the effect on the object coordinates is substantial, especially when the error is in the x coordinate (case A). In this case the object coordinates are almost unchanged by the error. When the error is in y (case B), the resulting error in Z is almost eliminated, while the errors in X and Y are reduced slightly.
It is now clear that the combined photogrammetric and terrestrial adjustment has a great advantage in improving the reliability both internal and external. All that is needed is the measurement of distances between points (two distances to each point) on the perimeter of the block where the reliability is originally the lowest. Height differences are not needed for cases where the ratio between variation in terrain elevations and camera station height is large enough to cause correlation between planimetric and height coordinates such as in close range photogrammetry. However, in cases of nearly flat terrain, height differences will help at least in improving the external reliability.
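The mechanism can be sketched with a minimal numerical example (illustrative, with unit weights; not the actual block geometry): two object coordinates, each fixed by a single observation, have zero redundancy, and adding one linearized "distance" observation between them raises every redundancy number above zero.

```python
import numpy as np

def redundancy(A):
    """Redundancy numbers r_i = diag(I - A N^-1 A^T) for unit weights."""
    Q_L_hat = A @ np.linalg.inv(A.T @ A) @ A.T
    return np.diag(np.eye(len(A)) - Q_L_hat)

# Two coordinates, each observed once: r = 0, a gross error is undetectable.
A_photo = np.eye(2)

# One added terrestrial observation, linearized as the difference x2 - x1:
# every redundancy number rises from 0 to 1/3.
A_combined = np.vstack([A_photo, [-1.0, 1.0]])

print(redundancy(A_photo))     # all zeros
print(redundancy(A_combined))  # all equal to 1/3
```

With the constraint in place, part of any gross error is forced into the residuals, where data snooping can flag it.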
Effect of Additional Constraints on Some Systematic Errors
Since systematic errors are much smaller than most gross errors and affect all the points in the block, it is expected that the influence of the combined adjustment will be very different on the two types of error. In the case of systematic errors it will probably depend more on the source of the error and on the distribution of the terrestrial observations. Since many factors need to be studied here, only the simulated block is used in the following tests.
(a) Image Coordinates Contain Radial Lens Distortion:
Lens distortion data generated from the Wild Aviogon lens distortion curve have been added to the simulated image coordinates. The following parameters are studied:
a. type of terrestrial observation
b. number and distribution of terrestrial observations
c. number of control points
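The kind of perturbation added to the simulated coordinates can be sketched as a radial distortion polynomial; the coefficients below are illustrative placeholders, not the actual Wild Aviogon calibration values.

```python
import numpy as np

def add_radial_distortion(xy, k1, k2=0.0):
    """Perturb image coordinates (mm) with a radial distortion polynomial:
    x' = x * (1 + k1*r^2 + k2*r^4), and likewise for y."""
    r2 = np.sum(xy**2, axis=1, keepdims=True)
    return xy * (1.0 + k1 * r2 + k2 * r2**2)

pts = np.array([[0.0, 0.0], [50.0, 0.0], [0.0, 100.0]])  # image points in mm
distorted = add_radial_distortion(pts, k1=1e-6)

# The point at 50 mm radius is shifted outward by k1 * r^3 = 0.125 mm;
# the principal point is unchanged.
print(round(float(distorted[1, 0]), 6))  # -> 50.125
```

Because the shift grows with the cube of the radial distance, points near the image edges are affected the most.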
Various tests have been carried out with the results displayed in table 6 (tests 1-8). The different distance distributions are shown in figure 4. The height differences are at the perimeter of the block. Control point distributions are also shown in that figure. Analyzing these tests, the following comments can be made:
1. The overall effect on the residuals is negligible. The standard error of unit weight has not changed while the residuals at individual points have changed slightly up or down.
2. When no terrestrial data have existed, the control point distribution is critical (compare object coordinate error in tests 1 and 2) while additional control points do not improve the results significantly in the case of combined adjustment (compare cases 3 and 4). Comparing test 1, where 20 planimetric and 34 vertical control points have been employed without additional constraints, with test 8, where 8 planimetric and 14 vertical control points have been used with terrestrial observations, it is clear that the terrestrial data not only replace many of the control points but also improve the accuracy.
3. The optimum distance distribution is 28 perimeter distances (test 6). These distances do not form a closed polygon around the block as in test 3, but have a few gaps, which have not affected the accuracy but, on the other hand, have reduced the measurement effort. Using 60 distances as shown in figure 4 does not change the results.
4. The accuracy in Z does not change significantly until height differences are introduced (test 8). This is probably because the elevation differences compared to the flying height is small (nearly flat terrain).
Table 6 shows the overall accuracy of the different tests, and it may be useful to look at what happens at the individual object points. The points included in table 7 and shown in figure 5 are selected as examples of points with constraints in the block. Examining table 7 and figure 5 and comparing tests 2 and 3, it is obvious that the error along the distance direction has been removed. For points 68 and 82 the distances are in the Y-direction, while for points 138, 149 and 165 they are in the X-direction. The improvement in the perpendicular direction or in the Z-direction is smaller. When height differences are added to the adjustment, the error in Z has almost disappeared. Some increase in the errors has taken place in the perpendicular direction, but it is too small to be corrected by the distances.
(b) Image Coordinates Contain Affine Film Deformation
The affine film deformation, introduced into the image coordinates of the simulated block, produces a very different error pattern in both the residuals and the object coordinates (table 6, tests 9 to 14) from that produced by radial lens distortion. The additional constraints have not improved the results at all. The main reason is that this type of systematic error does not produce significant errors along the coordinate axis that is nearly parallel to the distance directions or in Z. Most of the errors in the object coordinates are in the perpendicular direction where distances have little effect for this size of error. This is clear from table 8, where most of the error in points 68 or 82 is in X (distances are in Y direction, see figure 6) and in Y-direction for points 138, 149 and 165 (distances are in X- direction).
The overall size of image residuals is very small (less than 1 um), and the additional constraints have little effect on them.
Concluding Remarks
The effectiveness of the combined adjustment as a tool for error detection depends on the following two factors:
1. Error size. Large errors are very effectively detected by the combined adjustment. Points with originally low or no reliability can have a redundancy number of 0.7 or more when two or more distances are measured to these points. Systematic errors, due to their small size, could not be detected any better from the residuals using the combined adjustment. However, their effect on the adjusted object coordinates (external reliability) has, in most cases, been reduced significantly, and thus the overall accuracy of the adjusted coordinates has increased.
2. Error direction. As a rule, terrestrial observations are very effective in eliminating the effect of image errors on the adjusted coordinates in the direction of the observations. If the observations are distances in X-direction, for example, then about 90% of the error in this direction is eliminated compared to only 10-35% in the Y-coordinate. The use of height differences eliminates virtually all errors in Z.
Although more detailed studies, using other types of terrestrial observations in further configurations, are still needed, it is safe to say that having such observations in the areas where the intersection of rays is geometrically weak can significantly improve the detection of gross errors and the external reliability of blocks containing systematic errors.
References
[PERSON] (1981a), "An Evaluation of the Different Criteria to Express Photogrammetric Accuracy", Proceedings of ASP Fall Technical Meeting, San Francisco, Sept. 9-11, pp. 292-298.
[PERSON] (1981b), "A Practical Study of Gross-Error Detection in Bundle Adjustment", The Canadian Surveyor, Vol. 35, December, pp. 373-386.
[PERSON] and [PERSON] (1981), "A Combined Adjustment of Geodetic and Photogrammetric Observations", Photogrammetric Engineering and Remote Sensing, Vol. 47, No. 1, January, pp. 93-99.
[PERSON] et al. (1978), "Bridging with Independent Horizontal Control", Photogrammetric Engineering and Remote Sensing, Vol. 44, No. 6, June, pp. 668-695.
[PERSON] and [PERSON] (1971), "Aerotriangulation by SAPGO", Photogrammetric Engineering, Vol. 38, No. 8.
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|} \hline Case & No. of rays & No. of distances & No. of height diff. & Original \(r_1\) & \(r_1\) (distances only) & \(r_1\) (distances + height diff.) \\ \hline A & 3 & 2 & 2 & 0.18 (x) & 0.53 & 0.66 \\ B & 3 & 2 & 2 & 0.43 (y) & 0.50 & 0.50 \\ \hline \end{tabular}
\end{table}
Table 4: Effect of Distances and Height Differences on Redundancy Numbers (Simulated Block)
\begin{table}
\begin{tabular}{|c|c c c|c c c|c c c|} \hline & \multicolumn{3}{c|}{PHOTO ONLY} & \multicolumn{3}{c|}{WITH DISTANCES} & \multicolumn{3}{c|}{WITH DISTANCES AND HEIGHT DIFF.} \\ CASE & X & Y & Z & X & Y & Z & X & Y & Z \\ \hline A & 162 & 170 & 296 & 14 & 111 & 256 & 1 & 1 & 2 \\ B & 92 & 134 & 41 & -12 & 116 & 49 & -10 & 95 & 2 \\ \hline \end{tabular}
\end{table}
Table 5: Effect of a 100 um image error on adjusted object coordinates (mm) (simulated block)
\begin{tabular}{|c|c c c|c c c|c c c|} \hline & \multicolumn{3}{c|}{TEST \#2} & \multicolumn{3}{c|}{TEST \#3} & \multicolumn{3}{c|}{TEST \#8} \\ POINT \# & X & Y & Z & X & Y & Z & X & Y & Z \\ \hline 68* & 31 & 34 & 0 & 10 & 1 & 0 & 7 & 1 & 4 \\ 82* & 48 & 21 & 0 & & & & & & \\ \hline \end{tabular}
Figure 1: Average Values of r and \(e_2\)
Figure 2: Some Erroneous Points, Simulated Block
Figure 3: Some Erroneous Points, Close-Range Block
Figure 4: Distribution of Distances and Control Points in Simulated Block
Figure 5: Improvement Component Along Distance Direction (Radial Lens Distortion)
Figure 6: Improvement Component Along Distance Direction (Affine Film Deformation)
isprs | Introduction | Elena Faur, Ciprian Speranza | https://doi.org/10.55245/energeia.2025.01 | 2025 | CC-BY
|
# GLCM features for learning flooded vegetation from Sentinel-1 and Sentinel-2

[PERSON], [PERSON], [PERSON]

Footnote 1: Corresponding author
###### Abstract
Efforts on flood mapping from active and passive satellite Earth Observation sensors have increased in the last decade, especially due to the availability of free datasets from the European Space Agency's Sentinel-1 and Sentinel-2 platforms. The regular data acquisition scheme also allows observing areas prone to natural hazards at a small temporal interval (within a week). Thus, before- and after-event datasets are often available for detecting surface changes caused by flooding. This study investigates the contribution of textural variables to the predictive performance of a data-driven machine learning algorithm for detecting the effects of the flooding caused by the Sardoba Dam break in Uzbekistan. In addition to the spectral channels of Sentinel-2 and the polarization bands of Sentinel-1, two spectral indices (the normalized difference vegetation index and the modified normalized difference water index) and textural features of the gray-level co-occurrence matrix (GLCM) were used with the Random Forest. Due to the high dimensionality of the input variables, principal component (PC) analysis was applied to the GLCM features and only the most significant PCs were used for modeling. The feature stacks used for learning were derived from both pre- and post-event Sentinel-1 and Sentinel-2 images. The models were validated through model test measures and external reference data obtained from PlanetScope imagery. The results show that the GLCM features improve the classification of flooded areas (from 82% to 93%) and flooded vegetation (from 17% to 78%) in terms of user's accuracy. As an outcome of the study, the use of textural features is recommended for the accurate mapping of flooded areas and flooded vegetation.
Footnote 2: Hacettepe University, Department of Geomatics Engineering, 06800 Beytepe Ankara, Turkey - [EMAIL_ADDRESS]
## 1 Introduction
Flood events, the frequency and severity of which are increasing because of urbanization and population growth, cause devastating effects on society, economy and ecosystems worldwide (EMDAT, 2022). It is essential to produce reliable spatial and temporal information on the extent of the flood in order to mitigate their impacts and to plan the disaster management, emergency response and insurance processes effectively. In this context, the potential of widely used Earth Observation (EO) datasets and various mapping approaches in identifying smooth open water bodies and flooded areas has been proven in many studies (e.g., see [PERSON] et al., 2018, 2019, 2020, 2022; [PERSON] et al., 2022; [PERSON] et al., 2021).
In most flooding hazard events in rural areas, inundated vegetation accounts for more than three-quarters of the total flooded area. In this context, many approaches that rely on backscattering intensity have been used in the literature for the determination of inundated vegetation. [PERSON] et al. (2014) evaluated multitemporal TerraSAR-X horizontal-horizontal (HH) and vertical-vertical (VV) polarizations and a polarimetric parameter, the Shannon entropy (SE), using support vector machine (SVM), K-nearest neighbors and decision tree (DT) algorithms to identify vegetation types under the flood event. The classification results reached a highest kappa index of 0.85, and the contribution of the polarimetric parameter was compared with that of the HH, VV or combined (HH and VV) backscatter parameters. Accordingly, it was emphasized that the use of polarimetric parameters contributed to the determination of inundated vegetation. [PERSON] et al. (2016) evaluated the potential of Sentinel-1 VV and HV polarizations, using a backscatter thresholding algorithm, to detect open water, flooded vegetation and non-flooded grassland. In that study, open water was successfully detected, but flooded grasslands were detected with poor accuracy because of the fine-grained grass/crop patterns.
In approaches based on backscatter analysis, polarimetric synthetic aperture radar (PolSAR) and interferometric SAR (InSAR) coherence are preferred to minimize the confusion of inundated vegetation with urban areas and of shadow areas with open water ([PERSON] et al., 2013; [PERSON] et al., 2014; [PERSON] et al., 2015; [PERSON] et al., 2017; [PERSON] et al., 2022). [PERSON] et al. (2017) presented a procedure specifically focusing on the identification of inundated vegetation based on C-band Sentinel-1 and L-band ALOS-2/PALSAR-2 data. As a result of the proposed procedure involving polarimetric decomposition, it was emphasized that the C-band data (Sentinel-1) is suitable for smooth water detection, while the L-band (ALOS-2/PALSAR-2) data provides detailed information about flooded vegetation. [PERSON] et al. (2019) analyzed RADARSAT HH and HV polarizations to map flooded vegetation, noting that these polarizations were less effective for this purpose due to the increase of backscatter intensity and phase shift from double-bounce scattering. The polarization ratio (HH/HV), Shannon entropy and m-chi decomposition provided a good discrimination between flooded vegetation and the other classes.
In summary, while decomposition methods such as Sinclair, Freeman-Durden, Yamaguchi, H-\(\alpha\), etc. provide a higher discrimination ability based on the backscattering characteristics of objects, coherence data evaluated on the basis of seasonality allows the determination of flooded vegetation. However, there are limitations to the application of these methods in terms of the availability of full-polarimetric data, area coverage, and temporal and geometric resolution ([PERSON] et al., 2019).
In recent years, the studies in the literature aiming to determine the flooded vegetation have focused on the use of the complementary potential of SAR and optical data together. In this context, Sentinel-1 SAR and Sentinel-2 optical datasets are preferred due to their temporal and spatial resolutions. Studies using only Sentinel-2 ([PERSON] et al., 2018) and jointly using Sentinel-1&2 ([PERSON] et al., 2017; [PERSON] et al., 2023) have demonstrated benefits of both datasets for flood mapping. In addition, there are many studies investigating the potential of multi-temporal Sentinel-1 ([PERSON] et al., 2017; [PERSON] and [PERSON], 2018; [PERSON] et al., 2018) and Sentinel-2 ([PERSON] et al., 2019) datasets.
A recent study by [PERSON] et al. (2022) comprehensively evaluated the accurate flood mapping potential of Sentinel-1 and Sentinel-2 datasets by comparing different data availability scenarios, such as only pre-event Sentinel-2 together with pre- and post-event Sentinel-1, or the use of only Sentinel-1, etc. The study area was in the Sirdaryo region of Uzbekistan, in which a dam break occurred on May 1, 2020 and caused flooding over a large region known for its high agricultural activity. The results obtained from the random forest (RF) classification revealed that the highest accuracy could be obtained by using both the pre- and post-event Sentinel-1&2 data and a set of hand-crafted features, such as spectral indices and textural variables.
This study aimed at providing an in-depth analysis of the contribution of textural features to classification accuracy explicitly. For this purpose, we produced gray-level co-occurrence matrix (GLCM) textural features and assessed their capability for learning inundated vegetation from Sentinel-1 and Sentinel-2 imagery. In this context, a multi-temporal feature space was created by generating pre- and post-event GLCM textures and various spectral indices. The feature spaces with and without GLCM variables were then used for modeling with the RF classifier. The results were validated using information obtained from a PlanetScope orthoimage with 3 m spatial resolution. The data, methods and results are presented and discussed below.
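To make the texture features concrete, a minimal construction of a co-occurrence matrix and one Haralick feature (contrast) might look as follows; the 4x4 toy image and the single pixel offset are illustrative only, not the actual processing chain used in the study.

```python
import numpy as np

def glcm(img, dx=1, dy=0, levels=4):
    """Gray-level co-occurrence matrix for one pixel offset (dx, dy),
    normalized to joint probabilities."""
    g = np.zeros((levels, levels))
    h, w = img.shape
    for y in range(h - dy):
        for x in range(w - dx):
            g[img[y, x], img[y + dy, x + dx]] += 1
    return g / g.sum()

def contrast(p):
    """Haralick contrast: sum over (i, j) of p(i, j) * (i - j)^2."""
    i, j = np.indices(p.shape)
    return float(np.sum(p * (i - j) ** 2))

img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [2, 2, 3, 3],
                [2, 2, 3, 3]])
print(round(contrast(glcm(img)), 4))  # -> 0.3333
```

The other Haralick features (homogeneity, energy, entropy, etc.) are computed from the same normalized matrix with different weighting functions.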
## 2 Materials and Methods
Under the following sub-headings, the study area, the data and the methodology are described in detail.
### Study Area and Datasets
The Sardoba Dam was built between 2010 and 2017 on the Syr Darya River in Uzbekistan. Completed in 2017, the dam reservoir was designed to hold ~922 million m\({}^{3}\) of water to irrigate the fertile farmland around the region, where crops such as cotton and wheat are usually produced ([PERSON], 2020). On May 1, 2020, the region was flooded due to a breach in the wall of the dam. The flood waters advanced into the borders of Kazakhstan and caused destruction in a wide area consisting of settlements and fertile crop lands.
The location of the study area and the corresponding land use land cover (LULC) map are illustrated in Figure 1. The study area spans around 2009 km\({}^{2}\) and, as per the ESA WorldCover product, comprises 69.8% cropland, 16.6% bare/sparse vegetation, 6.2% urban area, 4.9% grassland, and 2.1% permanent water bodies (ESA-WorldCover, 2020).
In this study, we utilized Sentinel-1 and Sentinel-2 datasets provided by the ESA Copernicus Programme (Copernicus, 2020). Table 1 summarizes the data properties and the ground conditions at the time of data acquisition (pre- or post-event). The selected Sentinel-1 and Sentinel-2 data accurately represented the pre- and post-flood conditions. Figure 2 displays the pre- and post-flood Sentinel-2 RGB images acquired on April 24 and May 04, 2020, respectively, together with the Sentinel-1 VV polarization images. Additionally, validation datasets were generated using PlanetScope orthoimages with 3 m spatial resolution. For this purpose, we manually delineated reference polygons for each class from PlanetScope data acquired on May 10, 2020, through the Planet Explorer platform (www.planet.com).
\begin{table}
\begin{tabular}{|l|l|l|l|} \hline & **Acquisition Date** & **Condition** & **Usage** \\ \hline **Sentinel-1** & 29/04/2020 & Pre-event & Feature generation \& Classification \\ & 25/05/2020 & Post-event & \\ \hline **Sentinel-2** & 24/04/2020 & Pre-event & Feature generation \& Classification \\ & 04/05/2020 & Post-event & \\ \hline **PlanetScope orthoimage** & 10/05/2020 & Post-event & External reference for validation \\ \hline \end{tabular}
\end{table}
Table 1: Basic specifications of the datasets used in the study.
Figure 1: The study site location (above) and the LULC map obtained from the ESA WorldCover (below).
### Methodology
The methodology of this study consists of three basic stages: _(i)_ data pre-processing and feature extraction, _(ii)_ feature selection, and _(iii)_ modeling, mapping and validation, as depicted in Figure 3. In the first stage, several pre-processing methods, such as noise filtering and the removal of systematic terrain-induced errors, were applied to the Sentinel-1 data, and the lower-resolution Sentinel-2 band (B11) was upsampled. The normalized difference vegetation index (NDVI) and modified normalized difference water index (MNDWI) spectral indices were produced from the pre- and post-event Sentinel-2 data. In addition, a total of 10 GLCM texture variables introduced by [PERSON] et al. (1973) were computed for each of the pre-event and post-event Sentinel-1 and Sentinel-2 bands.
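As a sketch of the spectral-index step, NDVI and MNDWI can be computed per pixel from the standard Sentinel-2 band combinations (NIR/red for NDVI, green/SWIR for MNDWI); the reflectance values below are illustrative, not taken from the study's scenes:

```python
import numpy as np

def ndvi(b8, b4):
    """NDVI from Sentinel-2 NIR (B8) and red (B4) reflectances."""
    b8, b4 = np.asarray(b8, float), np.asarray(b4, float)
    return (b8 - b4) / (b8 + b4 + 1e-10)  # epsilon avoids division by zero

def mndwi(b3, b11):
    """MNDWI from Sentinel-2 green (B3) and SWIR (B11) reflectances."""
    b3, b11 = np.asarray(b3, float), np.asarray(b11, float)
    return (b3 - b11) / (b3 + b11 + 1e-10)

# Toy 2x2 reflectance patches: column 0 mimics vegetation, column 1 bare soil.
b4 = np.array([[0.05, 0.30], [0.05, 0.30]])   # red
b8 = np.array([[0.45, 0.08], [0.45, 0.08]])   # NIR
print(ndvi(b8, b4))
```

High positive NDVI flags vegetated pixels and high MNDWI flags open water; computing both indices for the pre- and post-event scenes is what lets their change carry the flood signal.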
In the second stage, Principal Component Analysis (PCA) was applied to the GLCM variables produced for the pre- and post-event Sentinel-1 and Sentinel-2 data in order to reduce the dimensionality, as there were 140 of them in total. The 12 GLCM principal components (GLCM PCs) obtained from the analysis were used as additional information alongside the original Sentinel-1 (S1) and Sentinel-2 (S2) bands. This dataset (Stack-1), involving the GLCM PCs produced in the study by [PERSON] et al. (2022), was used for modelling with the RF classifier, and the results were validated with a test dataset (546,052 reference samples) produced from an external reference (PlanetScope orthoimages). The bands included in Stack-1 are the pre- and post-event S1 VV, S1 VH, S1 GLCM PCs, 5 bands of S2 (B2, B3, B4, B8, B11), S2 GLCM PCs, NDVI and MNDWI.
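The dimensionality-reduction step can be sketched with scikit-learn as below; the random array merely stands in for the 140 per-pixel GLCM variables, and keeping 12 components mirrors the number of GLCM PCs used in the study:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
n_pixels, n_glcm = 5000, 140          # stand-in for the 140 GLCM texture variables
glcm = rng.normal(size=(n_pixels, n_glcm))
glcm[:, 1:] += glcm[:, [0]]           # inject correlation so a few PCs dominate

pca = PCA(n_components=12)            # retain 12 principal components
glcm_pcs = pca.fit_transform(glcm)

print(glcm_pcs.shape)                            # (5000, 12)
print(pca.explained_variance_ratio_.sum())       # share of variance retained
```

The 12 PCs would then be stacked with the original S1/S2 bands and indices to form Stack-1.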
In order to assess the contribution of the GLCM feature components (GLCM PCs) to the prediction of flooded areas and flooded vegetation, these features were removed from the Stack-1 dataset to obtain Stack-2, which consists of the pre- and post-event S1 VV, S1 VH, 5 bands of S2 (B2, B3, B4, B8, B11), NDVI and MNDWI. The training data was manually delineated on the S2 RGB imagery for Stack-1 and was also used for the learning process with Stack-2.
In the third stage, the RF method proposed by [PERSON] (2001), which is based on decision trees, was used for learning from the data prepared in the previous stage. A total of seven LULC classes, namely flooded vegetation (FV), flooded area (FL), bare land (BL), permanent water (PW), urban area (Ur), vegetation 1 (V1) and vegetation 2 (V2), were identified from the post-event Sentinel-2 images. For this, a total of 13,539 training samples manually delineated from the post-event Sentinel-2 imagery were used with a forest of 300 trees and 3-fold cross-validation. Previous studies carried out by [PERSON] et al. (2018, 2020, 2022) have shown that applying a holistic LULC classification, instead of a binary flooded/non-flooded approach, increases the accuracy and reliability of flood extent maps. Thus, the seven classes mentioned above were used in the modeling with Stack-1 and Stack-2. The results were tested using pixels inside the test polygons identified on the external PlanetScope reference imagery for both feature stacks.
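A minimal sketch of the classifier setup with scikit-learn, using 300 trees and 3-fold cross-validation as in the study; the synthetic seven-class dataset stands in for the actual feature stacks and training polygons:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for the per-pixel features and the seven LULC labels.
X, y = make_classification(n_samples=2000, n_features=20, n_informative=10,
                           n_classes=7, n_clusters_per_class=1, random_state=42)

rf = RandomForestClassifier(n_estimators=300, random_state=42)  # 300 trees
scores = cross_val_score(rf, X, y, cv=3)                        # 3-fold CV
print(scores)
```

After cross-validation, the forest would be refit on all training samples and applied to every pixel of Stack-1 or Stack-2 to produce the maps.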
## 3 Results and Discussions
Figure 4 (a and b) shows the classification results for the seven classes described in the previous section; the distribution of the training polygons can also be seen in the figure. Based on visual inspection, the flood maps obtained from Stack-1 and Stack-2 exhibit differences especially in the FL, Ur and V1 classes, and even more so in the FV class, while no significant change was observed in the PW and V2 classes. Detailed views of the maps, focusing on the FL and FV classes, are given in Figure 5 together with the test polygons delineated on the PlanetScope orthoimages. The extent of the maps shown in Figure 5 is marked with a dashed black rectangle in Figure 4. Tables 2 and 3 present the validation results as confusion matrices between the different classes for the Stack-1 and Stack-2 predictions, respectively; in the tables, the rows and columns give the numbers of actual (PlanetScope) and predicted class samples, respectively. An overall accuracy (OA) of 86% and a Kappa (K) of 83% were obtained with Stack-1, while 68% OA and 60% K were obtained with Stack-2. In addition, Table 4 provides further accuracy measures for both stacks, such as the producer's accuracy (PA, omission error) and the user's accuracy (UA, commission error).

Figure 3: Overall methodology of the study.

Figure 2: Satellite images of the study site obtained from Sentinel-2 and the VV polarization data of Sentinel-1.
As can be seen from Tables 2 and 3, with the use of GLCM the OA value increased from 68% to 86%, and the K value from 60% to 83%. According to Table 4, with the use of GLCM the PA of the FL class increased from 42% to 79%, while its UA increased from 82% to 93%. Based on the bold-marked values in Table 4, it can be concluded that the utilization of GLCM had a great impact on the learning of the FL, FV, Ur, and V1 classes in this study.
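The OA and Kappa figures quoted here can be derived directly from a confusion matrix; the small two-class matrix below is illustrative only:

```python
import numpy as np

def oa_and_kappa(cm):
    """Overall accuracy and Cohen's kappa from a confusion matrix
    (rows = reference samples, columns = predicted samples)."""
    cm = np.asarray(cm, float)
    n = cm.sum()
    po = np.trace(cm) / n                                  # observed agreement = OA
    pe = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n**2    # chance agreement
    return po, (po - pe) / (1 - pe)

oa, kappa = oa_and_kappa([[90, 10],
                          [20, 80]])
print(round(oa, 2), round(kappa, 2))  # 0.85 0.7
```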
As seen in both the classification results and the error matrices, the use of GLCM data significantly increases the classification accuracy by preventing confusion between the FV and FL classes. When GLCM is not used, more than three quarters of the FV class is mislabelled as FL, whereas no mixing occurs between FV and the other classes. In other words, the missing textural variables resulted in the misclassification of flooded areas as urban and vegetation. It is evident that inundated vegetation shows a different scattering mechanism in radar data compared to the other classes. This particular finding highlights the difference in the texture properties of inundated vegetation compared to floods, open water, and the other agricultural areas in the region.
Although the use of GLCM PCs increased the PA of the urban areas from 60% to 77%, the UA decreased from 89% to 78%, since Ur class pixels were labeled as FL. Therefore, texture information appears to cause complexity during the learning of the Ur and FL classes. In addition, the comparison of both sets of results showed that the PA of the PW class decreased from 85% (Stack-2) to 79% (Stack-1) when GLCM was used; accordingly, the texture information caused more mixing between the PW and FL classes. This is likely because texture features arising on otherwise smooth water surfaces from wave and/or wind effects were confused with the PW class.
## 4 Conclusions and Future Work
In the present study, the contribution of GLCM textural features to flood extent mapping, including flooded vegetation, was evaluated with the RF classifier applied to a learning set derived from various Sentinel-1 polarizations and Sentinel-2 spectral bands. The study area, located in the Syrdarya region of Uzbekistan, was affected by a dam-break flood and comprises about 70% cropland. Among several other factors, the site was selected because cloud-free Sentinel-2 images representing the post-event status were available, and because the topography is rather flat, so radar geometric distortions such as shadow can be neglected. Two sets of learning variables were produced, one containing GLCM textural information in the form of principal components and the other without GLCM textures. A LULC classification with a total of seven classes was performed. The results were assessed using an external reference obtained from PlanetScope orthoimages with 3 m spatial resolution.
| Metric (Stack-1 / Stack-2) | BL | FL | FV | PW | Ur | V1 | V2 |
|---|---|---|---|---|---|---|---|
| PA (%) | 85/94 | **79/42** | 96/99 | 79/85 | **77/60** | 94/92 | 82/85 |
| UA (%) | 44/40 | **93/82** | **78/17** | 100/99 | 78/89 | **94/86** | 83/83 |

Table 4: PA and UA accuracy metrics obtained from the RF classifications with Stack-1 and Stack-2.
Figure 5: Detailed views from the FV and FL polygons as a part of reference data in (a) Sentinel-2 post-event RGB image, (b) flood map with GLCM features (Stack-1), (c) flood map without GLCM features (Stack-2).
The results showed that the use of GLCM PCs greatly increased the overall classification accuracy (OA = 86% with GLCM versus OA = 68% without GLCM) based on the external reference. The UA of the flooded vegetation class exhibited the highest improvement, from 17% without GLCM to 78% with GLCM. The classification accuracy of the flooded areas also improved, with the PA rising from 42% to 79%. Thus, the use of textural features is highly recommended for detecting both flooded areas and flooded vegetation.
On the other hand, the use of texture data led to the misclassification of surfaces without texture, such as open water. Further strategies to reduce this effect can be integrated in future work.
## Acknowledgements
This study is part of the Ph.D. thesis research of [PERSON].
## References
* [PERSON] et al., 2014. Multitemporal classification of TerraSAR-X data for wetland vegetation mapping. Journal of Applied Remote Sensing, 8(1), 083648.
* [PERSON] et al., 2022. Deep learning methods for flood mapping: a review of existing applications and future research directions. Hydrology and Earth System Sciences, 26(16), 4345-4378.
* [PERSON] et al., 2018. Monitoring environmental supporting conditions of a raised bog using remote sensing techniques. Proc. IAHS, 380, 9-15.
* [PERSON], 2001. Random forests. Machine Learning, 45(1), 5-32.
* [PERSON] et al., 2013. Compact polarimetry assessment for rice and wetland mapping. International Journal of Remote Sensing, 34(6), 1949-1964.
* [PERSON] et al., 2019. Evaluation of C-band SAR for identification of flooded vegetation in emergency response products. Canadian Journal of Remote Sensing, 45(1), 73-87.
* [PERSON] et al., 2016. Mapping and characterization of hydrological dynamics in a coastal marsh using high temporal resolution Sentinel-1A images. Remote Sensing, 8(5).
* [PERSON] et al., 2021. A repeatable change detection approach to map extreme storm-related damages caused by intense surface runoff based on optical and SAR remote sensing: evidence from three case studies in the South of France. ISPRS Journal of Photogrammetry and Remote Sensing, 182, 153-175.
* [PERSON] et al., 2017. Coprobital Sentinel 1 and 2 for LULC mapping with emphasis on wetlands in a Mediterranean setting based on machine learning. Remote Sensing, 9, 1259.
* [PERSON] et al., 2015. Change detection with compact polarimetric SAR for monitoring wetlands. Canadian Journal of Remote Sensing, 41(4), 408-417.
* [PERSON] et al., 2014. Detecting emergence, growth, and senescence of wetland vegetation with polarimetric synthetic aperture radar (SAR) data. Water, 6(6), 694-722.
* [PERSON] and [PERSON], 1973. Textural features for image classification. IEEE Transactions on Systems, Man, and Cybernetics, 6(6), 610-621.
* [PERSON] et al., 2017. Automated extraction of inland surface water extent from Sentinel-1 data. International Geoscience and Remote Sensing Symposium (IGARSS), 2259-2262.
* [PERSON] et al., 2019. A highly automated algorithm for wetland detection using multi-temporal optical satellite data. Remote Sensing of Environment, 224, 333-351.
* [PERSON] and [PERSON], 2018. Wetland mapping using SAR data from the Sentinel-1A and TanDEM-X missions: a comparative study in the Biebrza Floodplain (Poland). Remote Sensing, 10(7), 78.
* [PERSON] and [PERSON], 2023. Multi-temporal Sentinel-1 SAR and Sentinel-2 MSI data for flood mapping and damage assessment in Mozambique. ISPRS International Journal of Geo-Information, 12(5), 53.
* [PERSON] et al., 2017. Combining polarimetric Sentinel-1 and ALOS-2/PALSAR-2 imagery for mapping of flooded vegetation. 2017 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), 5705-5708.
* [PERSON], 2020. Uzbekistan dam collapse was a disaster waiting to happen. Online article.
* [PERSON] et al., 2018. Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci., 42(5), 575-581. https://doi.org/10.5194/isprs-archives-XLII-5-575-2018
* [PERSON] et al., 2019. Flood mapping using Sentinel-1 SAR data: a case study of the Ordu 8 August 2018 flood. International Journal of Environment and Geoinformatics, 6(3), 333-337. https://doi.org/10.30897/ijegeo.666212
* [PERSON] et al., 2020. A fusion approach for flood mapping using Sentinel-1 and Sentinel-2 datasets. ISPRS Virtual Congress 2020. https://doi.org/10.5194/isprs-archives-XLIII-B3-2020-641-2020
* [PERSON] et al., 2022. Flood damage assessment with Sentinel-1 and Sentinel-2 data after Sardoba dam break with GLCM features and Random Forest method. Science of the Total Environment, 151585. https://doi.org/10.1016/j.scitotenv.2021.151585
* EM-DAT: The International Disaster-Emergency Events Database. Disasters Year in Review 2022. https://www.emdat.be/publications (accessed on 12 April 2022).
* [PERSON] et al., 2018. Detection of temporary flooded vegetation using Sentinel-1 time series data. Remote Sensing, 10, 1286. https://doi.org/10.3390/rs10081286
* [PERSON] et al., 2019. Flood monitoring in vegetated areas using multitemporal Sentinel-1 data: impact of time series features. Water, 11(9), 1938. https://doi.org/10.3390/w11091938
---

Tavus, B., Kocaman, S., 2023. GLCM Features for Learning Flooded Vegetation from Sentinel-1 and Sentinel-2 Data. https://doi.org/10.5194/isprs-archives-xlviii-m-1-2023-601-2023. Licence: CC-BY.

---
# Extracting Precise and Affordable Dems Despite of the Clouds.
Ajax: The Joining of Radar and Optical Strengths
[PERSON]
1 IGN Espace, 6 avenue de l'Europe, 31520 Ramonville Saint Agne, France. [PERSON]S]
[PERSON]
2 ASTRIUM ASV GEO, 2600 Route des Cretes, 06905 Sophia Antipolis Cedex, France. [PERSON]]
[PERSON]
3 ASTRIUM ASV GEO, 5 ne des Satellites, BP14359, 31030 Toulouse, France. [PERSON]SS]
[PERSON]
4 IGN Espace, 6 avenue de l'Europe, 31520 Ramonville Saint Agne, France. [PERSON]S]
ASTRIUM ASV GEO, 2600 Route des Cretes, 06905 Sophia Antipolis Cedex, France. [PERSON].
agricultural areas, for radar. Scientific literature offers plenty of brilliant papers about principles and methods to extract DEMs from radar and optical pairs.
### DEM extraction from Optical data
DEM extraction commonly uses two optical satellite images, i.e., a stereopair, acquired from more or less symmetric incidence angles, and can strongly benefit from the addition of a third image, preferably under near-vertical incidence, to better render steep areas and deep valleys ("tri-stereo mode").
As is well known, clouds hinder image collection by optical sensors such as SPOT 5/HRS. The task requires careful monitoring and patient re-tasking over reluctant (i.e., cloudy) areas. And sometimes patience itself is not enough: after 11 years of continuous attempts (2002-2012), some areas of the Equatorial belt remain unfeasible, from a DEM-extraction point of view, due to persistent cloud cover. The example of French Guiana (84,000 km\({}^{2}\)) is self-explanatory: since the launch of SPOT 5 in May 2002, more than 2,360 HRS stereopairs have been collected, achieving only 42% cloud-free coverage, even though every place in French Guiana has been imaged (obviously mainly under clouds) more than 202 times since 2002.
### DEM extraction from Radar data
Meanwhile, radargrammetry needs four images over most places, i.e., one ascending pair plus one descending pair. Only in some cases (over gently hilly areas) could a single radar pair be sufficient to achieve a good DEM; however, as it remains difficult to predict exactly where this will occur, the collection of radargrammetric TerraSAR-X images systematically plans two pairs.
Clouds, however, do not hinder the collection of TerraSAR-X images. This opens the way to the collaborative extraction of DEMs from optical and radar data: the AJAX project.
## 3 AJAX Concept and Requirements
### AJAX concept and goals
AJAX did not aim at re-exploring DEM extraction methodologies from optical and radar -grammetric data, fairly well-known, but rather to experiment the joint use of both optic and radar pairs **to provide a single consistent, accurate and affordable DTED2 DEM** to complement the Elevation30 product line over Equatorial areas.
Two test 1°×1° geocells were chosen to demonstrate the potential of this blending: one over Colombia (N07W074) and one over Congo (N02E018). Over these two geocells, the cloud-free HRS (i.e., optical) coverage was below 50%, which of course made it impossible to extract any reliable DEM (within our production flow, a 98% ratio is considered a minimum).
This paper will focus on the Colombian prototype, as the results over Congo are very similar.
### AJAX accuracy requirements
Being bound to be integrated into the Elevation30 Product range, the AJAX prototype should meet the following accuracy requirements:
* 10m LE90 for slopes lower than 20 %
* 18m LE90 for slopes between 20 and 40 %
* 30m LE90 for slopes greater than 40 %
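As a sketch, LE90 can be estimated as the 90th percentile of the absolute vertical errors at the checkpoints (a simplified reading that assumes no bias removal; the error sample below is synthetic):

```python
import numpy as np

def le90(errors):
    """LE90: the absolute vertical error not exceeded by 90% of the checkpoints."""
    return float(np.percentile(np.abs(errors), 90))

rng = np.random.default_rng(1)
errs = rng.normal(0.0, 3.0, 10000)   # hypothetical unbiased errors, std = 3 m
print(le90(errs))                    # close to 1.645 * 3 m for Gaussian errors
```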
Since 2002, numerous accuracy assessments of the Elevation30 products from HRS (also known as Reference3D) have been performed at international level by independent users: NGA, European Commission/IRC, ImageONE (Japan) and many others. All concluded that the product fully met its specifications. [[PERSON], [PERSON] et al., 2004] [Yoshino et al., 2008] [[PERSON] et al., 2010]
## 4 Production Steps
### Colombian prototype area - Input data
The Western part of the 110 km × 110 km geocell consists of a rather flat plain divided by a large river gently flowing northwards (see Figure 2). A mountain range with very steep slopes occupies the Eastern part of the geocell. Elevations span from 50 m to 3,150 m above sea level.
The following Figures 1 and 2 show the input data that were used to produce a combined DEM.
Figure 1 - SPOT 5 HRS coverage over N07W074 Blue and white colours show cloudy areas
Figure 2 - TerraSAR-X coverage over N07W074 For the purpose of this test, the full geocell was collected by TerraSAR-X, notwithstanding the existing HRS coverage.
### Initial production steps
The first step of the production consists in extracting as much as possible elevation data from optical HRS archive, built over years, and then to determine which areas should be covered by TerraSAR-X data (to be acquired on purpose). This includes the following:
* HRS pairs integration in the continental space-triangulation
* Raw HRS DEM computation
* HRS DEM merging and mosaicking
* Void Mask extraction \(\Rightarrow\) determination of TSX area
* Water Mask delineation
Then the missing parts are collected by TerraSAR-X, and the following tasks take place:
* Ascending and descending pairs acquisition on TSX area
* Raw TSX DEM computing (radargrammetry matching)
* Raw TSX DEM merging and mosaicking
* Water Mask delineation (on areas not already covered by HRS Water mask)
These steps were performed in their \"standard way\" by the staff in charge of DEM production both in Germany for TerraSAR-X, and in Toulouse (France) for SPOT 5 HRS.
### DEM merging
After this began a more specific processing chain, directly linked with AJAX:
* Merging of DEMs from HRS and TSX
* Automatic detection of voids and artefacts
* Patching of remaining voids with SRTM DEM from the Internet (resampled to 1 arc second)
* Edition Phase
* Water flattening on merged DEM, using the Water Masks previously delineated
* DLD (Double-Line Drains) processing: the rivers are made to flow smoothly downstream
* Final DEM quality control, visual detection of remaining artefacts \(\Rightarrow\) digitisation of uncertified areas ; registering into the corresponding mask
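The SRTM patching step above can be sketched as a masked fill, assuming the SRTM patch has already been resampled onto the merged DEM's 1 arc-second grid (the array values are illustrative):

```python
import numpy as np

# Merged optical/radar DEM with NaN voids, and an SRTM patch on the same grid.
merged = np.array([[102.0, np.nan,  98.0],
                   [np.nan, 101.0,  99.0]])
srtm   = np.array([[103.0, 100.0,  97.0],
                   [100.0, 102.0,  98.0]])

void_mask = np.isnan(merged)
patched = np.where(void_mask, srtm, merged)   # fill voids from SRTM only

print(patched.tolist())
print(int(void_mask.sum()), "posts patched")
```

Keeping the void mask also allows the patched posts to be flagged later, for example in the accuracy-commitment layer.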
In the end, the resulting AJAX DEM originates in balanced parts from TerraSAR-X and SPOT 5, as shown in Figure 3 below:
## 5 Quality control and validation
Since the beginning of the production in 2002, Quality Control and Validation steps have represented a significant part of the Elevation30 production flow (15 to 20% on average). The merging of radar and optical DEMs of course introduces the need for a dedicated V&V process.
### Control of the merging
Indeed, we were very pleased to find that the two DEMs, extracted from fully independent sources (TerraSAR-X and SPOT 5) by entirely different and independent, though equally skilled and experienced, teams in Germany and France, proved extremely consistent, as demonstrated by the histogram of elevation differences (Figure 4).
The overall bias between the two DEMs is 0.80m (TSX higher than HRS), and more than 90% of the differences are less or equal to 4m. Thanks to this excellent result, no unbiasing/elevation adjustment was applied during the merging process.
### Absolute validation of the DEM vs ICESAT data
According to _[PERSON] and [PERSON]_ (2005), the Geoscience Laser Altimeter System (GLAS) on the Ice, Cloud, and land Elevation Satellite (ICESat) provides a globally distributed dataset well suited for evaluating the vertical accuracy of digital elevation models (DEMs). These authors quote a vertical error of \(0.04\pm 0.13\) m per degree of incidence angle. Compared to the Elevation30 accuracy specifications, these figures are small enough to make ICESat a perfect data source for assessing the AJAX DEM accuracy.
However precise in elevation, each ICESat measurement refers to a near-circular 70 m footprint on the ground, far larger than the AJAX posting interval (1 arc second, i.e., approx. 30 m in Colombia). Therefore, the first step of the validation process is filtering the ICESat dataset to carefully select elevations that can be used with reasonable confidence as "ground truth". The filtering process is based upon the local slope; land cover is not considered. Please refer to _[PERSON] et al._ (2010) for more details on the filtering and selection process of the ICESat data.
Figure 4: HRS and TerraSAR-X Elevation differences.
Figure 3: Sources of the AJAX DEM.
TerraSAR-X is shown in black, SPOT 5 HRS in white.
After this selection, only 209 ICESat "truth points" were kept, only a few dozen of them lying in the Eastern part of the geocell (Figure 5).
The comparison of the AJAX DEM against the ICESat measurements gave the following results:

* Mean = 0.51 m
* Standard deviation = 3.1 m
* Approx. 5.0 m LE90 accuracy
### \"Vertical Accuracy Commitment\" mask
As for each and every Elevation30 product, the AJAX prototype also includes a Vertical Accuracy mask which provides for each elevation post the best accuracy commitment from the producer (Figure 7).
The methodology to build this layer is detailed in _[PERSON] (2010)_. As shown in the Figure 7 above, our accuracy commitment ranges from 6m LE90 in the flat areas (in green), up to 30m LE90 (in pink) over the extreme slopes. Black dots indicate elevation posts for which no commitment could be taken.
## 6 Conclusion
A merged DEM was produced over a very \"difficult\" area in Colombia, through the merging of two independently produced DEMs from TerraSAR-X and SPOT 5 HRS data.
Validation against ICESAT data, as well as the very low differences between both DEMs, show that the resulting accuracy is in line with our Elevation30 requirements. Therefore, detailed commitments can be taken towards the users regarding the vertical accuracy of the resulting DTED level 2 DEM.
## References
* [PERSON] et al. (2006) [PERSON] et al., 2006, SPOT 5 HRS geometric performances: Using block adjustment as a key issue to improve quality of DEM generation, ISPRS Journal of Photogrammetry & Remote Sensing 60 (2006) 134-136.
* [PERSON] and [PERSON] (2005) [PERSON], and [PERSON] (2005), ICESat validation of SRTM C-band digital elevation models, Geophysical Research Letters, 32, L22S01, doi:10.1029/ 2005 GL023957.
* [PERSON] (2004) [PERSON], [PERSON], 2004, Quality checking of DEM derived from satellite data (SPOT and SRTM), 10 th Annual Conference on Control with Remote Sensing of Area-based Subsidies. Budapest, 24-27 November 2004.
* [PERSON] et al. (2010) [PERSON] et al., 2010, Updating and improving the accuracy of a large 3D database , ISPRS Congress, Kyoto, August 2010, Commission VIII.
* [PERSON] et al. (2008) [PERSON] et al., 2008, Building a consistent geometric frame over Sparse islands using SPOT 5 data, ISPRS Congress, Beijing, August 2008, Commission VII, WG VII/7.
Figure 5: ICESAT “truth points” over the geocell. Three different orbits can easily be identified. Colours show the magnitude of the elevation difference (in meters) between ICESAT measurements and the AJAX DEM.
Figure 6: Histogram (in %) of the elevation differences against ICESAT (in meters).
Figure 7: Vertical accuracy commitment of the AJAX DEM.
---

Cunin, L., Nonin, P., Janoth, J., Bernard, M., 2012. Extracting Precise and Affordable DEMs Despite of the Clouds. AJAX: The Joining of Radar and Optical Strengths. https://doi.org/10.5194/isprsarchives-xxxix-b4-271-2012. Licence: CC-BY.

---
Identification of landslide susceptibility zonation in CNG ghat section, Gudalur, The Nilgiris - Using GIS based ANN/Multi criteria method
[PERSON]\({}^{2}\)
[PERSON]. \({}^{1}\)
\({}^{1}\) School of Civil Engineering, SASTRA University, Thanjavur, Tamil Nadu.
\({}^{2}\) School of Earth & Atmospheric Science, University of Madras, Chennai, Tamil Nadu - [EMAIL_ADDRESS]
###### Abstract
Among the various natural hazards, landslides are the most widespread and damaging. In recent times, much attention has been drawn worldwide to evaluating the risk due to landslides. The advent of remote sensing and GIS has opened new vistas in the field of geo-scientific studies, viz. geomorphological mapping, groundwater potential mapping, disaster management, etc. The present study was undertaken to prepare different thematic maps, such as contour, drainage, slope, aspect, curvature, DEM, DTM, drainage density, drainage intensity, geology, lineament, lineament density, lineament intensity, geomorphology, land use, weathering thickness, run-off, and soil thickness, together with buffer maps (road, drainage, lineament, etc.) in the CNG ghat section, Gudalur, The Nilgiris. For this purpose, the IRS-RS2 LISS III satellite image of January 2014 was used to prepare the thematic maps. The contour, drainage and road network were incorporated from SOI toposheets. The slope, curvature, aspect and buffer maps were prepared in the GIS environment. Based on field studies, the above-said thematic maps (22 nos.) were prepared and grouped into 3 categories, viz. geology, hydrology and terrain. In each category, the input maps were assigned different scores, and each layer was given a different weightage. Finally, the categories were analysed through multi-criteria analysis to derive 5 vulnerability classes. The 5 landslide susceptibility zones are classified as very low, low, moderate, high and very high, covering 3%, 20%, 51%, 25% and 1% of the area, respectively. The locations of major landslides and slips were recorded in the field in different years (2010 and 2014) using a Trimble GPS. The field data were converted into a point layer in GIS and a landslide inventory map was prepared. This map was superimposed on the landslide susceptibility zonation map.

As per the field data, 0%, 9.25%, 57.5%, 32% and 1.25% of the slide points fall under the very low, low, moderate, high and very high susceptibility zones, respectively.
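The scoring-and-weighting scheme described above can be sketched as a weighted raster overlay; the three layers, their 1-9 scores and the weights below are illustrative assumptions, not the study's actual values:

```python
import numpy as np

# Hypothetical per-pixel scores (1-9) for three rated layers.
slope     = np.array([[9, 3], [7, 1]])
drainage  = np.array([[5, 2], [8, 2]])
lineament = np.array([[7, 1], [6, 3]])
weights = {"slope": 0.5, "drainage": 0.3, "lineament": 0.2}  # assumed weights

composite = (weights["slope"] * slope
             + weights["drainage"] * drainage
             + weights["lineament"] * lineament)

# Slice the composite index into 5 equal-interval susceptibility classes (1-5).
bins = np.linspace(composite.min(), composite.max(), 6)
classes = np.clip(np.digitize(composite, bins[1:-1]) + 1, 1, 5)
print(classes.tolist())   # class 5 = very high susceptibility
```

In practice each of the 22 rated layers would enter the weighted sum, and the classified raster would be compared against the GPS-based landslide inventory.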
## 1 Introduction
### Introduction
Landslides and other mass movements are common phenomena in hilly regions. A general introduction to landslides, the various types of landslides, and the causes of landslides are discussed in depth by [PERSON] (1978). Landslides in the Nadugani area occur during heavy rainfall (Devala receives the highest rainfall in South India and is also called the Cherrapunji of South India), resulting in the loss of natural vegetation, loss of tea plantations, and damage to the road network. Several factors affect the stability of the slopes; in this paper, they are grouped into geological factors such as geology, weathering thickness, lineament, lineament density, lineament intensity and lineament buffer; hydrological factors such as drainage, drainage buffer, drainage density, drainage intensity and run-off; and terrain factors such as contour, slope, slope aspect, geomorphology, structure, soil and land use. There are numerous approaches for hazard zonation mapping. The landslide types of the study area were mapped using large-scale spatial data, palaeo scars and field survey, then classified, and the landslide distribution was derived in a GIS environment. The use of GIS for modelling landslide hazard with many different parameter maps has been attempted by several researchers ([PERSON], 2014; [PERSON] et al., 1998; [PERSON] et al., 2006; [PERSON] et al., 2011; [PERSON] et al., 2012; [PERSON] et al., 2015; [PERSON] and [PERSON], 2016; [PERSON] and [PERSON], 2016).
The present study attempts to identify the spatial distribution of potential slope instability over a representative hill slope by combining the attributes of geological, hydrological and terrain factors.
### Study area
The study area is the CNG-37 ghat road section connecting Gudalur and Calicut via Nilambur. It is situated in Nadugani, Gudalur Taluk, The Nilgiris. The study area is enclosed between latitudes 11\({}^{\circ}\)15'N and 11\({}^{\circ}\)30'N and longitudes 76\({}^{\circ}\)15'E and 76\({}^{\circ}\)30'E, covering an area of 4.79 km\({}^{2}\) (Figure 1). The elevation varies between 280 m and 1042 m above mean sea level. The area experiences the highest rainfall in India next to Cherrapunji; the measured rainfall varies from 1368 mm to 2550 mm per year. Most of the section is underlain by hornblende-biotite gneiss, with a small portion occupied by charnockites. The rocks are fissile in nature. The terrain slope varies from 45\({}^{\circ}\) to 60\({}^{\circ}\). Most of the area is covered by tea plantations, and dense forest is noticed on the south-eastern side.
### Data products
The following spatial data has been used for the study.
* Topo sheets No. 58A/7 (1:50,000) and 58A/7-NE (1:25,000)
* IRS-RS2 LISS-IV geocoded satellite data, January 2014
* Rainfall data from Taluk office.
## 2 Methodology
The different thematic maps were prepared by various methods, as given below:
* The themes like Drainage map, Contour map, and road were derived from SOI Toposheets.
* Soil and soil thickness maps were prepared from field data.
* The Land use, geomorphology and lineament maps are interpreted visually from satellite imagery.
* Slope, slope aspect, drainage density, drainage buffer, and drainage intensity were prepared from relief and drainage maps respectively.
* Lineament density, buffer and intensity maps were prepared from lineament map.
* Using the empirical run-off formula for Tamil Nadu hills, \(Q = CM^{2/3}\) ([PERSON], 1998), the run-off map was prepared.
* The above prepared maps were geo-referenced and digitized using ArcGIS software.
* Buffer zones were created for the drainage map using ArcGIS software.
* The themes have been assigned with proper weightages and scores.
* Multi-criteria analysis of the weighted themes was carried out with overlay analysis in ArcGIS software to predict landslide vulnerable zones.
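The weighted-overlay step described in the list above can be sketched as follows. The factor rasters, per-cell scores and weights below are illustrative assumptions, not the study's actual values; a GIS would normally run this on full rasters.

```python
import numpy as np

# Hypothetical example: three factor rasters (hydrology, geology, terrain),
# each already scored per cell on a 1..5 scale. Weights are illustrative.
hydrology = np.array([[3, 5], [2, 4]], dtype=float)
geology = np.array([[4, 4], [1, 5]], dtype=float)
terrain = np.array([[5, 3], [2, 2]], dtype=float)

weights = {"hydrology": 0.35, "geology": 0.40, "terrain": 0.25}  # sum to 1

# Weighted overlay: per-cell weighted sum of the factor scores.
susceptibility = (weights["hydrology"] * hydrology
                  + weights["geology"] * geology
                  + weights["terrain"] * terrain)

# Classify the continuous index into five zones (very low ... very high)
# using equal-interval breaks over the possible score range 1..5.
breaks = np.linspace(1, 5, 6)
zones = np.digitize(susceptibility, breaks[1:-1]) + 1  # values 1..5

print(susceptibility)
print(zones)
```

The same pattern extends to the 22 layers of the study: each raster is scored, multiplied by its layer weight, summed, and the result is sliced into the five susceptibility classes.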
The methodologies adopted for the different factors in the present study are explained with the help of flow charts. Multi-criteria analysis of the weighted themes was carried out with overlay analysis in ArcGIS software to predict landslide vulnerable zones for the hydrological factors (Figure 2). For the geological factor analysis, eight thematic layers (Figure 3) were prepared; the map classes occurring on each input map were assigned different scores, and the maps themselves received different weights as before. It is convenient to define the scores in an attribute table for each input map. In the third step, important terrain causative factors such as land use, lithology, geomorphology and structure were mapped from the high-resolution satellite data IRS-RS2 L-IV (February 2014). In this study, the Analytical Hierarchy Process (AHP) method was used to derive the susceptibility zonation. AHP is a semi-quantitative method that uses pair-wise comparison of the various landslide-triggering factors to determine prioritized factor weights. The factor weights for all thematic maps were estimated by developing a pair-wise comparison matrix as developed by [PERSON] (1990, 1994) and [PERSON] (2001). The detailed methodology of landslide susceptibility zonation is shown in Figure 4. An artificial neural network is a "computational mechanism able to acquire, represent, and compute a mapping from one multivariate space of information to another, given a set of data representing that mapping" ([PERSON], 1994). The back-propagation training algorithm is the most frequently used neural network method ([PERSON] et al., 2004; [PERSON] et al., 2006; [PERSON] et al., 2007; [PERSON] et al., 2008; [PERSON] et al., 2008) and is the method used in this study.
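The AHP weighting step can be illustrated with a small sketch. The 3x3 pairwise comparison matrix below is a hypothetical example on Saaty's 1-9 scale (the paper's actual factor comparisons are not given); weights come from the principal eigenvector, followed by Saaty's consistency check.

```python
import numpy as np

# Hypothetical pairwise comparison of three factor groups (assumed values).
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 3.0],
    [1/5, 1/3, 1.0],
])

# Principal eigenvector -> normalized priority (weight) vector.
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
w = np.abs(eigvecs[:, k].real)
w = w / w.sum()

# Saaty's consistency check: CI = (lambda_max - n) / (n - 1), CR = CI / RI.
n = A.shape[0]
lambda_max = eigvals.real[k]
CI = (lambda_max - n) / (n - 1)
RI = 0.58  # Saaty's random index for n = 3
CR = CI / RI

print(w, CR)  # CR < 0.1 means the judgements are acceptably consistent
```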
## 3 Results and Discussion
The landslide vulnerability map derived from the hydrological factors is shown in Figure 5. The identified vulnerable zones are proportional to the hydrological factors. The map classifies the vulnerable zones into very high, high, moderate, low and very low based on the weightages and scores assigned to the different themes. Areas such as Nadugani beta, Marala malai, Nadugani and Kil hadagani come under the high vulnerable zone; these areas have very steep slopes and low drainage density. The output map shows that the very high zone is very limited, occupying about 0.96% of the area, while the high hazard zone occupies 29.44%. As per the hydrological factors, the moderate vulnerable zone occupies 55.80%, and the low and very low zones occupy about 13.30% and 0.50% respectively.
The geological factor maps were brought into the GIS environment and converted into quadtree raster maps. Using multi-criteria analysis, a weightage was assigned to each thematic layer depending on the severity of the theme in relation to landslide susceptibility. Scores were assigned to each class present in the thematic layers: the higher the score of a class, the more susceptible it is to landslides compared to the others. Based on the multi-criteria analysis, the study area was classified into five landslide susceptibility zones (Figure 6). In the study area, the very high and high classes are characterized by tea plantation areas, indicating the influence of man-made activities. In the southern portion, the high and very high hazard classes are characterized by thick forest with steep slopes (more than 50\({}^{\circ}\)). The general slope of the area is in the NW direction, but the micro slopes face SW, showing a greater potential landslide susceptibility.

Figure 1: Study area map
Natural terrain factors play a vital role in the occurrence of landslides in the CNG ghat section, which covers an 18.5 km stretch in the Nadugani Beta range. This ridge is controlled by two major fault systems on the eastern (Punna puzha) and western (Karakkodu puzha) sides. The major lineaments are parallel to the Eastern Ghats orogeny ([PERSON] and [PERSON], 2001). On the western slope of Nadugani beta, some places are steep to very steep compared to the eastern slope. The rocks are steep and fissile in character; on rainy days, water enters the fissures and the debris becomes unstable, generating slope instability in this ghat section. The study indicates that the ghat section comprises 22.6% very high zone, 52.52% high zone, 2.55% moderate zone and 22.32% low zone (Figure 7). The map can also be used as basic data for further developmental activities in this area, such as road widening.
Based on the hydrological, geological and terrain factor maps, different scores were assigned and each layer was given a different weightage. Finally, five landslide susceptibility zones were classified: very low, low, moderate, high and very high (Figure 7). The percentages of area under the different susceptibility classes are 3%, 20%, 51%, 25% and 1% respectively. The landslide inventory map was superimposed on this map. As per the field data, 0%, 9.25%, 57.5%, 32% and 1.25% of the slide points fall under the very low, low, moderate, high and very high susceptibility zones respectively.
Figure 7: LSZ map using hydrological, geological, and terrain factors
## References
* [[PERSON] and [PERSON]] [PERSON] and [PERSON] [PERSON] 2001 Structure and its impact on the drainage in part of Ponnajyar river basin, Tamil Nadu using Remote Sensing Techniques. Journal of Indian Society of Remote Sensing, 29(4), pp.187-195.
* Konkan Railway, Ratnagiri Region, Maharashtra, International Symposium on "Geospatial Databases for Sustainable Development", Goa, India, September 27-30, IAPRS-SIS, Vol. 36, Part 4, pp. 582.
* [[PERSON]] [PERSON], 1994, Where and why artificial neural networks are applicable in civil engineering. J Comput Civil Eng 8, pp129-130.
* Case Study from Bodi-Bodimettu Ghats Section, Theni District, Tamil Nadu, India, Journal of Indian Society of Remote Sensing, DOI 10.1007/s12524-011-0112-4.
* [[PERSON] et al.2004] [PERSON], [PERSON], [PERSON], and [PERSON] 2004, Determination and application of the weights for landslide susceptibility mapping using an artificial neural network. Eng Geol 71(3/4), pp 289-302.
* Gudalur Ghat section, Gudalur, The Nilgiris, Tamil Nadu, International Journal of ChemTech Research, 9(3), pp. 248.
* [[PERSON] et al.1998] [PERSON], [PERSON], [PERSON] and [PERSON], 1998, Temporal remote sensing data and GIS application in landslide hazard zonation of part of Western Ghat, India. Int. Jour. Remote Sens., 19(4), pp.573-585.
* [[PERSON] et al.2008] [PERSON], [PERSON], and [PERSON], 2008, An assessment on the use of logistic regression and artificial neural networks with different sampling strategies for the preparation of landslide susceptibility maps. Eng Geol 97(3/4), pp 171-191.
* [[PERSON]] [PERSON], 2014, Landslide susceptibility Zonation mapping using Logistic Regression and its validation in Hashtchin Region, Northwest of IRAN, Journal Geological Society of India, 84(1), pp. 68-86.
* [[PERSON] and [PERSON]] [PERSON] and [PERSON], 2001, Models, methods, concepts and applications of the analytic hierarchy process (1st ed.). Boston: Kluwer, pp. 333.
* [PERSON] and [PERSON], 2016, Landslide Susceptibility Zonation mapping using Multi-criteria Analysis - CNG 37 Ghat section, Nadugani, Gudalur, The Nilgiris - Using Geological Factors, International Journal of Earth Science and Engineering, 9(4), **in press**.
* [PERSON] et al. (2006) [PERSON], [PERSON], [PERSON] [PERSON], and [PERSON], 2006, Estimation of rock modulus: for intact rocks with an artificial neural network and for rock masses with a new empirical equation. Int J Rock Mch Min Sci 43(2), pp 224-235.
isprs | IDENTIFICATION OF LANDSLIDE SUSCEPTIBILITY ZONATION IN CNG GHAT SECTION, GUDALUR, THE NILGIRIS – USING GIS BASED ANN/MULTI CRITERIA METHOD | S. Prasanna Venkatesh, S. E. Saranaathan | https://doi.org/10.5194/isprs-archives-xlii-5-871-2018 | 2018 | CC-BY | isprs/225089d1_62d6_4b2a_81ff_d963ac136349.md
Sugarcane productivity estimation through processing hyperspectral signatures using artificial neural networks
[PERSON]\({}^{1}\)
[PERSON]\({}^{1}\)
[PERSON]\({}^{1*}\)
\({}^{1}\) Remote Sensing Research Group, Universidad del Valle, Santiago de Cali, Colombia (espinosa.camilo, sergio.velasquez, francisco.hernandez)@correounivalle.edu.co
###### Abstract
This project uses an artificial neural network to calculate the net primary productivity of an organic sugarcane crop on [PERSON]'s farm, in Cerrito, Valle del Cauca. The pilot scheme used in this project is composed of 6 nitrogen fertilization treatments based on green manures (poultry manure and cowpea). During the crop's last two phenological phases, the artificial neural network was fed with hyperspectral data collected in the field. In addition, an exploratory data study was implemented in order to identify anomalous signs related to light saturation and curve geometry. The first network applied was an autoencoder, used to reduce the dimensionality of the radiometric resolution of the data. The second network applied was a multilayer perceptron (MLP), used to calculate the productivity values of the patches. After comparison with the actual productivity values provided by Cenicaña, this project obtained an accuracy of 91.23% in the productivity predictions.
Neural Networks, Net Primary Productivity, Deep Learning, Backpropagation, Hyperspectral Signatures, Sugarcane.
## 1 Introduction
Sugarcane is one of the perennial crops with the highest organic matter rate per unit area; as a result, it is one of the most productive crops in the agricultural market around the world ([PERSON] et al., 2013). Colombia is placed 13th among the sugarcane-producing countries and holds the first position in productivity. The Colombian sugar sector is located in the Cauca river valley, which covers 47 municipalities from the north of Cauca and central Valle del Cauca to the south of Risaralda. In this region there are 225,560 hectares planted with sugarcane; 25% of them belong to sugar mills and the remaining 75% correspond to around 2,750 cane growers (Asocaña, 2018).
Biomass estimation methods can be categorized as either destructive or non-destructive sampling techniques. Remote sensing is a non-destructive sampling defined as a set of techniques used to read an object's spectral information based on the way it interacts with energy, which is recorded by sensors ([PERSON], 2015).
Remote sensing has facilitated crop supervision by providing permanent condition data over large areas ([PERSON], et al., 2010). However, the multispectral data obtained by satellite or drone imagery can't be as detailed as hyperspectral data ([PERSON], et al., 2009). Spectrometry techniques allow hyperspectral data to be obtained along the electromagnetic spectrum, showing significant spectral patterns in different regions, which are related to plant phenology and are used to facilitate both management and productivity estimation ([PERSON], [PERSON] & [PERSON], 2001).
Artificial neural networks (ANN) are an automatic learning model based on biological neural networks and connections. These systems consist of a set of elements or neurons that connect to each other to send information to each of the nodes where the error spreads depending on the weight of the connections. There are different designs of neural networks, which can be used to perform many activities. The multilayer perceptron (MLP) model is frequently used in Deep Learning, due to its potential to classify, to predict, and to how easy it is to operate ([PERSON] & [PERSON], 2009).
MLP is based on a backpropagation algorithm, which distributes the error of the output layer, in the hidden layers. This network architecture is made up of an input layer, at least a hidden and an output layer. The connections between these types of neural networks are usually either FeedForward or sequential type, which means that all the neurons in the input layer are connected to every neuron in the hidden layer. On the other hand, autoencoders are a type of neural network that uses the same sequential model used by the MLP, however, its design varies by conserving a smaller number of neurons in the hidden layers than in the output layers. Autoencoders are based on the dimensionality reduction obtained from principal component analysis (PCA).
PCA takes the most important features in the input data and reduces them through linear transformations. Although autoencoders and PCA are similar, autoencoders employ a nonlinear component analysis, due to the high variety of the data. The autoencoder is composed of two regions: the encoding region, where the data is transformed, reducing its dimensionality and transporting the essential information from the data group to the hidden layers ([PERSON], 2011); and the decoding layer, where the data is reconstructed following the input layer scheme.
This study applies the net primary productivity (PPN) model proposed by [PERSON] (1981), which is based on the plant's response to photosynthetically active radiation (PAR) and on the most determining factors of the crop, such as environmental variables, plant phenology and physiology, and behaviour over time. The PPN values, the set of hyperspectral data and the information on the physiological variables of the crop were used to feed a neural network that allowed the productivity to be estimated.
## 2 Study Area and Materials
### Study Area
The study area is identified as the 758B crop, located at [PERSON]'s farm, in Cerrito, Valle del Cauca, Colombia, at the geographical coordinates 3\({}^{\circ}\)38'24"N, 76\({}^{\circ}\)19'48"W. The average temperature of the region is approximately 25\({}^{\circ}\)C. The 758B crop corresponds to an experimental sugarcane crop that contained 30 organic patches, as presented in Figure 1.
### Spectroradiometer EPP2000
The reflectance measured on each of the sampled plants was captured with portable equipment that performs spectral measurements between 200 nm and 1100 nm, a range covering the ultraviolet, visible and near-infrared spectrum. Plant reflectance is measured with a spectral resolution of 0.5 nm. The equipment was coupled with the F400-VISNIR optical fiber from StellarNet Inc.; this probe has a 400 μm aperture and a field of view (FOV) of 25.4\({}^{\circ}\). The spectrometer used is shown in Figure 2 (StellarNet Inc, 2014).
## 3 Methodology
### Sampling and processing of hyperspectral data
The spectral signatures used in this project were obtained from the reflectance measured with the EPP2000 spectrometer (StellarNet) between 200 and 1100 nm. The optical fiber coupled to the equipment has a field of view (FOV) of 25.4\({}^{\circ}\), which captured an area of 2 cm radius at a distance of approximately 8 cm. The data were collected on July 10th, 2018, in the 10th month of the sugarcane phenological cycle. The campaign was held between 10 a.m. and 3 p.m. in order to maintain an angle of 0-30\({}^{\circ}\) between the sun and the zenith.
To avoid the edge effect, the samples were taken 20 m after the border edge of the two central grooves (Figure 3). At this point, 10 plants that didn't show spots, diseases or lesions at the foliar level were chosen out of the rest. Additionally, those plants were properly developed taking into account their growth phase. From each plant a spectral signature was obtained by pointing the optical fiber on the TVD leaf.
The spectral signature corresponds to an average of 5 measurements of the same leaf. Before starting the evaluation, the minimum and maximum reflectance were calibrated with a Spectralon reference panel to reduce sample variation ([PERSON], [PERSON], 2016).
A standardization process was carried out in order to introduce the spectral signatures into the neural network. The data were scaled according to the upper and lower limits of the data range, with values varying from 0 to 1.
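The scaling described above can be sketched as a simple min-max normalization; the toy reflectance values below are assumptions for illustration, and the scaling limits are taken as each signature's own minimum and maximum.

```python
import numpy as np

def minmax_scale(x):
    # Min-max standardization to the [0, 1] range.
    x = np.asarray(x, dtype=float)
    lo, hi = x.min(), x.max()
    return (x - lo) / (hi - lo)

signature = np.array([0.05, 0.12, 0.48, 0.60, 0.31])  # toy reflectances
scaled = minmax_scale(signature)
print(scaled)  # minimum maps to 0.0, maximum to 1.0
```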
### Artificial neural networks
#### Autoencoder
The autoencoder is a type of neural network that uses the same sequential model as the MLP; however, its design differs in the low number of neurons in the hidden layers compared to the output layers. The autoencoder is based on the dimensionality reduction obtained from principal component analysis (PCA), which takes the greatest-weight characteristics in the input data and reduces them through linear transformations.

Figure 1: Location of experimental crop (758B)

Figure 3: Selection of central grooves in each plot

Figure 2: Spectroradiometer EPP2000.
Although the autoencoder and PCA are similar, the autoencoder uses a nonlinear component analysis due to the high variation of the data. As shown in Figure 4, the autoencoder is composed of two regions: the encoding layer and the decoding layer. The encoding layer is where the data is transformed, reducing its dimensionality and transporting the important information of the data group to the hidden layers. The decoding layer, on the other hand, is where the network rebuilds the data following the scheme of the input layer.
During the process, the 300 sampled spectral signatures (each with 801 wavelengths) were organized in a 300x801 matrix. 80% of the data set was used for network training and the other 20% for prediction and validation of the productivity estimation model. The resulting 240x801 and 60x801 matrices entered an autoencoder with an input layer of 801 neurons, followed by a hidden (encoded) layer with 40 neurons. The encoded layer of 40 neurons within the autoencoder architecture yields two matrices: 240x40 and 60x40. Finally, the network has an output (decoded) layer with 801 neurons. The hidden layer has an exponential linear unit (ELU) activation function (Eq. 1), which is a variation of the rectified linear unit (ReLU) function.
\[f(x)=\begin{cases}\alpha(e^{x}-1)&\text{for }x<0\\ x&\text{for }x\geq 0\end{cases} \tag{1}\]
where e is the exponential constant, x is the input value, and \(\alpha\) is the scale parameter for negative inputs.
The main difference between the two activation functions lies in how they treat input values that are close to 0 or negative. With ReLU, the gradient for such values becomes 0 and the network can't propagate the error backwards, meaning the artificial neural network can't be trained on that part of the input data. The ELU function, on the other hand, provides a slope in the negative quadrant, so neurons are activated by these values, giving a more accurate result for problems with this type of data.
The sigmoidal activation function (Eq. 2) was implemented in the output (decoding) layer; it allows a constant learning rate, avoiding slow rates at which the network can remain stuck in a local minimum, as well as high rates that generate instability in the error function, with jumps in the weights close to the minimum that prevent it from being reached.
\[f(x)=\frac{1}{1+e^{-x}} \tag{2}\]
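A minimal sketch of the activation functions of Eq. 1 and Eq. 2 (with ReLU included for comparison), assuming the standard definitions:

```python
import numpy as np

def relu(x):
    # Negatives are clipped to 0; gradient is 0 there.
    return np.maximum(0.0, x)

def elu(x, alpha=1.0):
    # Eq. 1: alpha*(e^x - 1) for x < 0, x otherwise.
    return np.where(x >= 0, x, alpha * (np.exp(x) - 1.0))

def sigmoid(x):
    # Eq. 2: the standard logistic sigmoid, 1 / (1 + e^-x).
    return 1.0 / (1.0 + np.exp(-x))

x = np.array([-2.0, -0.5, 0.0, 1.5])
print(relu(x))       # negatives become 0
print(elu(x))        # negatives get a smooth slope instead of a hard 0
print(sigmoid(0.0))  # 0.5
```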
### Multilayer Perceptron
The MLP is a widely used neural network. It is based on a backpropagation algorithm, which distributes the error of the output layer over the hidden layers. This architecture is made up of an input layer, at least one hidden layer and one output layer. Connections between these types of neural networks are generally feedforward or sequential, meaning that the input layer neurons are connected to the neurons in the hidden layer, as shown in Figure 5. The mathematical concept of this class of neural network is presented in Equation 2 ([PERSON], [PERSON], 2014).
The autoencoder produced two matrices, 240x40 and 60x40, which store the compression of the 801 wavelengths. The productivity vectors obtained in the 2016, 2017 and 2018 harvests are appended to the new matrices before they are fed into the MLP, yielding matrices of 240x41, 240x42, 240x43 and 60x41, 60x42, 60x43 respectively. The architecture of the neural network is composed of an input layer containing 41 neurons, 2 hidden layers with 41 neurons each and, finally, an output layer with a single neuron, which returns the productivity estimation as a 60x1 vector. The first hidden layer uses a hyperbolic tangent activation function, the second an ELU function, and the output layer a sigmoidal activation function.
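The layer sizes and activations described above can be sketched as a forward pass. The weights below are random placeholders (a trained MLP would learn them by backpropagation), so only the shapes and activation choices reflect the text.

```python
import numpy as np

rng = np.random.default_rng(1)

def elu(z, alpha=1.0):
    return np.where(z >= 0, z, alpha * (np.exp(z) - 1.0))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Layer sizes from the text: 41 -> 41 (tanh) -> 41 (ELU) -> 1 (sigmoid).
sizes = [41, 41, 41, 1]
acts = [np.tanh, elu, sigmoid]
Ws = [rng.standard_normal((a, b)) * 0.05 for a, b in zip(sizes[:-1], sizes[1:])]
bs = [np.zeros(b) for b in sizes[1:]]

def forward(X):
    h = X
    for W, b, f in zip(Ws, bs, acts):
        h = f(h @ W + b)
    return h

X = rng.random((60, 41))  # 60 prediction samples, 41 features each
y_hat = forward(X)
print(y_hat.shape)        # (60, 1); values lie in (0, 1) due to the sigmoid
```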
## 4 Results
The spectral signature shown in Figure 6 is the average of the hyperspectral signatures obtained during the data acquisition campaign. This hyperspectral signature represents the behaviour of the plant's spectral response in different regions of the electromagnetic spectrum. During photosynthesis, the chlorophyll pigments absorb blue and red light, showing absorption peaks at 490 and 660 nm respectively ([PERSON] and [PERSON], 2009). This behaviour appears graphically at the minima of the curve, where the reflectance has its lowest percentages. The green region is made up of bands where one of the maximum reflectance sites is generated, representing the foliar surfaces of vegetation at 550 nm. This maximum is caused by the low absorption of radiant energy, which produces the green pigmentation of plants.

Figure 4: Autoencoder Architecture

Figure 5: Multilayer perceptron architecture
As seen in Figure 6, the highest reflectance values of the hyperspectral signature are in the near-infrared region (NIR), since the health of the vegetation is reflected in the reflectance at these wavelengths. This region is commonly used to classify vegetation and identify stress in crops. Finally, the red-edge band, which lies between the red and infrared spectrum, shows a steep slope where absorption levels decrease towards larger wavelengths.
To estimate the productivity of sugarcane, two neural networks were implemented. First, the data were fed to the autoencoder through the 240x801 and 60x801 matrices, where 95% compression of the data was obtained using the synaptic weights calculated in the encoding phase, reducing the matrices to 40 columns. The learning process of the neural network is presented in Figure 7. The overall fit of the learning model had a mean square error (MSE) value below 0.02 within 50 iterations. This value decreases as the iterations increase, approaching 0 by 400 iterations.
The productivity vectors of the previous crops were added to the new matrices to be included in the MLP. The new training matrix is 240x43, where columns 41, 42 and 43 contain the productivity delivered by Cenicaña for the 2016, 2017 and 2018 harvests respectively. The remaining 20% of the data forms a 60x42 matrix containing the 2016 and 2017 harvest productivities in columns 41 and 42; this matrix is used to make the 2018 productivity prediction. The other 60 productivity values, stored in a vector, are used to validate the model against the predicted productivity.
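The 80/20 split described above can be sketched as follows; the matrix contents are random placeholders standing in for the real signatures, and the shuffled split is one reasonable way to produce the 240/60 partition (the paper does not state how rows were selected).

```python
import numpy as np

rng = np.random.default_rng(42)

# 300 signatures x 801 wavelengths (random placeholders for the real data).
X = rng.random((300, 801))

# 80/20 split as described: 240 rows for training, 60 for prediction/validation.
idx = rng.permutation(300)
train_idx, test_idx = idx[:240], idx[240:]
X_train, X_test = X[train_idx], X[test_idx]

print(X_train.shape, X_test.shape)  # (240, 801) (60, 801)
```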
In this new neural network, the data were used to estimate the biomass of the crop. The training of the MLP is shown in Figure 8. The learning process is slower than in the autoencoder: MSE values below 0.02 were reached only after 300 iterations, and the graph also shows noise due to the dimensionality and the amount of training data.
The low MSE values in the learning models of the neural network (presented in the graphs) reflect the backpropagation algorithm on which the architectures are based. In the supervised learning process, this algorithm modifies the synaptic weights in its layers to obtain the result closest to the provided output data.
The prediction of the productivity model is developed by feeding the neural network the 60x42 matrix, which holds the compressed reflectance and the productivity of the 2016 and 2017 harvests. The neural network returns a vector with 60 estimated productivity values, which are compared with the real productivity values of each of the plots related to the compressed spectral signatures. The evaluation of the final estimation model gives a mean absolute error (MAE) of 8.06 t/ha and an RMSE of 10.43 t/ha, for an accuracy of 91.08%. Using only the 2018 harvest productivity, the estimate had an accuracy of 85.37%, which rose to 88.47% when the 2017 productivity was also considered in training.
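The reported error metrics can be computed as below. The four productivity values are toy numbers, not the study's data, and the accuracy definition (100% minus the mean absolute percentage error) is an assumption consistent with, but not confirmed by, the paper.

```python
import numpy as np

# Toy productivities in t/ha; the study's actual 60 values are not reproduced.
y_true = np.array([110.0, 95.0, 120.0, 88.0])
y_pred = np.array([102.0, 99.0, 113.0, 91.0])

mae = np.mean(np.abs(y_true - y_pred))
rmse = np.sqrt(np.mean((y_true - y_pred) ** 2))
# Assumed accuracy definition: 100% minus the mean absolute percentage error.
accuracy = 100.0 * (1.0 - np.mean(np.abs(y_true - y_pred) / y_true))

print(mae, rmse, accuracy)
```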
Figure 8: MLP precision model
Figure 6: Average spectral signature of crop 758B
Figure 7: Autoencoder precision model
Figure 8 shows the linear regression obtained between the productivity estimated with the MLP and the actual productivity data provided by Cenicaña. The regression obtained a coefficient of determination R\({}^{2}\) of 0.7388, which indicates a low dispersion of the data, with the error distributed evenly across the plots. The correlation of the linear regression is directly proportional to the precision values obtained in the estimate. The productivity estimation is calculated globally over the crop, without discriminating between the different nitrogen fertilization treatments, so the linear regression doesn't allow a better adjustment.
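The coefficient of determination reported above can be computed as follows; the toy values are assumptions for illustration, not the study's data.

```python
import numpy as np

def r_squared(y_true, y_pred):
    # Coefficient of determination: 1 - SS_res / SS_tot.
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)
    return 1.0 - ss_res / ss_tot

y_true = np.array([110.0, 95.0, 120.0, 88.0])  # toy data
y_pred = np.array([102.0, 99.0, 113.0, 91.0])
r2 = r_squared(y_true, y_pred)
print(r2)
```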
## 5 Conclusions
In this project, two types of artificial neural networks were evaluated with the purpose of estimating sugarcane productivity. Initially, the autoencoder achieved a correct 95% compression of the reflectance data, reducing the dimensionality of the input matrix to only 40 columns that keep the spectral crop information. A compression greater than 95% generated a higher loss of collected data; in addition, it increased noise in the autoencoder learning process, returning matrix columns without values.
The normalization of the input data is a process that standardizes the scale of variables that enter into the input layer of the neural network and reduces the computation workload. Thus, the training process is more efficient and the results are more accurate. This procedure avoided biases that could occur due to outliers corresponding to the scalar variation of the inputs.
The autoencoder was a solution to the dimensionality problem caused in an MLP when entering a matrix with a greater number of variables than collected samples. The 300x40 matrix and the productivity vectors used to train the MLP enabled a productivity estimation with an accuracy between 85 and 92%. The inclusion of productivity data from previous crops improved the estimations: a prediction accuracy of 85.37% was obtained using only the 2018 harvest, which increased to 88.47% when the productivity of 2017 was included. Finally, when the three productivity vectors of the 2016, 2017 and 2018 harvests were included, the prediction accuracy increased to 91.23%, due to the consideration of productivity variation in each year. This variation indicates that productivity is a phenomenon that doesn't follow a sequential pattern over time, even under experimental conditions; therefore, using a greater amount of information from previous periods brings more precise behaviour modelling and more accurate results.
## References
* [PERSON] et al. (2009) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], & [PERSON] (2009, July). Hand-held spectrometry for estimating thrips (Fulmekiola serata) incidence in sugarcane. In 2009 IEEE International Geoscience and Remote Sensing Symposium (Vol. 4, pp. IV-268). IEEE.
* [PERSON] & [PERSON] (2009) [PERSON], & [PERSON], 2009. Caracterización de firma espectral a partir de sensores remotos para el manejo de sanidad vegetal en el cultivo de palma de aceite. Revista Palmas, 30(3), 63-79.
* Asocana (2018) Asocana, 2018. Aspectos Generales del Sector Agronidustrial de la Caha 2017-2018.
* [PERSON] et al. (2010) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], & [PERSON] (2010). Spatio-temporal variability of sugarcane fields and recommendations for yield forecast using NDVI. International Journal of Remote Sensing, 31(20), 5391-5407.
* [PERSON] & Lopez (2009) [PERSON], [PERSON] [PERSON], 2009. A practical approach to artificial neural networks, Editorial Universidad del Valle, Santiago de Cali.
* [PERSON] et al. (2001) [PERSON], [PERSON], & [PERSON] (2001). Estimating the foliar biochemical concentration of leaves with reflectance spectrometry: testing the Kokaly and Clark methodologies. Remote Sensing of Environment, 76(3), 349-359.
* [PERSON], & [PERSON], 2013, Enhanced processing of 1-km spatial resolution fAPAR time series for sugarcane yield forecasting and monitoring. Remote Sensing, 5(3), 1091-1116.
* [PERSON] (2015) [PERSON], 2015, Comparación de métodos de clasificación de imágenes de satélite en la cuenca del Río Argos (Región de Murcia), 327-348.
* [PERSON] (2011). Sparse autoencoder. CS294A Lecture Notes, 72(2011), 1-19.
* [[PERSON] et al.(2016)] [PERSON], [PERSON], & [PERSON] (2016). Hyperspectral sensing to detect the impact of herbicide drift on cotton growth and yield. ISPRS Journal of Photogrammetry and Remote Sensing, 120, 65-76.
* [StellarNet(2014)] StellarNet, 2014, Miniature Spectrometer Manual.
isprs | SUGARCANE PRODUCTIVITY ESTIMATION THROUGH PROCESSING HYPERSPECTRAL SIGNATURES USING ARTIFICIAL NEURAL NETWORKS | C. E. Espinosa, S. Velásquez, F. L. Hernández | https://doi.org/10.5194/isprs-archives-xlii-3-w12-2020-177-2020 | 2020 | CC-BY | isprs/6ebf3a56_5417_48c8_b5d6_bedb15104fdf.md
# Thunderstorm weather analysis based on XGBoost algorithm
[PERSON], [PERSON]
1 College of Surveying and Mapping Geographic Information, Guilin University of Technology, Guilin, Guangxi
2 College of Software and Internet of Things Engineering, Jiangxi University of Finance and Economics, Nanchang, Jiangxi
###### Abstract
Obtaining ZWD data separated from ZTD via the GPS data service platform of the China Seismological Bureau, together with the O\({}_{3}\) content detected in the air, is an effective way to analyse and study thunderstorm weather. This paper collected data from four ground-based GPS stations in the Beibu Gulf region over 10 days in August 2019. After the ZWD values and O\({}_{3}\) values were consolidated and split into training and test sets, the XGBoost algorithm was applied, and the manual adjustment method was compared with the grid search method. The results show that the model of the manual adjustment method is superior to the grid search model and the default model in accuracy and AUC value.
Thunderstorm weather, ZWD, O\({}_{3}\), XGBoost algorithm, Manual adjustment method, Grid search method
Footnote †: Corresponding author: [PERSON] - email: [EMAIL_ADDRESS]
## 1 Introduction
Thunderstorm weather is a kind of meteorological disaster, and cloud lightning is one of its forms; it endangers people's lives and poses a great threat to property safety. At present, many researchers in China use advanced machine learning and data mining to study thunderstorm prediction models. For example, one study investigated station lightning prediction technology based on the BPSO-NBayes classifier[11]. Cumulonimbus clouds are usually highly developed: at low latitudes, the top of a cumulonimbus cloud can reach 18 km, the height of the troposphere, and a cumulonimbus cloud carries a large amount of electric charge. The literature indicates that there is a clear relation between altitude and lightning current parameters[27]. Thunderstorm weather needs a large amount of water vapour as support; water vapour is positively correlated with precipitable water vapour, to which ZWD is proportional, and the literature has treated this well[31]. It has also been pointed out that tropospheric parameter models can be used to forecast thunderstorm trends[41]. However, there are few studies on thunderstorm weather thresholds that use tropospheric parameters and ozone as reference conditions. When a thunderstorm discharges, some of the oxygen in the air is converted to ozone. By integrating the ZWD separated from ZTD with the detected O\({}_{3}\) values and allocating them proportionally to training and test sets, a model with high accuracy and good classification performance can be obtained by applying the XGBoost algorithm and tuning its parameters.
## 2 XGBoost algorithm
### XGBoost algorithm idea
XGBoost belongs to the category of Boosting algorithms, whose idea is to integrate many weak classifiers into one strong classifier. XGBoost is a boosted-tree model, i.e. the product of integrating multiple tree models into a powerful classifier; the tree model used is the CART regression tree, of which the literature gives a good explanation. XGBoost grows trees by constantly splitting on features: it adds one tree at a time, each tree learning a new function that fits the residual of the previous prediction. To predict a sample's score, the sample's features determine a leaf node in each trained tree, each leaf node carries a score, and the scores of all trees are summed.
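The additive idea described above can be sketched in a few lines of plain Python (a toy illustration, not the actual XGBoost implementation): depth-1 regression stumps are fitted one at a time to the residuals of the ensemble built so far, and the final prediction is the sum of all stumps' leaf scores.

```python
# Minimal sketch of additive boosting: each new learner fits the residual
# of the ensemble built so far. Not the real XGBoost library.

def fit_stump(xs, residuals):
    """Find the 1-D split that best reduces squared error; return a predictor."""
    best = None
    for threshold in sorted(set(xs)):
        left = [r for x, r in zip(xs, residuals) if x <= threshold]
        right = [r for x, r in zip(xs, residuals) if x > threshold]
        if not left or not right:
            continue
        lmean = sum(left) / len(left)
        rmean = sum(right) / len(right)
        sse = sum((r - lmean) ** 2 for r in left) + sum((r - rmean) ** 2 for r in right)
        if best is None or sse < best[0]:
            best = (sse, threshold, lmean, rmean)
    _, t, lm, rm = best
    return lambda x: lm if x <= t else rm

def boost(xs, ys, rounds=3):
    """Additive training: the prediction is the sum of all stumps' leaf scores."""
    stumps = []
    for _ in range(rounds):
        preds = [sum(s(x) for s in stumps) for x in xs]
        residuals = [y - p for y, p in zip(ys, preds)]
        stumps.append(fit_stump(xs, residuals))
    return lambda x: sum(s(x) for s in stumps)

model = boost([1, 2, 3, 4], [1.0, 1.0, 3.0, 3.0])
```

After the first round the residuals are already zero here, so later stumps contribute nothing; on real data each round chips away at the remaining error.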
### XGBoost algorithm principle
\[\hat{y}_{i}=\sum_{j}w_{j}x_{ij} \tag{1}\]
\[f_{t}(x)=w_{q(x)},\ w\in R^{T},\ q:R^{d}\rightarrow\{1,2,\cdots,T\} \tag{2}\]
\[\hat{y}_{i}=\sum_{k=1}^{K}f_{k}(x_{i}),\ \ f_{k}\in F \tag{3}\]
where \(w_{j}\) = the weight of the jth feature
\(x_{i}\) = the ith sample
\(f_{t}(x)\) = a regression tree
\(w_{q(x)}\) = the score on the leaf node \(q(x)\) that the sample falls into
\(\hat{y}_{i}\) = the predicted total score, obtained by summing the scores of all the trees split off by the observation system
The core objective of XGBoost is:
\[Obj=\sum_{i=1}^{n}l(y_{i},\hat{y}_{i})+\sum_{k=1}^{K}\Omega(f_{k}) \tag{4}\]
The first part measures the difference between the predicted score and the real score. In the iterative process of the algorithm, all decision trees need to be taken into account: to guarantee improvement, adding a new function to the current ensemble must drive the overall mean square error downward. The other part is the regularization term:
\[\Omega(f_{t})=\gamma T+\frac{1}{2}\lambda\sum_{j=1}^{T}w_{j}^{2} \tag{5}\]
where \(\gamma\) = the penalty for each additional leaf
\(T\) = the number of leaf nodes
\(\lambda\) = the L2 regularization factor, to prevent overfitting
As mentioned above, the newly generated tree needs to fit the residual of the previous prediction. When the tree t is generated, the expression of the prediction score is:
\[\hat{y}_{i}^{(t)}=\hat{y}_{i}^{(t-1)}+f_{t}(x_{i}) \tag{6}\]
At the same time, we rewrite the objective function as:
\[L^{(t)}=\sum_{i=1}^{n}l(y_{i},\hat{y}_{i}^{(t-1)}+f_{t}(x_{i}))+\Omega(f_{t}) \tag{7}\]
Next, we need an algorithm that minimizes the objective function. XGBoost uses a second-order Taylor expansion to approximate it, so the objective function can be approximated as follows:
\[L^{(t)}\cong\sum_{i=1}^{n}\left[l(y_{i},\hat{y}_{i}^{(t-1)})+g_{i}f_{t}(x_{i})+\frac{1}{2}h_{i}f_{t}^{2}(x_{i})\right]+\Omega(f_{t})+C \tag{8}\]
where \(g_{i}\) is the first derivative and \(h_{i}\) the second derivative of the loss with respect to the prediction.
In formula (8), the loss of the first \(t-1\) models is already a fixed value, so we place it in the constant term, and formula (8) can be simplified to formula (9):
\[\tilde{L}^{(t)}=\sum_{i=1}^{n}\left[g_{i}f_{t}(x_{i})+\frac{1}{2}h_{i}f_{t}^{2}(x_{i})\right]+\Omega(f_{t})+C \tag{9}\]
Since the constant has no effect on the optimization solution, we remove it. Summing the per-sample losses in formula (9), and noting that every sample falls into exactly one leaf node, we can regroup all samples belonging to the same leaf node:
\[\begin{split}Obj^{(t)}&=\sum_{i=1}^{n}\left[g_{i}f_{t}(x_{i})+\frac{1}{2}h_{i}f_{t}^{2}(x_{i})\right]+\Omega(f_{t})\\ &=\sum_{i=1}^{n}\left[g_{i}w_{q(x_{i})}+\frac{1}{2}h_{i}w_{q(x_{i})}^{2}\right]+\gamma T+\frac{1}{2}\lambda\sum_{j=1}^{T}w_{j}^{2}\\ &=\sum_{j=1}^{T}\left[\Big(\sum_{i\in I_{j}}g_{i}\Big)w_{j}+\frac{1}{2}\Big(\sum_{i\in I_{j}}h_{i}+\lambda\Big)w_{j}^{2}\right]+\gamma T\end{split} \tag{10}\]
Rewriting the above, the objective function becomes a quadratic function of each leaf node score, so its optimum can be found directly with the vertex formula. Writing \(G_{j}=\sum_{i\in I_{j}}g_{i}\) and \(H_{j}=\sum_{i\in I_{j}}h_{i}\), the optimal weights and the optimal objective value are:
\[w_{j}^{*}=-\frac{G_{j}}{H_{j}+\lambda} \tag{11}\]
\[Obj=-\frac{1}{2}\sum_{j=1}^{T}\frac{G_{j}^{2}}{H_{j}+\lambda}+\gamma T \tag{12}\]
### Split-node algorithm
Constructing the optimal decision tree by enumerating all partitions is an NP-hard problem; it is impossible to traverse all tree structures. The XGBoost algorithm therefore follows the same idea as the CART regression tree and uses a greedy algorithm that traverses all candidate split points of all features, except that the objective function value above is used as the evaluation function. Concretely, a split is accepted only when the objective value after splitting improves on that of the original leaf node by more than a threshold, which also limits the growth depth of the tree.
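Using the notation of Eqs. (11)-(12), the evaluation function of the greedy split search can be sketched as follows (a minimal illustration with assumed helper names, not the library's internals): the gain of a candidate split is the parent leaf's objective minus the sum of the two children's objectives, less the penalty \(\gamma\), and the split is kept only when this gain is positive.

```python
def leaf_weight(G, H, lam):
    # Optimal leaf score, Eq. (11): w* = -G / (H + lambda)
    return -G / (H + lam)

def leaf_obj(G, H, lam):
    # One leaf's contribution to Eq. (12): -(1/2) * G^2 / (H + lambda)
    return -0.5 * G * G / (H + lam)

def split_gain(GL, HL, GR, HR, lam=1.0, gamma=0.0):
    """Objective reduction when a leaf with gradient/hessian sums
    (GL + GR, HL + HR) is split into left and right children."""
    before = leaf_obj(GL + GR, HL + HR, lam)
    after = leaf_obj(GL, HL, lam) + leaf_obj(GR, HR, lam)
    return before - after - gamma
```

Expanding `before - after` recovers the familiar gain formula \(\frac{1}{2}\big[\frac{G_L^2}{H_L+\lambda}+\frac{G_R^2}{H_R+\lambda}-\frac{(G_L+G_R)^2}{H_L+H_R+\lambda}\big]-\gamma\).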
### Prediction model based on XGBoost algorithm
In this paper, based on the observations of ZWD and ozone in the air during thunderstorm and non-thunderstorm periods, the threshold for producing thunderstorms is analyzed and predicted. The flow chart of the design is as follows:
#### 2.4.1 Data sources
The data sources in this paper are the local thunderstorm hours provided by the GNSS data products of the China Earthquake Administration, the ozone data provided by the Environmental Knowledge Service System, and the ZWD data provided by the GPS data service platform of the China Seismological Bureau. The data were collected from August 1 to 10, 2019. Four ground-based GPS stations were established in the Beibu Gulf area, at Nanning, Beihai, Zhanjiang, and Haikou.

Figure 1. XGBoost prediction model flow chart
#### 2.4.2 Data pre-processing
All thunderstorm periods of the four ground-based GPS stations in the Beibu Gulf were labelled 1, and non-thunderstorm periods 0. With a large sample size there is an enormous number of randomly selected combinations, and different random combinations of the data would also affect the results; exhaustive enumeration therefore has no practical value in this paper and is excluded. Instead, the ZWD data and ozone detection values are integrated together (each point is in hours) while preserving the time sequence, and distributed to the training set and the test set in a 9:1 ratio.
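A minimal sketch of such a time-ordered 9:1 split (the record values below are invented purely for illustration):

```python
def chronological_split(samples, train_frac=0.9):
    """Split a time-ordered list of samples 9:1 without shuffling,
    so the test set is strictly later in time than the training set."""
    cut = int(len(samples) * train_frac)
    return samples[:cut], samples[cut:]

# Hypothetical hourly records: (ZWD value, O3 value, thunderstorm label 0/1)
records = [(310 + i, 40 + i % 5, 1 if i % 7 == 0 else 0) for i in range(240)]
train, test = chronological_split(records)
```

Keeping the chronological order avoids leaking future observations into the training set, which matters for weather data.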
#### 2.4.3 Parameter Optimization
The results of the XGBoost algorithm depend heavily on its parameters, which include task parameters, general parameters, and auxiliary parameters. Task parameters determine the type of boosting model; auxiliary parameters are determined by the boosting model. Starting from the general parameters, this paper runs experiments and analyses on the default configuration, the manual adjustment method, and the grid search method of the XGBoost algorithm. The grid search takes the AUC value as its optimization target: by setting the parameter ranges and the search step size, it finds the best parameters within those ranges. The booster can be one of two types, the gbtree or the gblinear model; this paper uses the default, gbtree. The following table lists the general parameters involved:
| Parameter | Explanation |
|---|---|
| min_child_weight | Minimum sum of instance weights (hessian) needed in a child node |
| gamma | Minimum loss reduction required for a node to split |
| scale_pos_weight | Controls the balance of positive and negative class weights |
| max_depth | Maximum depth of each tree |
| n_estimators | Number of iterations (trees) |

Table 1. Partial parameter specification table
### Experimental analysis and results
According to the data partition in Section 2.4.2, the data are trained and tested. The data are put into the XGBoost algorithm and compared in two ways: the first analysis is based on the accuracy rate, the other on the AUC value. First, based on accuracy, the default model is compared with the model of the manual adjustment method, as shown in the following figure:
Secondly, the optimal parameters were obtained by the grid search method; around these parameters, we manually adjusted the parameters about 10 times and took the average. The comparison between the grid search method and the manual adjustment method based on the AUC value is shown in the following figure: over the iterations, the default model scored lower on AUC than the manual adjustment method, i.e. the default model is superior to the manual adjustment method in prediction precision but worse in classification effect. With the AUC value as the standard, the AUC value of the grid search method is far better than the average of the nearby manual adjustments, while its overall prediction accuracy is 2.2% lower.
## Acknowledgements
This work was sponsored by the National Natural Science Foundation of China (41664002; 41704027); the Guangxi Natural Science Foundation of China (2018GXNSFAA294045; 2017GXNSFDA198016; 2017GXNSFBA198139); the "Ba Gui Scholars" program of the provincial government of Guangxi; and the Guangxi Key Laboratory of Spatial Information and Geomatics (14-045-24-10; 16-380-25-01).
## References
* [1] [PERSON], [PERSON], [PERSON], [PERSON]. Node Splitting of R-tree with Multi-objective Form and Position[J]. _Modular Machine Tool & Automatic Manufacturing Technique_. 2017.
* [2] [PERSON], [PERSON]. Lightning a city prediction based on IFCM-T-S[J]. _Foreign Electronic Measurement Technology_. 2019.
* [3] [PERSON], [PERSON], [PERSON], [PERSON]. Intelligent Parameter Adjustment XGBoost and Its Application in Telecom Marketing[J]. _Monthly Focus_. 2018.
* [4] [PERSON], [PERSON]. Research on AlexNet Improvement and Optimization Method[J]. _Computer Engineering and Applications_. 2019.
* [5] [PERSON], [PERSON], [PERSON], [PERSON], [PERSON]. Research on forecasting technology of thunderstorm interpretation based on BPSO-NBayes[J]. _Journal of the Meteorological Sciences_. 2018.
* [6] [PERSON], [PERSON], [PERSON], [PERSON]. Research on BP-ANN Models of Lightning Prediction with Spatio-temporal Characteristics[J]. _Computer and Modernization_. 2019.
* [7] [PERSON], [PERSON], [PERSON], [PERSON]. The Improvement and Application of Xgboost Method Based on the Bayesian Optimization[J]. _Journal of Guangdong University of Technology_. 2018.
* [8] [PERSON], [PERSON], [PERSON], [PERSON], [PERSON]. Effect of Altitude on the Parameters of Lightning Current[J]. _Insulators and Surge Arresters_. 2016.
* [9] [PERSON], [PERSON], [PERSON], [PERSON], [PERSON]. Novel Real-Time System for Traffic Flow Classification and Prediction[J]. _Special Topic_. 2019.
| Parameter | default | grid search method | manual adjustment method |
|---|---|---|---|
| max_depth | 3 | 4 | 5 |
| min_child_weight | 1 | 0 | 1 |
| gamma | 0 | 0.4 | 0.3 |
| scale_pos_weight | 1 | 8 | 1 |
| n_estimators | 100 | 52 | 1000 |

Table 2. Parameter difference table
Figure 7. The result of the model of the grid search method
Figure 8. The result of the average after adjusting parameters
---
THUNDERSTORM WEATHER ANALYSIS BASED ON XGBOOST ALGORITHM
L. L. Liu, H. C. Liu, C. F. Zhu
https://doi.org/10.5194/isprs-archives-xlii-3-w10-261-2020
2020, CC-BY
---
# Comparing INSPIRE and OpenStreetMap data: how to make the most out of the two worlds
[PERSON]\({}^{1,}\), [PERSON]\({}^{1}\), [PERSON]\({}^{1}\)
\({}^{1}\) European Commission, Joint Research Centre (JRC), Via [PERSON] 2749, 21027 Ispra, Italy
(marco.minghini, alexander.kotsev, michael.lutz)@ec.eu
###### Abstract
The beginning of our century has seen the rise of Spatial Data Infrastructures (SDIs) and crowdsourced geographic information projects. This study analyses and compares the most relevant initiatives for Europe in both contexts: INSPIRE, the Directive aiming to establish a pan-European SDI used for environmental policies, and OpenStreetMap (OSM), the largest and richest crowdsourced geospatial database. Similarities and differences, advantages and disadvantages of the two initiatives from an end user perspective are presented for a number of characteristics: underlying approach and governance, spatial scope, data structure and encoding, data access, and licensing framework. Overall, both initiatives have developed specific strengths and have achieved different types and degrees of interoperability, which would make their integration highly beneficial to multiple stakeholders. From the pure technical perspective, such integration is fully enabled by the maturity of the available FOSS4G, which offers specific support for both INSPIRE and OSM resources, also reviewed in the paper.
Geospatial Data, INSPIRE, Interoperability, OpenStreetMap, Spatial Data Infrastructures, Volunteered Geographic Information +
Footnote †: https://doi.org/10.5194/isprs-archives-XLI-4-W14-167-2019
database) is just over 1 million (https://osmstats.neis-one.org). The global extent, richness and level of detail of the OSM database have attracted a high academic interest ([PERSON] et al., 2015) as well as an increasing exploitation by a number of actors to build a complex infrastructure of services and applications ([PERSON], [PERSON], 2017). Because of this, OSM can be considered a crowdsourced SDI. The most relevant features of the OSM project are described in more detail in Section 2.
The remainder of the paper is organised as follows: Section 2 represents the core of the work, providing a one-to-one comparison between INSPIRE and OSM on several specific aspects, ranging from the technical to the legal and organisational ones, with the goal of identifying similarities and differences and highlighting pros and cons of the two initiatives. Section 3 provides an overview of the most popular open source software solutions providing specific support for INSPIRE and OSM resources. Section 4 concludes the paper by discussing the outcomes of the INSPIRE-OSM comparison in the broader context of the integration between SDI and VGI data, highlighting the opportunities to make the most out of them, as well as the associated challenges.
## 2 INSPIRE-OPENSTREETMAP COMPARISON
Table 1 provides a synthesised comparison between INSPIRE and OSM for a number of characteristics, starting from the managerial and organisational ones, and then diving into more technical aspects about the data produced by the two initiatives. For each analysed characteristic, the similarities and differences are elaborated in further detail in the following subsections.
### Approach
The main difference between INSPIRE and OSM is their underlying approach. Coordinated by the European Commission (EC) and the European Environment Agency (EEA), INSPIRE has been conceived in a top-down direction, since the common Implementing Rules required by the Directive - adopted as Commission Decisions or Regulations and covering the core components of the infrastructure - are legally binding for public authorities in the EU MS. In other words, they must implement the INSPIRE legal requirements by the target dates specified in the roadmap (https://inspire.ec.europa.eu/inspire-roadmap). Since the INSPIRE Implementing Rules are EU legislation, their implementation can be enforced and non-compliance might ultimately lead to infringement procedures. However, at the same time, since its birth INSPIRE has been implemented as a highly participatory initiative. In fact, the development of the INSPIRE legal and technical documents and of the maintenance and implementation framework were based on an open and inclusive process, involving experts from the stakeholder community in the MS. MS representatives also have an important role in the INSPIRE governance structure (https://inspire.ec.europa.eu/whose-inspire/57734), for example within the Maintenance and Implementation Group (MIG). In addition, experts from the INSPIRE community can discuss implementation issues on the Community Forum (https://inspire.ec.europa.eu/forum) as well as through helpdesk channels dedicated to specific implementation tools, e.g. those for the INSPIRE Geoportal and the INSPIRE Reference Validator (https://github.com/inspire-eu-validation/community). These discussions are closely monitored by the INSPIRE technical and political coordinators and sometimes lead to agreed changes to the official legal and/or technical documentation. Finally, the INSPIRE Conference is the annual event gathering the INSPIRE community, formed by MS representatives, data providers and INSPIRE implementers, companies providing technical support for INSPIRE, stakeholders and users, and EC and EEA staff (https://inspire.ec.europa.eu/portfolio/inspire-conferences).
OSM has developed in the opposite, bottom-up direction. The very idea of the project, initiated in 2004 by then M.Sc. student [PERSON], was to crowdsource the mapping of the whole world through the contributions of a large number of users, each having local knowledge of a specific area ([PERSON], [PERSON], 2017). As a consequence, over time the OSM database has grown through a fully spontaneous process, largely driven by the enthusiasm of volunteers willing to put their time and effort into creating an openly-licensed product from which everyone can benefit (see Subsection 2.4). OSM is supported, but not controlled, by the OpenStreetMap Foundation (OSMF), a not-for-profit organisation which provides legal support to the project, maintains its server infrastructure, and promotes fund-raising to ensure its sustainability (https://wiki.osmfoundation.org/wiki/Main_Page). The OSMF has its own governance structure, mainly composed of a Board and a number of Working Groups supporting OSM in specific areas (licensing issues, vandalism, communication, etc.). Despite the presence of the OSMF, OSM contributors are the only owners of the database. Similarly to the case of INSPIRE, the OSM community also meets annually in a global event named 'State of the Map', which attracts users and developers as well as public administrations, companies and researchers working with OSM data (https://wiki.openstreetmap.org/wiki/State_Of_The_Map).
### Spatial scope
The goal of INSPIRE is the creation of a European-wide SDI for the purposes of European environmental policies, and policies or activities which may have an impact on the environment. As such, the Directive addresses 34 so-called spatial data themes relevant for environmental applications, which are listed and defined in the Annexes of the Directive (European Commission, 2007). The themes of Annex I and partly Annex II define a spatial reference framework which the remaining themes refer to (see Figure 1). For each theme, the INSPIRE data models (see Subsection 2.3) define and rigorously document on a conceptual level one or more spatial data object types to be used for sharing the data. INSPIRE themes include a total of about 340 spatial object types (http://inspire-regadmin.jrc.ec.europa.eu/dataspecification/CatalogueINSPIREObjects.action). In a nutshell, INSPIRE data pertains to very specific geospatial domains (e.g. transportation, statistics, ecology, meteorology, oceanography). While more general and non-environmental geospatial datasets (such as points of interest) are not explicitly included in the spatial scope of INSPIRE, they can still be extracted from multiple specific themes.

| Characteristic | INSPIRE | OpenStreetMap |
|---|---|---|
| Approach | top-down | bottom-up |
| Spatial scope | 34 environmental spatial data themes | any spatial object (verifiable) |
| Data structure and encoding | complex data model, GML encoding | flat data model, GDAL-supported formats |
| Data CRS | INSPIRE-specific CRSs | WGS84 |
| Data access | OGC-compliant services | APIs, Planet File, predefined extracts |
| Data license | different, depending on MS data providers | ODbL |

Table 1: Synthesised comparison between INSPIRE and OSM
Conversely, OSM was started with the goal of producing a database of streets (hence the name 'OpenStreetMap') but soon evolved into the most diverse geospatial database available. In the open spirit of the project, any object having a physical location on the Earth's surface and being verifiable, i.e. provable to be true or false (https://wiki.openstreetmap.org/wiki/Verifiability), can be added (at any time and by any contributor) to the OSM database. Consequently, a documented list of all OSM objects has been produced and agreed upon over time by the community. The list is maintained on a dedicated _Map Features_ wiki page (https://wiki.openstreetmap.org/wiki/Map_Features), which evolves dynamically as new object types are created. This happens through a collaborative procedure, i.e. the proposal to add a new object type is presented (by properly justifying its need and impact) and the OSM community openly votes for acceptance or rejection. The result is a highly diversified list of several hundreds of object types (including indoor object types) pertaining to almost any geospatial domain. Thus, compared to INSPIRE, the spatial scope of OSM is in general wider, but - as a consequence of the verifiability principle - the database does not include historic events (such as environmental observations) and objects that do not exist anymore in the real world.
### Data structure and encoding
One of the greatest differences between INSPIRE and OSM concerns the way to model and encode data. For each of the 34 spatial data themes, INSPIRE data models have been originally defined through the involvement of a large number of stakeholders from MS as well as domain-specific experts. From an interoperability perspective, the overall goal was to define models that are sufficiently articulated and capture the peculiar characteristics to be used in all European Union MS to refer to the same types of real-world geospatial entities (for example, the many different ways used in MS to define addresses or protected sites).
INSPIRE conceptual models for all spatial data themes are defined using the Unified Modelling Language (UML). These models, accessible online from a common UML repository (https://inspire.ec.europa.eu/Data-Models/Data-Specifications/2892), represent the foundation of the INSPIRE Implementing Rules and the corresponding data specification Technical Guidance documents, the latter specifying the technical approaches that MS can adopt in order to satisfy the legal obligations of the Implementing Rules. The INSPIRE UML models have been created based on a number of European use cases in each particular domain. This, together with the specific modelling approach adopted, has resulted in a general sophistication. By way of example, models include complex (i.e. non-simple) attributes and data types, properties with multiplicity greater than 1, and a wide range of available geometry types (including mixed geometries). The INSPIRE Implementing Rules on the interoperability of spatial data sets and services allow the use of any encoding rule which conforms to EN ISO 19118 (International Organization for Standardization, 2011), specifies schema conversion rules for all spatial object types and all attributes and association roles and the output data structure used, and is made publicly available (European Commission, 2010). However, the default encoding rule for all INSPIRE data themes maps INSPIRE UML models into Geography Markup Language (GML) application schemas (XML schemas). They are made available in the INSPIRE schema repository (https://inspire.ec.europa.eu/schemas). Compliance of GML datasets with the requirements of the Technical Guidance documents is tested through the INSPIRE Validator (https://inspire.ec.europa.eu/validator). Recently, an alternative rule has been developed by the INSPIRE MIG (https://github.com/INSPIRE-MIF/2017.2) proposing an encoding of INSPIRE data that departs from the same UML conceptual model and is based on the GeoJSON standard ([PERSON] et al., 2016).
OSM's conceptual data model of the physical world is simpler. Any OSM object is merely described through the combination of an element (specifying the object geometry) and a list of one or more tags (defining the object attributes) ([PERSON] et al., 2010). OSM elements can be of three types: nodes, used to represent standalone point features and defined by a latitude and a longitude; ways, i.e. ordered lists of between 2 and 2,000 nodes, which represent both linear and polygon features; and relations, i.e. multi-purpose data structures documenting relationships between two or more elements (nodes, ways, and/or other relations) (https://wiki.openstreetmap.org/wiki/Elements). Tags, consisting of simple key/value pairs, are associated with each OSM element to describe its properties (https://wiki.openstreetmap.org/wiki/Tags). In the open spirit of the OSM project, the _Map Features_ wiki page (https://wiki.openstreetmap.org/wiki/Map_Features), as well as all the wiki pages reachable from it, lists the recommended tags agreed by the community, but - in contrast to INSPIRE, where models have been created based on a number of European use cases - OSM contributors are in principle free to define and use their own tags. For example, it may happen that national OSM communities agree to introduce additional tags to describe specific properties of national or local importance. Regardless of this, the fundamental difference when compared to INSPIRE is the flat OSM data structure, which allows OSM data to be encoded in any available vector format supported by e.g. the Geospatial Data Abstraction Library (GDAL, https://gdal.org) without any loss of information. As discussed later in Section 3, this ensures wide client support for consuming OSM data. However, the original OSM data format (provided e.g. by the OSM API, see Subsection 2.4) is XML-based.
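The element/tags structure and the XML-based format can be illustrated with a minimal, hand-written OSM fragment parsed with the Python standard library; the node id, coordinates and tag values are invented for the example.

```python
import xml.etree.ElementTree as ET

# Minimal hand-written fragment in the XML-based OSM data format:
# one node element carrying two key/value tags.
osm_xml = """<osm version="0.6">
  <node id="1" lat="46.0678" lon="11.1211">
    <tag k="amenity" v="drinking_water"/>
    <tag k="name" v="Fontana"/>
  </node>
</osm>"""

root = ET.fromstring(osm_xml)
node = root.find("node")
# Collect the tags of the element into a plain key/value dictionary.
tags = {t.get("k"): t.get("v") for t in node.findall("tag")}
print(node.get("lat"), node.get("lon"), tags)
```

The same flat structure (geometry plus a tag dictionary) maps directly onto any vector format with attribute tables, which is why generic GIS tools can consume OSM data natively.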
#### Coordinate Reference System
The Coordinate Reference System (CRS) in which INSPIRE and OSM data are provided deserves a separate discussion. To ensure interoperability, INSPIRE mandates the use of specific, pan-European CRSs, e.g. using geodetic coordinates based on the ETRS89 or the ITRS datum, or plane coordinates based on the ETRS89 datum and the Lambert Azimuthal Equal Area, Lambert Conformal Conic, or Transverse Mercator projections. Common three-dimensional CRSs (Cartesian and geodetic) are also defined (European Commission, 2010). However, since in many cases the effect of this requirement is that MS have to create, store and maintain data in both their national CRS and one of the INSPIRE-required CRSs, or to use the Download and View Services to provide the required CRSs, the INSPIRE expert group is currently discussing a mechanism that would make it easier to allow additional CRSs in order to lower the burden for implementers. In such a case, CRS transformations would need to be implemented using available tools, or libraries such as GDAL and PROJ (https://proj.org). OSM data are instead provided in the WGS84 CRS (with no three-dimensional component), the reason being the use of GPS devices to collect street data when the project was originally started. Since OSM editors (based on the OSM API, see Subsection 2.4) only allow contributors to add OSM data in WGS84, this already ensures full CRS compatibility for the whole database.

Figure 1: INSPIRE spatial data themes, divided into three Annexes (source: European Commission, 2007)
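In practice one would delegate CRS transformations to PROJ or GDAL, as noted above; purely as a sketch of the underlying geodesy, the snippet below converts geodetic coordinates to Earth-centred Cartesian (X, Y, Z) coordinates on the GRS80 ellipsoid used by ETRS89, the kind of three-dimensional Cartesian CRS the regulation defines. The formulas are the standard textbook ones.

```python
import math

# GRS80 ellipsoid constants (used by the ETRS89 datum).
A = 6378137.0                 # semi-major axis (m)
F = 1 / 298.257222101         # flattening
E2 = 2 * F - F * F            # first eccentricity squared

def geodetic_to_ecef(lat_deg, lon_deg, h=0.0):
    """Geodetic (lat, lon, ellipsoidal height) -> Earth-centred X, Y, Z."""
    lat, lon = math.radians(lat_deg), math.radians(lon_deg)
    # Prime vertical radius of curvature at this latitude.
    n = A / math.sqrt(1 - E2 * math.sin(lat) ** 2)
    x = (n + h) * math.cos(lat) * math.cos(lon)
    y = (n + h) * math.cos(lat) * math.sin(lon)
    z = (n * (1 - E2) + h) * math.sin(lat)
    return x, y, z

# On the equator at the Greenwich meridian, X equals the semi-major axis.
x, y, z = geodetic_to_ecef(0.0, 0.0)
```

Real workflows should still use PROJ/GDAL, which additionally handle datum shifts and the projected CRSs (LAEA, LCC, TM) mentioned above.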
### Data access
As a world-class example of an SDI, INSPIRE is based on the set of core components mentioned in Section 1. A key role in the infrastructure is played by metadata, based on the established EN ISO 19115 and EN ISO 19119 standards and allowing users to find the data published by European Union MS described in Subsections 2.2 and 2.3. Both data and metadata are shared through a Service-Oriented Architecture (SOA) approach, where the so-called INSPIRE Network Services are set up based on OGC standards: 'Discovery Services', to establish access to metadata through the Catalogue Service for the Web (CSW); 'View Services', to provide interactive data visualisations through the Web Map Service (WMS) and Web Map Tile Service (WMTS); and 'Download Services', to offer download of raw data through the Atom Syndication Format ([PERSON], [PERSON], 2005), the Web Feature Service (WFS), the Web Coverage Service (WCS), or the Sensor Observation Service (SOS). Similarly to the case of data, the compliance of metadata and services with the INSPIRE Technical Guidance documents is tested through the INSPIRE Validator (https://inspire.ec.europa.eu/validator).
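The Network Services above are typically invoked as key-value-pair (KVP) HTTP requests. As a sketch, the snippet below builds a CSW `GetRecords` (Discovery) and a WFS `GetFeature` (Download) request; the endpoint URL and the `ad:Address` feature type are placeholders, while the parameter names follow the OGC CSW 2.0.2 and WFS 2.0 KVP conventions.

```python
from urllib.parse import urlencode

# Hypothetical service endpoint of a Member State; not a real URL.
endpoint = "https://example.eu/inspire/ows"

# Discovery Service: a CSW GetRecords request searching the catalogue.
discovery = endpoint + "?" + urlencode({
    "service": "CSW", "version": "2.0.2", "request": "GetRecords",
    "typeNames": "csw:Record", "constraintLanguage": "CQL_TEXT",
})

# Download Service: a WFS GetFeature request for (placeholder) Address data.
download = endpoint + "?" + urlencode({
    "service": "WFS", "version": "2.0.0", "request": "GetFeature",
    "typeNames": "ad:Address", "count": "10",
})

print(discovery)
print(download)
```

Any OGC-compliant client library simply assembles and issues such requests on the user's behalf.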
Any OGC-compliant client application implementing those standards is thus able to access data and metadata exposed by MS (see also Section 3 in the following). However, another core component of the INSPIRE SDI is the Geoportal (http://inspire-geoportal.ec.europa.eu), acting as the main client application of the whole infrastructure and providing a central point of access to the whole set of services from MS organisations. The INSPIRE Geoportal does not store geospatial data. Instead, it exposes data by harvesting the CSW endpoints made available by MS. In addition to data access, for each MS the Geoportal provides statistics on the number of available resources: metadata records, datasets available through View Services, and datasets available through Download Services (see Figure 2, corresponding to the situation as of June 7, 2019). As demonstrated by the difference between the number of metadata records and the number of downloadable datasets (both for the single MS and as a whole), full implementation of INSPIRE has not yet been achieved. This is also proven by the fact that MS datasets published in the Geoportal include both datasets compliant with the INSPIRE data models (see Subsection 2.3) as well as _as-is_ (i.e. non-compliant) datasets. By selecting a MS or an INSPIRE theme, the Geoportal allows users to browse available data (through their metadata), view and download them.
Data access happens in a totally different fashion for OSM. Thanks to the relative simplicity of its conceptual model, OSM data can be easily accessed from a variety of sources and in a variety of formats. Metadata catalogues are also not needed, since metadata information is in large part already included in the tags of OSM objects, and data search/access is only based on tags. The easiest way to download data is through the OSM website (https://openstreetmap.org), selecting the _Export_ functionality and defining the bounding box of interest. Application Programming Interfaces (APIs) are also available, which offer programmatic data access to the OSM database. The OSM API (https://wiki.openstreetmap.org/wiki/API), used by OSM editors (https://wiki.openstreetmap.org/wiki/Editors), provides read and write access to the database, while the Overpass API (https://wiki.openstreetmap.org/wiki/Overpass_API), mostly used from the popular web front-end Overpass Turbo (https://overpass-turbo.eu), provides read-only access with customised query capabilities, which makes it ideal for data download. One of the peculiar characteristics of OSM is the availability - together with the database - of its history, which includes the whole set of edits performed on each OSM object and represents an extremely interesting data source for researchers, e.g. to study OSM spatio-temporal evolution ([PERSON] et al., 2018). The ohsome platform (https://ohsome.org) was recently developed to provide API-based access to the OSM history.

Figure 2: Availability of metadata, viewable and downloadable datasets on the INSPIRE Geoportal
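A query for the Overpass API is written in Overpass QL. As a sketch, the snippet below assembles a minimal query (of the kind one would paste into Overpass Turbo) selecting all drinking-water nodes inside a bounding box; the coordinates are arbitrary example values.

```python
# Bounding box in Overpass order: (south, west, north, east).
# The values below are arbitrary example coordinates.
bbox = (45.0, 7.0, 45.1, 7.1)

# Minimal Overpass QL query: JSON output, 25 s server timeout,
# all nodes tagged amenity=drinking_water inside the bbox.
query = (
    "[out:json][timeout:25];"
    'node["amenity"="drinking_water"]({},{},{},{});'
    "out body;"
).format(*bbox)

print(query)
```

Submitting such a query (e.g. via an HTTP POST to an Overpass endpoint) returns the matching elements with their tags, which is what the QGIS plugins discussed in Section 3 do under the hood.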
Another popular way to access OSM data is through the Planet OSM file (https://planet.openstreetmap.org), a weekly-updated file including the global OSM database. Along with the Full History Planet OSM, i.e. the version which also includes the whole OSM history (https://planet.openstreetmap.org/planet/full-history), it is available in the standard XML format as well as the Protocolbuffer Binary Format (PBF). Finally, a number of companies and organisations offer predefined OSM extracts available for download. These are obtained from a pre-processing of the OSM database to e.g. cover specific areas and/or include only specific objects (buildings, road networks, land use features, etc.) and are offered in multiple formats and CRSs. Examples are the OSM data extracts provided by Geofabrik (http://download.geofabrik.de), Interline (https://www.interline.io/osm/extracts), and the HSR University of Applied Sciences (https://osmaxx.hsr.ch), as well as those that can be dynamically generated through the Humanitarian OpenStreetMap Team (HOT) Export Tool (https://export.hotosm.org/en/v3).
#### Data license
Licensing constitutes another fundamental point of difference in the INSPIRE and OSM data sharing approaches. The INSPIRE Directive does not provide any obligation on the license under which MS data shall be made available. As a consequence, the infrastructure has developed in a very heterogeneous way in terms of data licenses. Many datasets are published under open access licenses, while many others are missing license information, or are subject to different, and sometimes restrictive, conditions on their access and use. Both standard licenses (e.g. belonging to the CC BY family) and customised licenses (often provided only in national languages) are specified in the dataset metadata. In addition, INSPIRE allows MS to restrict the view and download of datasets under certain conditions, e.g. if access to those datasets might adversely affect public security or national defence (European Commission, 2007). The result is a heterogeneous picture, which sometimes makes it difficult for end users to understand which legal conditions apply to the use of datasets obtained from combining two or more INSPIRE datasets. In contrast, the whole OSM database is available under a single open access license, the Open Database License (ODbL) (Open Data Commons, 2019). This license allows everyone to freely copy, distribute, transmit and adapt the data, as long as credit is given to OSM and its contributors; also, when altering or building upon the OSM database, the result shall be distributed under the same licence (https://www.openstreetmap.org/copyright).
## 3 FOSS4G for INSPIRE and OSM
A multitude of software tools can be used in order to search, access, visualise, analyse and process INSPIRE and OSM data. Clearly, considering the fact that both OSM and INSPIRE data are geospatial by nature, there is a certain overlap between the tools for their creation, maintenance and consumption. There are however some noteworthy differences. Firstly, while INSPIRE is supported by both proprietary and open source software solutions, because of its very nature OSM has mostly stimulated the development of new open source software. As a contribution to the conference where this work is presented, the following discussion focuses on the most popular Free and Open Source Software for Geospatial applications (FOSS4G) providing specific support for INSPIRE and OSM resources. In other words, the discussion only concerns software (or parts thereof) which is specifically developed to address the peculiar characteristics of INSPIRE and OSM described in Section 2.
An inventory of tools useful for INSPIRE implementation is available at https://inspire-reference.jrc.ec.europa.eu/tools. These tools include both proprietary and open source solutions, ranging from desktop/server software, libraries, plugins and online services to other technical products suitable to share and consume INSPIRE data, metadata and services. Only relevant FOSS4G tools are described in the following. Regarding data discovery, specific support for INSPIRE is provided by GeoNetwork opensource (https://geonetwork-opensource.org), used by more than half of the European Union MS to set up their national catalogues; pycsw (https://pycsw.org), an implementation of CSW written in Python and implementing INSPIRE Discovery Services; and deegree (https://www.deegree.org), which comes with an INSPIRE workspace to help provide the services required by INSPIRE. Deployment of INSPIRE services for data visualisation and download can be achieved through a number of FOSS4G solutions. GeoServer (http://geoserver.org) provides an INSPIRE extension (https://docs.geoserver.org/stable/en/user/extensions/inspire) offering INSPIRE-specific configuration for WMS, WMTS, WFS and WCS capabilities documents.
Another GeoServer extension is the application schema support (app-schema, https://docs.geoserver.org/maintain/en/user/data/app-schema), which offers WFS support for complex feature types conforming to a GML application schema. MapServer (https://mapserver.org) also allows deploying INSPIRE-compliant View Services (https://mapserver.org/ogc/inspire.html) and Download Services (https://www.mapserver.org/ogc/inspire_dl.html). Another widely used geospatial web server is deegree, which, in addition to CSW, also provides an INSPIRE-compliant implementation of WMS, WMTS, WFS and WCS. Instead, the most successful open source product to serve INSPIRE-compliant spatio-temporal observation data from sensors is 52°North SOS (https://52north.org/software/software-projects/sos). INSPIRE services and geoportals can also be created using the Mapbender framework (https://www.mapbender.org), which is especially used in Germany, as well as GeoNode (http://geonode.org), a well-known web-based platform used to deploy SDIs, which is built with pycsw embedded as the default CSW component (with GeoNetwork opensource and deegree configurable as alternate CSW servers) and GeoServer as the default OGC web services (OWS) component.
As demonstrated in an ongoing study on INSPIRE client support (https://github.com/INSPIRE-MIF/caniuse), the most popular open source geospatial web clients OpenLayers (https://openlayers.org) and Leaflet (https://leafletjs.com) provide no support for INSPIRE GML data. This is among the reasons that led to the recent activity of the INSPIRE MIG on creating an INSPIRE UML-to-GeoJSON encoding rule (as mentioned in Subsection 2.3), since the GeoJSON format is especially suitable for data consumption on the Web. Regarding desktop clients, the FOSS4G tool providing the highest support for INSPIRE GML data is QGIS (https://qgis.org). It offers a number of ad hoc plugins to enable the full consumption of INSPIRE data: for QGIS 3+, the most powerful one is 'GML Application Schema Toolbox' (https://github.com/BRGM/gml_application_schema_toolbox), explicitly developed to allow manipulating GML application schema datasets in QGIS. Several plugins instead allow directly querying and adding in QGIS the INSPIRE datasets published by MS, e.g. the 'INSPIRE Nederland plugin voor QGIS' (https://plugins.qgis.org/plugins/inspireNL). In addition, GRASS GIS (https://grass.osgeo.org) offers a metadata editor to create and edit metadata compliant with the INSPIRE profile.
Finally, ETL (Extract, Transform, Load) open source solutions allow INSPIRE data providers to map and transform their native datasets into data that validates against the INSPIRE schemas. This is assisted by the presence of ready-to-use mapping tables for all INSPIRE themes, available at https://inspire.ec.europa.eu/Data-Models/Data-Specifications. The _de facto_ standard, which is most used by European data providers, is hale studio (https://www.wetransform.to/products/halestudio); an alternative is GeoKettle (http://www.spatialytics.org/projects/geokettle).
In the case of OSM, due to the open-access availability of the database and the programmatic access to it provided by existing APIs (see Subsection 2.4), the number of available software tools is huge. A comprehensive review of the most popular OSM-based applications for data editing, data download, visualisation, routing and quality assurance was recently compiled ([PERSON], [PERSON], 2017). There is usually no need for FOSS4G to provide specific OSM support, since the flat and simple structure of OSM data (described in Subsection 2.3) allows open source desktop, web-based and mobile tools to natively load, visualise and process them. Customised tools are mostly available for GIS client applications. For example, OSM data can be loaded in QGIS using specific plugins such as "QuickOSM" (https://plugins.qgis.org/plugins/QuickOSM) and "OSMDownloader" (https://plugins.qgis.org/plugins/OSMDownloader), both based on the Overpass API. OSM basemaps can instead be loaded using the plugins "QuickMapServices" (https://plugins.qgis.org/plugins/quick_map_services) or "OpenLayers" (https://plugins.qgis.org/plugins/openlayers_plugin).
Orfeo Toolbox (https://www.orfeo-toolbox.org), an open source software dedicated to remote sensing, provides a specific application named 'OSMDownloader' to download OSM data from the main server and use them as reference data to train classification models (http://tiny.cc/z0j27y). Similar support for OSM is also provided by the desktop software GRASS GIS, in particular for importing OSM data and correcting the topology (https://grasswiki.osgeo.org/wiki/OpenStreetMap), and gvSIG, in particular for loading a number of OSM basemaps (https://blog.gvsig.org/2019/02/28/towards-gvsig-2-5-new-osm-map-servers). Similarly to the desktop case, OSM basemaps can also be embedded in web maps - usually as Tile Map Service (TMS) layers - using OpenLayers and Leaflet. Tiles are retrieved either from the OSM servers or from third-party providers which have created their own thematic visualisations. A full list of OSM-based visualisation services is available at https://wiki.openstreetmap.org/wiki/List_of_OSM-based_services.
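When a web map requests such tile layers, the client computes which tile covers a given coordinate using the standard "slippy map" tiling scheme shared by the OSM tile servers and by XYZ/TMS layers in OpenLayers and Leaflet. The sketch below implements that well-known formula; the Berlin coordinate used in the example is arbitrary.

```python
import math

def deg_to_tile(lat_deg, lon_deg, zoom):
    """WGS84 coordinate -> (x, y) index of the covering slippy-map tile."""
    lat = math.radians(lat_deg)
    n = 2 ** zoom  # number of tiles along each axis at this zoom level
    x = int((lon_deg + 180.0) / 360.0 * n)
    # Web Mercator y: 0 at the north edge, n-1 at the south edge.
    y = int((1.0 - math.log(math.tan(lat) + 1 / math.cos(lat)) / math.pi) / 2.0 * n)
    return x, y

# Tile covering central Berlin (arbitrary example point) at zoom 10.
print(deg_to_tile(52.5, 13.4, 10))
```

The resulting indices are substituted into the layer's URL template (e.g. `.../{z}/{x}/{y}.png`) to fetch the tile image from the server.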
## 4 Discussion and Conclusions
SDI and VGI initiatives have existed for many years and the potential for their convergence was recognised since the very beginning ([PERSON], 2007). Free availability, amount, richness and up-to-dateness have been traditionally considered as key elements for VGI to beneficially integrate or complement authoritative data collected and managed by national mapping agencies ([PERSON] et al., 2017). In the specific case of INSPIRE, a number of efforts have been made to integrate its authoritative, standardised data with VGI, but these usually addressed specific case studies without the attempt to conceptualise an integrated framework ([PERSON], [PERSON], 2014, [PERSON] et al., 2015, [PERSON] et al., 2018). Due to several technical, institutional and legal barriers, this endeavour is still at an early stage and its success calls for different approaches such as the creation of integrated GIS platforms involving a wide network of stakeholders ([PERSON] et al., 2017).
This paper analysed the specific example of OSM, the most mature VGI project, herewith considered as a crowdsourced SDI, and its comparison with INSPIRE. There is no doubt that the combination of geospatial information extracted from the two initiatives would be significantly beneficial to several stakeholders: public authorities, professionals, businesses, researchers, humanitarian organisations, and the INSPIRE and OSM communities themselves in a broad sense. The comparison performed in Section 2 outlined the fundamental underlying differences in the two approaches: the rigorous one adopted by INSPIRE, driven by legal obligations and founded on strict data specifications; and the open one characterising OSM, driven by the freedom left to its contributors.
Taken separately, each of the two projects has achieved different types and degrees of interoperability at the expense of different drawbacks. INSPIRE has been making an impressive investment in harmonising the way geospatial data is modelled and distributed at the pan-European level, establishing a legal, organisational and technical reference for current and future SDI initiatives. This comes at the cost of an overall heterogeneous and slow implementation by MS due to several reasons, e.g. technical complexity, lack of resources and legal/organisational issues at the MS national level. This means that the full implementation of INSPIRE, and the related immense political and managerial benefits it could bring at the European level, is still to be achieved. In addition, two major issues which might prevent the general usability of the INSPIRE infrastructure are the use of a technologically old architecture to share and access data and the heterogeneity of MS data licenses (see Subsection 2.4). On the other side, being an international project since its beginning, OSM has full license interoperability and is founded on modern technologies (mainly APIs) which facilitate not only accessing data but also building third-party applications on top of them. However, by its very flexible nature OSM suffers from the lack of rigorous data specifications, since contributors are free to use tags different from those agreed by the community. In this regard, efforts have been recently made to guide the implementation of VGI projects using data collection protocols ([PERSON] et al., 2017). Intrinsic drawbacks of VGI also have to be considered, e.g. a typically uneven spatial coverage and the lack of quality assurance, although many literature studies have shown OSM to be of comparable or even better quality than authoritative data.
From the purely technical aspect, Section 3 provided an overview of how the available FOSS4G ecosystem provides specific support for INSPIRE and OSM data, demonstrating its overall maturity ([PERSON] et al., 2017). Either using OGC web services, APIs or external files, INSPIRE and OSM data can be seamlessly searched and loaded in client applications, processed together to create new content, and also converted to align their data structures. In the latter case, a transformation process is needed in order to either convert OSM data to the INSPIRE schemas, or flatten the INSPIRE models to align them with the simple key/value pair structure of OSM required for imports (https://wiki.openstreetmap.org/wiki/Import/Guidelines). Nevertheless, the cases where INSPIRE and OSM data have been used together are isolated, for multiple reasons such as licensing requirements, lack of awareness and data security considerations. Still, they provide an interesting setting and facilitate the use of the data by benefiting from the advantages of both OSM and INSPIRE. Thus, combining INSPIRE and OSM data ultimately requires a comprehensive understanding not only of technical aspects, but also of the processes underlying the creation and maintenance of the two infrastructures. Although INSPIRE and OSM were born for different purposes and aim to achieve different goals, each of them has developed solid and well-recognised good practices the other can benefit from. Hence, to establish a real, sustainable and extensible integration between the two, an additional effort involving all stakeholders (project leaders, implementers, data providers/contributors and end users) constitutes the necessary step.
## Disclaimer
The views expressed are purely those of the authors and may not in any circumstances be regarded as stating an official position of the European Commission.
## References
* [PERSON], [PERSON], [PERSON], [PERSON], 2017. Free and open source software for geospatial applications (FOSS4G) to support Future Earth. _International Journal of Digital Earth_, 10(4), 386-404.
* [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], 2016. The GeoJSON Format. IETF RFC 7946.
* [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], 2019. Establishing common ground through INSPIRE: the legally-driven European spatial data infrastructure. _Service-Oriented Mapping_, 63-84.
* [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], 2018. Open data, VGI and citizen observatories INSPIRE hackathon. _International Journal of Spatial Data Infrastructures Research_, 13, 109-129.
* [PERSON], 1994. Coordinating geographic data acquisition and access: the National Spatial Data Infrastructure. Executive Order 12906, _Federal Register_, 59(71), 17671-17674.
* [PERSON], 2007. Volunteered geographic information and spatial data infrastructures: when do parallel lines converge? Position paper for the VGI Specialist Meeting, Santa Barbara, CA, 13-14 December 2007.
* [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], 2017. Integrating spatial data infrastructures (SDIs) with volunteered geographic information (VGI) for creating a global GIS platform. _Mapping and the Citizen Sensor_, Ubiquity Press, 273-297.
* European Commission, 2007. Directive 2007/2/EC of the European Parliament and of the Council of 14 March 2007 establishing an Infrastructure for Spatial Information in the European Community (INSPIRE). _Official Journal of the European Union_, L 108/1.
* European Commission, 2010. Commission Regulation (EU) No 1089/2010 of 23 November 2010 implementing Directive 2007/2/EC of the European Parliament and of the Council as regards interoperability of spatial data sets and services. _Official Journal of the European Union_, L 323/11.
* [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], 2017. _Mapping and the Citizen Sensor_. Ubiquity Press, 1-12.
* [PERSON], 2007. Citizens as sensors: the world of volunteered geography. _GeoJournal_, 69(4), 211-221.
* [PERSON], [PERSON], [PERSON], 2015. Visual analytics of traffic-related open data and VGI. _Proceedings of the 5th International Conference on Information Society and Technology (ICIST 2015)_, Kopaonik, Serbia, 8-11 March 2015, 13-26.
* [PERSON], [PERSON], [PERSON], [PERSON], 2015. An introduction to OpenStreetMap in geographic information science: experiences, research, and applications. _OpenStreetMap in GIScience_, Springer, 1-15.
* [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], 2017. The relevance of protocols for VGI collection. _Mapping and the Citizen Sensor_, Ubiquity Press, 223-247.
* [PERSON], [PERSON], [PERSON], [PERSON], 2018. An open source approach for the intrinsic assessment of the temporal accuracy, up-to-dateness and lineage of OpenStreetMap. _The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences_, XLII-4/W8, 147-154.
* [PERSON], [PERSON], 2017. A review of OpenStreetMap data. _Mapping and the Citizen Sensor_, Ubiquity Press, 37-59.
* [PERSON], [PERSON], 2005. The Atom Syndication Format. IETF RFC 4287.
* [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], 2017. VGI in national mapping agencies: experiences and recommendations. _Mapping and the Citizen Sensor_, Ubiquity Press, 299-326.
* [PERSON], [PERSON], 2003. _Internet GIS: distributed geographic information services for the internet and wireless networks_. John Wiley & Sons.
* [PERSON], 1994. Interactive information services using World-Wide Web hypertext. _Computer Networks and ISDN Systems_, 27(2), 273-280.
* [PERSON], [PERSON], [PERSON], 2010. _OpenStreetMap: Using and Enhancing the Free Map of the World_. UIT, Cambridge.
* [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], 2016. Crowdsourcing, Citizen Science or Volunteered Geographic Information? The Current State of Crowdsourced Geographic Information. _ISPRS International Journal of Geo-Information_, 5(5), 55. doi.org/10.3390/ijgi5050055.
* [PERSON], [PERSON], 2014. Linking crowdsourced observations with INSPIRE. _Proceedings of the 17th AGILE International Conference on Geographic Information Science: Connecting a Digital Europe through Location and Place_, Castellón, Spain, 3-6 June 2014.
isprs | COMPARING INSPIRE AND OPENSTREETMAP DATA: HOW TO MAKE THE MOST OUT OF THE TWO WORLDS | M. Minghini, A. Kotsev, M. Lutz | https://doi.org/10.5194/isprs-archives-xlii-4-w14-167-2019 | 2019 | CC-BY

isprs/c7355dae_032b_49f5_abc7_a12ef7de8368.md
# Modeling of urban growth using cellular automata (CA) optimized by Particle Swarm Optimization (PSO)

[PERSON]\({}^{a*}\), [PERSON]\({}^{b}\), [PERSON]\({}^{a}\)

\({}^{a}\) Department of Surveying Eng., College of Engineering, University of Tehran - {hkhalinia, eng, abaspour}@ut.ac.ir
\({}^{b}\) Faculty of Geodesy and Geomatics Eng., K.N.T. University of Technology - [EMAIL_ADDRESS]
###### Abstract
In this paper, two satellite images of Tehran, the capital city of Iran, taken by TM and ETM+ in the years 1998 and 2010, are used as the base information layers to study the changes in the urban patterns of this metropolis. The patterns of urban growth of Tehran are extracted over a period of twelve years using cellular automata, with logistic regression functions as transition functions. Furthermore, the weighting coefficients of the parameters affecting urban growth, i.e. distance from urban centers, distance from rural centers, distance from agricultural centers, and neighborhood effects, are selected using PSO. To evaluate the prediction results, the percent correct match index is calculated. According to the results, by combining optimization techniques with the cellular automata model, urban growth patterns can be predicted with an accuracy of up to 75%.
Urban Growth, Cellular Automata, Particle Swarm Optimization, Logistic Regression
## 1 Introduction
Due to the high concentration of population in urban areas and the rapid growth of industries and services, areal development in the urban areas of Iran has proceeded at high speed in recent decades. Urban development often means urban growth, the creation of new towns, or both; thus rapid change in the pattern of urban land use can be observed within short periods of time. On the other hand, understanding the mechanisms of urban development is crucial for urban planning and management aimed at sustainable urban development. Therefore, many researchers have developed models for the study of urban growth. Using these models, different scenarios of urban development can be predicted before they happen, so that the pros and cons are known at decision-making time ([PERSON], 2009).
One of the major methods of studying urban growth is the use of cellular automata (CA). The concept of automated cells was first introduced in computer science in the 1940s by [PERSON] and [PERSON] ([PERSON], 2003; [PERSON], 2003); the concept was later developed by [PERSON] as the Game of Life. The entrance of CA models into geography is the outcome of [PERSON]'s work in the 1970s at the University of Michigan ([PERSON], 2009). The 1990s were a successful decade in the development of urban CA models, and in the past two decades CA has been used by many researchers in urban studies (cf. [PERSON] et al., 1997, 1999; [PERSON] and [PERSON], 1998; [PERSON] and [PERSON], 2000, 2001, 2002; [PERSON] et al., 1997; [PERSON] and [PERSON], 1997). A CA is formed by a regular grid of cells, where each cell takes a value according to its state. These values change at discrete time intervals according to rules defined in the model, the transition rules, and to the values of the neighboring cells ([PERSON], 1984, 2002). Due to its bottom-up approach, such a model can simulate the overall behavior of urban growth by considering the behavior of every urban cell as affected by the current conditions of the central cell and its neighbors. In cellular automata, the behavior of a complex system such as an urban system can thus be simulated through transition rules, which calculate the probability of the conversion of a cell as a function of the driving forces of urban growth and the states of all neighboring cells ([PERSON] et al., 2011). The spatial parameters of the transition rules often show severe spatial correlation, which can reduce the precision and accuracy of the modeling; how to manage these variables and minimize the effects of correlation has become one of the most important research topics ([PERSON] & [PERSON], 2002a; [PERSON] et al., 2008).
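As a concrete illustration of these elements — a grid of binary cell states, a 3x3 Moore neighborhood, and a transition rule applied synchronously at discrete time steps — the following sketch performs one CA update over an urban/non-urban grid. The neighborhood-fraction rule and its threshold are illustrative assumptions, not the paper's calibrated model.

```python
import numpy as np

def ca_step(grid, threshold=0.5):
    """One synchronous CA update: a non-urban cell (0) becomes urban (1)
    when the fraction of urban cells in its 3x3 Moore neighborhood
    exceeds `threshold`. Rule and threshold are illustrative only."""
    n_rows, n_cols = grid.shape
    new_grid = grid.copy()
    for i in range(n_rows):
        for j in range(n_cols):
            if grid[i, j] == 1:
                continue  # already urban; no reverse transition is modeled
            # numpy slicing clips the 3x3 window at the grid borders
            window = grid[max(i - 1, 0):i + 2, max(j - 1, 0):j + 2]
            urban_fraction = (window.sum() - grid[i, j]) / (window.size - 1)
            if urban_fraction > threshold:
                new_grid[i, j] = 1
    return new_grid
```

Updating `new_grid` rather than `grid` in place keeps the update synchronous: every cell's decision depends only on the previous time step, which is the standard CA convention.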
Extensive research has recently been carried out on optimizing transition rules with evolutionary algorithms such as genetic algorithms and ant colony algorithms. [PERSON] et al. (2010) showed that an artificial neural network developed with a genetic algorithm can better identify the spatial factors and also simplify the transition rules, yielding a more accurate simulated model. [PERSON] & [PERSON] (2007) likewise used a genetic algorithm to optimize the parameters derived from logistic regression.
PSO is another evolutionary computation method, but it differs from the genetic algorithm: the optimal particles produced at each stage are allowed to transmit their data to the next generation ([PERSON], 2006; [PERSON] & [PERSON], 1998). Therefore, unlike in the genetic method, the probability of propagating defective particles is almost zero and the accuracy of the optimization increases significantly.
In this paper, the urban growth of Tehran from 1998 to 2010 is modeled after determining the factors driving urban growth and optimizing, with the PSO method, the coefficients of the logistic regression equations used as transition functions in a cellular automata approach. The accuracy of the simulated model is then evaluated by calculating the error matrix and the percent correct match (PCM) index.
## 2 Proposed Methodology
In recent years, various researchers have shown that CA is an appropriate tool for modeling urban change, and it has been applied to a variety of urban phenomena such as traffic simulation, land use change, land cover change, and urban growth. The main elements of the cellular automata method are the cellular network, the cell states, the neighborhoods, time, and the transition rules.
The main focus of this article is to build a new model to predict the urban growth of the city of Tehran. To this end, a homogeneous two-state cellular network (urban and non-urban) is laid over two classified satellite images twelve years apart. Logistic regression is used as the transition function to assess urban growth; thanks to its nonlinear nature, this function can model external effects and nonlinear interactions between the relevant variables.
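A logistic-regression transition function of this kind can be sketched as follows; the variable and coefficient names are placeholders standing in for the driving-force variables of Table 1 and the a-coefficients that PSO optimizes later in the paper.

```python
import numpy as np

def logistic_local_probability(x, coeffs, intercept):
    """Logistic-regression transition function: maps the driving-force
    variables x (e.g. normalized distances) to a local development
    probability in (0, 1). Names are illustrative placeholders."""
    z = intercept + np.dot(coeffs, x)
    return 1.0 / (1.0 + np.exp(-z))
```

With all variables at zero the function returns exactly 0.5, and it increases monotonically with each positively weighted driving force, which is what makes it usable as a local conversion probability.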
Because there are high levels of dependency between the variables of urban growth, and it is almost impossible to remove these dependencies, only some of the factors affecting urban growth — those with less dependency — are used. Table 1 shows the factors considered in modeling the urban growth of Tehran.
The probability for a cell at position \((i,j)\) of the grid to convert from the non-urban to the urban state can be calculated from the following equation ([PERSON], 1993; [PERSON]):

\[P_{ij}=\left(p_{l}\right)_{ij}\times\left(p_{\Omega}\right)_{ij}\times con\left(\cdot\right)\times p_{r} \tag{1}\]

where \(\left(p_{l}\right)_{ij}\) is the local suitability of the cell for conversion from non-urban to urban and \(\left(p_{\Omega}\right)_{ij}\) is the contribution of its neighbourhood cells to that conversion. The constraint term \(con\left(\cdot\right)\) takes the value 0 or 1; in this research, a cell lying on main roads and highways is treated as constrained. The factor \(p_{r}\) models the effect of random errors. Each of these factors carries coefficients, and these coefficients were optimized using PSO in this paper.
PSO is one of the most basic swarm-based meta-heuristic optimization methods. The fundamental distinction of swarm-based methods is their inspiration by nature and by creatures living in colonies. Other optimization techniques such as GA do not assign roles to members (individuals), while swarm-based methods give their members (particles) a memory. A fundamental concept in swarm intelligence is stigmergy, a way of conveying information to the other particles indirectly: information is exchanged through a shared infrastructure, and no particle transfers it directly.
In this method, each particle tries to improve its current position using the following information:
* Current status
* Current speed
* The gap between the current status and \(P_{best}\)
* The gap between the current status and \(G_{best}\)
where \(P_{best}\) is the best position which each particle has achieved and \(G_{best}\) is the best position which has been obtained by the entire particles.
Equation (2) presents the fitness function used in this study.
\[F\left(a\right)=\sum_{i=1}^{N}\sum_{j=1}^{M}\left(P_{ij}\left(a\right)-f_{ij}^{0}\right)^{2} \tag{2}\]

where \(M\times N\) is the total number of cells, \(P_{ij}(a)\) is the probability of urbanization of each cell (equation (1)) and \(f_{ij}^{0}\) is the observed state of each cell, taking the value 1 or 0 for the urbanized or non-urbanized state in the target year, respectively.
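The particle-update steps described above, driven by a fitness of the sum-of-squared-errors form of equation (2), can be sketched as a plain global-best PSO. The inertia weight and acceleration coefficients (w = 0.7, c1 = c2 = 1.5) are standard textbook settings, assumed here rather than taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def pso_minimize(fitness, dim, n_particles=20, iters=100,
                 w=0.7, c1=1.5, c2=1.5, bounds=(-1.0, 1.0)):
    """Global-best PSO: each particle moves under its inertia, the pull
    of its personal best (pbest) and of the swarm best (gbest)."""
    lo, hi = bounds
    pos = rng.uniform(lo, hi, (n_particles, dim))
    vel = np.zeros((n_particles, dim))
    pbest = pos.copy()
    pbest_val = np.array([fitness(p) for p in pos])
    gbest = pbest[pbest_val.argmin()].copy()
    for _ in range(iters):
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lo, hi)
        vals = np.array([fitness(p) for p in pos])
        improved = vals < pbest_val          # update personal bests
        pbest[improved] = pos[improved]
        pbest_val[improved] = vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, pbest_val.min()
```

For the paper's use case, `fitness` would evaluate equation (2) over the whole grid for a candidate coefficient vector `a`; the sketch below simply minimizes a quadratic to show the calling convention.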
After the optimization of the coefficients of the logistic regression equations by the proposed method, the error matrix of the simulated images was calculated, and from it the Percent Correct Match (PCM) was obtained using equation (3) ([PERSON] and [PERSON]):

\[PCM=\frac{A+D}{A+B+C+D} \tag{3}\]

where \(A\) and \(D\) count the cells on the agreement diagonal of the binary error matrix (urban in both images, non-urban in both images) and \(B\) and \(C\) count the disagreements.
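Under this reading of the error matrix, equation (3) reduces to the fraction of cells classified alike in the simulated and observed rasters:

```python
import numpy as np

def percent_correct_match(simulated, observed):
    """PCM of equation (3): A = urban in both, D = non-urban in both,
    divided by the total number of cells A + B + C + D."""
    simulated = np.asarray(simulated).astype(bool)
    observed = np.asarray(observed).astype(bool)
    a = np.sum(simulated & observed)      # urban in both
    d = np.sum(~simulated & ~observed)    # non-urban in both
    return (a + d) / simulated.size
```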
## 3 Evaluation and Results
### Dataset for the study area
In this research the process of urban growth is modeled for the city of Tehran, the largest city and capital of Iran. According to statistics provided by the Statistical Center of Iran, Tehran, with an area of about 730 square kilometers and a population of about 8.5 million, is the twenty-fifth most populous city in the world. The city is located in the southern part of the Alborz mountains, between east longitudes 51°2′ and 51°36′ (a length of approximately 50 km) and north latitudes 35°34′ and 35°50′ (a length of approximately 30 km).
In this study, two land use satellite images of Tehran from the years 1998 and 2010 are used (figure 2). The Landsat images were classified with a supervised method in the ENVI software, introducing 10 training areas for each land use class. Table 2 shows the characteristics of the images:
Table 1. Factors considered in the modeling of urban growth in Tehran.

| Variable | Type of variable | Definition | Range |
| --- | --- | --- | --- |
| \(D_{urban}\) | Spatial | Distance of each cell from urban cells | \(0 \le D_{norm} \le 1\) |
| \(D_{main\ road}\) | Spatial | Distance of each cell from main roads | \(0 \le D_{norm} \le 1\) |
### Implementation and Results
In the first step of the research, the local probability parameter was obtained and the spatial variables were defined. Then the parameter \(p_{r}\) was calculated in order to adjust for random errors. By randomly generating a set of weighting coefficients (\(a_{p}\) and \(a_{0}\)) and calculating the fitness of each, the PSO optimization of the weighting coefficients was started.
In the next step, using the optimal coefficients, the probability of urbanization of each pixel was calculated via equation (1). Finally, by applying a threshold to this probability, the simulated image of urban growth was generated. Figure 3 shows the real and simulated rasters of urban growth for the year 2010.
To evaluate each of the predicted patterns, the PCM index is calculated. The results show that by expanding the neighborhood, and thereby increasing the influence of adjacent pixels on each pixel, the accuracy of the prediction model increases.
## 4 Conclusion
In this paper, two satellite images of Tehran, taken by TM and ETM+ in the years 1998 and 2010, were used as the base information layers to study the changes in the urban patterns of this metropolis. The patterns of urban growth of Tehran were extracted over a period of twelve years using cellular automata, with logistic regression functions as transition functions, and the weighting coefficients of the parameters affecting urban growth were selected using PSO. To evaluate the prediction results, the percent correct match index was calculated. According to the results, by combining optimization techniques with the cellular automata model, urban growth patterns can be predicted with an accuracy of up to 75%. It is also concluded that by increasing the width of the neighborhood influence, the forecast results can be brought closer to reality.
Therefore, producing extended neighborhood maps using spatial indicators such as the enrichment factor could contribute significantly to improving forecast accuracy.
Figure 2. Two classified images used in the study: image obtained in 1998 (a) and in 2010 (b).

Figure 3. Real land growth of Tehran in 2010 (a) and simulated land growth of Tehran for 2010 (b).

## References

* [PERSON], [PERSON] (2009). "Cellular automaton, a novel method for simulation of urban growth." Journal of Technology Research, 4(4), p. 9.
* [PERSON] (2003). "Transition rule elicitation for urban cellular automata models (case study: Wuhan, China)." MSc thesis, International Institute for Geo-Information Science and Earth Observation (ITC), The Netherlands.
* [PERSON], [PERSON], [PERSON] (2009). "Remote sensing (RS), geospatial information systems (GIS) and cellular automata (CA) as a tool for simulating urban land use change (case study: city of Shahrekord)." Environmental Science, 7(1), p. 16.
* [PERSON], [PERSON], [PERSON] (1999). "Modeling urban dynamics through GIS-based cellular automata." Computers, Environment and Urban Systems, pp. 205-233.
* [PERSON], [PERSON], [PERSON] (1997). "A self-modifying cellular automaton model of historical urbanization in the San Francisco Bay area." Environment and Planning B, pp. 247-262.
* [PERSON], [PERSON] (1993). "Cellular automata and fractal urban form: a cellular modeling approach to the evolution of urban land-use patterns." Environment and Planning A, pp. 1175-1199.
* [PERSON], [PERSON] (2002). "Neural-network-based cellular automata for simulating multiple land use changes using GIS." International Journal of Geographical Information Science, pp. 323-343.
* [PERSON], [PERSON] (1998). "Simulation of land development through the integration of cellular automata and multi-criteria evaluation." Environment and Planning B: Planning & Design, pp. 103-126.
* [PERSON] (1984). "Cellular automata as models of complexity." Nature, pp. 419-424.
* [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON] (2011). "Modeling dynamic urban growth using cellular automata and particle swarm optimization rules." Landscape and Urban Planning, pp. 188-196.
* [PERSON], [PERSON], [PERSON] (2008). "Fuzzy inference guided cellular automata urban growth modelling using multi-temporal satellite images." International Journal of Geographical Information Science, pp. 1271-1293.
* [PERSON], [PERSON], [PERSON], [PERSON] (2007). "Genetic algorithms for determining the parameters of cellular automata in urban simulation." Science in China Series D: Earth Sciences, pp. 1857-1866.
* [PERSON] (2006). Fundamentals of Computational Swarm Intelligence. Hoboken: Wiley.
* [PERSON], [PERSON] (1998). "Comparison between genetic algorithms and particle swarm optimization." Lecture Notes in Computer Science, pp. 611-616.
* [PERSON] (2002). "Calibration of stochastic cellular automata: the application to rural-urban land conversions." International Journal of Geographical Information Science, pp. 795-818.
* [PERSON], [PERSON] (2001). "Land-cover change model validation by an ROC method for the Ipswich watershed, Massachusetts, USA." Agriculture, Ecosystems & Environment, pp. 239-248.
* [PERSON] (2003). "Multi-method assessment of map similarity." International Journal of Geographical Information Science, pp. 235-249.
* [PERSON], [PERSON], [PERSON], [PERSON] (2005). "Calibrating a neural network-based urban change model for two metropolitan areas of the Upper Midwest of the United States." International Journal of Geographical Information Science, pp. 197-215.
isprs | Modeling of urban growth using cellular automata (CA) optimized by Particle Swarm Optimization (PSO) | M. H. Khalilnia, T. Ghaemirad, R. A. Abbaspour | https://doi.org/10.5194/isprsarchives-xl-1-w3-231-2013 | 2013 | CC-BY

isprs/7a5e55c3_a414_4f00_b210_c0babbc8d28c.md
|
# Quality Analysis and Improvement of Fundamental Geographic National Conditions Monitoring Results
[PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON]
1 National Quality Inspection and Testing Center for Surveying and Mapping Products, Beijing, China - [EMAIL_ADDRESS]
###### Abstract
With the integration and unification of natural resources management, the application fields and levels of geographic national conditions monitoring results have expanded and improved, and the data quality requirements for national conditions monitoring have become ever higher. This is especially true for the results of fundamental geographic national conditions monitoring: the quality of the results is the basis, and indeed the life, of monitoring. Based on recent practice in fundamental geographic national conditions monitoring, this paper summarizes the quality requirements of monitoring results, analyzes typical quality problems and quality influencing factors, and proposes three measures for improving the quality of the results: first, a problem-oriented quality inspection method; second, the key points of quality control in the fundamental national conditions monitoring process; third, a preliminary exploration of the application of big data in monitoring.
Fundamental Geographical National Monitoring, Quality Requirements, Problem-oriented, Quality Control, Big Data
## 1 Introduction
Geographic national conditions are important basic national conditions. Dynamic monitoring of major national conditions and national strength is the basic work required to comprehensively understand and grasp national conditions and to formulate national policies ([PERSON], 2018). In particular, with the integration and unification of natural resource supervision, the application fields and levels of geographic national conditions monitoring results have expanded ([PERSON], 2018). The monitoring results will continue to play a fundamental supporting role for natural resource surveys of all types, and their quality level will directly affect the accuracy and reliability of the statistics and analyses derived from natural resources survey and monitoring ([PERSON], 2019). The natural resources survey and monitoring business places higher requirements on the results of geographic national conditions monitoring, so research on methods for improving the quality of fundamental geographic national conditions monitoring results is an important guarantee. This paper analyzes and summarizes the quality requirements of fundamental geographic national conditions monitoring results under the unified pattern of natural resource supervision, analyzes the quality influencing factors in combination with frequent and critical typical problems in actual production, and proposes methods and measures for quality improvement.
## 2 Quality Requirements
The results of geographic national conditions monitoring mainly exploit the rich spectral, textural and temporal features contained in multi-source aerospace remote sensing images ([PERSON], [PERSON], 2013), combined with various reference materials and knowledge, using interactive interpretation, field survey and other methods to extract surface coverage and change information. The results are expressed as data sets, metadata and statistics, with quality characteristics such as mathematical foundation, data accuracy and up-to-dateness. Because of the vast territory of the country, natural and human environments vary greatly from place to place, and multi-source satellite sensors often differ considerably, which adversely affects the collection of surface change information. Together with human and software factors, this makes various quality problems in the monitoring results inevitable. Among them, time accuracy, position accuracy, classification accuracy and attribute accuracy seriously affect the data quality of the monitoring results; in the process of integration and fusion of big data oriented to natural resources, they create risks for the statistical analyses of government departments and the accuracy of decision-making.
### Time Accuracy
The results of geographic national conditions monitoring require that the original data sources, thematic materials and result data meet the time requirements ([PERSON], Y.L., 2017). Data verification and analysis methods are generally used for inspection; the verified content includes the temporal information of the image data sources, the current status of thematic data in various industries, the timeliness of field surveys, and the time point of the final results.
### Position Accuracy
The mathematical accuracy of the monitoring results is mainly reflected in the correspondence between vector positions and the features on the image. The inspection content includes geometric displacement and edge matching. In general, on the basis of qualified images, the position accuracy of surface coverage classification boundaries that are obvious on the image, of element boundaries, and of location points should be controlled within 5 pixels. Under special circumstances, such as shielding and shading by high-rise buildings, the position accuracy should in principle be controlled within 10 pixels. After edge matching, the graphic data should be smooth and continuous, avoiding hard folds and sharp corners, and edges should lie within 0.01 meters of the boundary. In addition, topological consistency between the layers of the monitoring results is required.
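The 5-pixel / 10-pixel tolerance rule above can be sketched as a small check; the function name, its metre-based inputs, and the `occluded` flag are hypothetical helpers, not part of any stated inspection software.

```python
def position_within_tolerance(dx_m, dy_m, pixel_size_m, occluded=False):
    """Check a vector-vs-image displacement (in metres) against the
    section's rule: boundaries within 5 pixels in general, 10 pixels
    where high-rise shielding/shading applies."""
    displacement_px = (dx_m ** 2 + dy_m ** 2) ** 0.5 / pixel_size_m
    limit_px = 10 if occluded else 5
    return displacement_px <= limit_px
```

For example, a 2 m offset on 0.5 m pixels is 4 pixels and passes, while a 5 m offset (10 pixels) passes only in the occluded case.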
### Classification Accuracy
The contents of the classification accuracy check of the monitoring results of geographical conditions mainly include the correctness of classification and the completeness of coverage. First of all, it is required that the data range should completely cover the scope of the monitoring task area. Secondly, in the transition zone without obvious boundaries, the surface coverage classification data should at least ensure that they meet the classification requirements of the previous level. Objects with obvious boundaries on the image should be correctly classified in strict accordance with the classification requirements.
### Attribute Accuracy
The attribute accuracy check of the geographic national conditions monitoring results mainly covers the completeness and correctness of the general and proprietary attribute items. Value ranges are required to meet the specification, the attributes of the change type must be objective and correct, and the attributes of updated objects must be consistent with the DOM, field survey results, thematic data and other reference materials.
## 3 Typical Problem Analysis
### Data Source Usage Problems
Incorrect use of data sources causes quality problems in time accuracy, position accuracy, classification accuracy and attribute accuracy. Data sources mainly include image data and the thematic data of professional departments. The state distributes images in several batches, while allowing local governments to cooperate in image collection. For thematic data, in addition to national basic data, the provinces themselves collect thematic data from the relevant departments of water conservancy, forestry, land, transportation, civil affairs, environment and so on. When using data sources, the requirements on image resolution and acquisition time must be considered comprehensively, and full use must be made of the current, well-regulated data of authoritative professional departments. The following problems are common in the use of data sources:

1. Multi-source images cover the same important area, but the high-resolution images are not used for data production, so the collection accuracy exceeds the tolerance.
2. The same area has images of the same sensor at different time phases, but the April-June image or the latest image is not used for production; production relies on an earlier image, and updates are missed.
3. Other basic surveying and mapping results and the thematic data of the water conservancy, forestry, land, transportation, civil affairs, environment and other related departments are not used correctly or uniformly, resulting in inconsistency between the attributes of elements and entities.
### Position Update Problems
#### 3.2.1 Position Accuracy Exceeds the Limit
The problem of position accuracy exceeding the limit is mainly reflected in image use. For example, in urban areas where supplementary high-resolution commercial satellite images are the main image source for production, low-resolution images were used for range acquisition in an early stage; during the monitoring period, boundaries should have been adjusted against the high-resolution images, but operators did not follow this requirement, so positions exceed the error limit relative to the monitoring image. In another case, there is a significant change between the two image phases and the object has expanded, but the boundary range is not updated according to the new image: the category code is modified while the range is not, so the boundary of the object is inconsistent with the image and the position accuracy does not meet the requirements. Figure 1 reflects the problem of a new construction site whose acquisition accuracy exceeds the limit: only the category code was modified, without modifying the boundary according to the image, so the boundary exceeds the limit and is inconsistent with the image.
#### 3.2.2 Object Missing Update
In the production process, objects are missed during updating because of insufficient verification of the technical route and inconsistent understanding of the update standards. Non-standard production collects change information only in identified change areas, from the background vector data and the monitoring-period image; the risk of this mode is that when the operator judges that the two image phases show no change, errors missed in the earlier version of the data propagate as missed updates in the changed area. Where ground features have the same spectral and textural characteristics and meet the update area requirements, different operators' inability to grasp the definition, classification and collection norms of certain features makes the update scale inconsistent, so objects are not updated. In checking national conditions monitoring results, investigating large areas of misclassified ground objects is a key step, because omissions and erroneous updates of ground objects directly affect the statistical analysis of change in the monitoring data. Figure 2 reflects a case where the two image phases differ significantly and the ground object has changed from farmland to building area, but the data has not been updated.

Figure 1. Position accuracy exceeded.
#### 3.2.3 Constraint Relation Problems
The results of one production unit may be completed by more than one operator, which creates a hidden risk that change information is not updated in linkage. The positions of national conditions elements were not modified when the surface coverage boundary was modified, or the feature objects of other layers were not modified simultaneously when a national conditions feature object was modified, resulting in incorrect constraint relationships between objects. Figure 3 reflects the problem that the pavement range changes in the surface coverage, but the roads in the national conditions data set are not updated. Figure 4 reflects that the water surface range in the surface coverage has been modified, but the high-water boundary in the national conditions data set has not been modified synchronously.
### Attribute Value Problems
#### 3.3.1 Enumeration Value Error
There are enumerated fields in the monitoring data, such as change type, GB and featID. To improve production efficiency, some attribute items are assigned automatically by software. However, production software is insufficiently tested and verified, and attribute assignment errors caused by human and software defects are inevitable. The change type field reflects the change in the geographic national conditions monitoring results and should be assigned the value 1, 2, 4, -2, -3 or 9 according to the specific situation. The unique identification code of a new element should be empty or the default value; only objects whose scope has changed keep the previously used unique identifier. In the production process there are attribute errors caused by personal misunderstanding, as well as value errors caused by software put into use without testing and verification. Figure 5 shows an error in the change type attribute value: the water area has changed, but the change type field is assigned the value 2.
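The two rules above — the enumerated change-type codes and the empty identifier for new elements — can be sketched as a validation helper. The field names `change_type` and `feature_id` are hypothetical, and the assumption that code 1 marks a newly added object is illustrative only; the section does not give the code-to-meaning mapping.

```python
ALLOWED_CHANGE_TYPES = {1, 2, 4, -2, -3, 9}  # values named in the section

def check_change_type(record):
    """Flag violations of the two attribute rules, given a dict-like
    record. Key names are hypothetical; code 1 is assumed (for
    illustration only) to mark a newly added object."""
    errors = []
    if record.get("change_type") not in ALLOWED_CHANGE_TYPES:
        errors.append("change_type outside enumeration")
    if record.get("change_type") == 1 and record.get("feature_id"):
        errors.append("new object must have an empty unique identifier")
    return errors
```

Automating such checks is exactly where the section's point applies in reverse: the validator itself must be tested before use, or it silently reproduces the software-defect errors it is meant to catch.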
#### 3.3.2 Proprietary attribute Value Error
Three problems arise in production. First, the information in thematic data is not fully used, and attributes available from the thematic data are left unfilled. Second, thematic data are relied on completely, without analysis, sorting, or checking, so obvious errors in them are not corrected, leading to inconsistency between object attributes and the thematic data. Third, proprietary attribute values are not filled in according to field survey results. Figure 6 reflects a case where, according to the interpretation of a sample photo, the number of lanes and the road-width attribute of a newly added road are inconsistent with the photo. Figure 7 shows a new forest farm for which the area field was not assigned a value based on the thematic data.
#### 3.3.3 Classification Error
Classification errors over large areas directly render the data results unqualified, reduce their credibility, and seriously affect statistics, analysis, and decision-making. Classification errors have two main causes. The first is human factors, such as cutting or merging mis-operations and erroneous updates to features whose type has not changed, for example large areas of farmland updated to houses and roads. The second is that the imagery does not support the type, the classification code is not updated according to field survey results, and manual interpretation produces errors over large areas. Figure 8 shows farmland wrongly represented as buildings. Figure 9 shows a feature represented as a road, while the field photo shows a railway.
## 4 Quality Improvement Measures
As the basic data for monitoring, the completeness of image and thematic data collection and the accuracy of their analysis and use directly affect the quality of monitoring data. For data production, the main factors affecting quality fall into five aspects: personnel, equipment, data, technology, and the quality control system. Quality improvement of fundamental geographic national conditions monitoring results should therefore address these five aspects.
### Problem-oriented Inspection Method
Quality control of geographic national conditions monitoring data should remain problem-oriented, shifting the focus of inspection from the data results themselves to the five aspects of personnel, equipment, data, technology, and the quality management system, grasping the key issues and starting from the problems. During inspection, attention should concentrate on the key links and important factors that seriously affect the quality of the results, including defects produced in processes where the operator plays a leading role, the reliability of software used in automated processing, inadequate data collection and utilization, deviations in the technical route and methods, and poor management. Many years of practice in monitoring geographical conditions show that production is the most vulnerable link. Error-prone problems should be summarized and sorted out, and quality problems analyzed in detail, especially those affecting the application and statistical analysis of the results. Based on the result-data collection standard, correct operating methods should be put forward to eliminate the influence of human factors on data quality. In this way, rich operating experience can be accumulated for routine monitoring data production, and data quality can be gradually improved.
### Emphasis on Process Quality Control
Building on the traditional surveying and mapping quality inspection concept of 'two inspections and one acceptance', quality control of geographic national conditions monitoring should be strengthened through hierarchical quality management from the operation unit to the undertaking unit and then to the national level. The focus of quality control should move from results control to process control: strengthen technical training, supervise and correct technical deviations, promptly identify widespread or directional problems affecting the monitoring results, and unify technical standards to ensure that data quality conforms to the requirements of the technical design. Process quality control is the most important part of quality management and determines its success or failure. It runs through the entire production process and involves all departments related to production and quality. The effectiveness of the monitoring results can only be guaranteed if the operating departments do a good job in every link: quality training, thematic data, technical design, first-piece result verification, change information collection, process inspection, field investigation, editing and sorting, and final inspection. The accuracy of the data is the lifeline of monitoring and determines the final success or failure of geographic conditions monitoring.
By carrying out whole-process quality control, the authenticity and reliability of the data results can be fundamentally guaranteed, potential risks in the implementation process avoided, and quality problems made traceable, ensuring that the quality of the monitoring results meets requirements and that reliable data services are provided to the country. Many years of practice in monitoring fundamental geographical conditions show that the production link is the most likely to cause problems; error-prone issues should be summarized, and quality problems analyzed in detail, especially those affecting the application and statistical analysis of the results.
### Geographical Conditions + Big Data
With the in-depth development of investigation and monitoring work across industries, the volume of image sources, thematic data, and other data products acquired over the years keeps expanding ([PERSON], [PERSON], 2019). In terms of data volume, growth rate, accuracy, and application value, monitoring data have entered the era of big data ([PERSON], 2016). Quality inspection departments should take advantage of shared data resources to develop scientifically and technologically innovative quality control methods. Cloud computing is an innovative technical architecture built on the key public information infrastructure of cloud technology, cloud resources, cloud management, and cloud services; it provides information infrastructure as an overall service by standardizing large pools of service resources such as computing, storage, bandwidth, and software.
With the technical support of the Internet of Things and the cloud model, the integration and sharing of geographic conditions big data should be explored, using big data and accurate analysis to find new approaches to inspecting fundamental geographic conditions monitoring results in a big data environment. A quality control big data support database should be built that integrates data from multiple sources. This database is a data pool to which users upload and from which they download; it contains interpretation samples, expert knowledge, and massive change-information models extracted by deep learning on big data. Using it to assist quality control improves efficiency and makes quality control scientific, impartial, comprehensive, and objective.
## 5 Conclusion
Fundamental geographic national conditions monitoring is a regular activity that has provided a large volume of reliable data for China's economic and social development. The results have been applied in urban planning, environmental protection and space-use control. The era of big data has arrived; with the integration and unification of natural resources business, the various lines of work will place more and higher demands on geographic information products. Analyzing the current quality of the monitoring results and taking effective measures to improve it is an important task. Quality improvement methods will focus on raising the technical level of geographic national conditions monitoring, diversifying the results, Internet + big data, and related directions.

Figure 8: Farmland code error. Figure 9: Road code error.
## References
* [PERSON] et al. (2018) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], 2018: Brief Introduction to the Contents and Methods for Quality Control of Outputs from Geographical Conditions Monitoring. Standardization of Surveying and Mapping 34(3), 22-24.
* [PERSON] et al. (2017) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], 2017: Innovation in the Census and Monitoring of Geographical National Conditions. Journal of Wuhan University 43(1), 2-3.
* [PERSON] et al. (2019) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], L.J., 2019: National Geographical Conditions Statistical Analysis in the Era of Big Data. Journal of Wuhan University 44(1), 68-69.
* [PERSON] et al. (2017) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], 2017: Key Points of Data Updating during National Fundamental Geographic Conditions Monitoring. Standardization of Surveying and Mapping 33(2), 14-15.
* [PERSON] et al. (2013) [PERSON], [PERSON] et al., 2013: Methods and Technologies of National Geographic State Monitoring. Science Press, 23-32.
* [PERSON] et al. (2019) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], 2019: On the Application of Geographic Conditions Monitoring Results in Natural Resources Ecological Quality Evaluation System. Standardization of Surveying and Mapping 35(4), 16-17.
* [PERSON] et al. (2016) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], 2016: Research Framework of Geographical Conditions and Big Data. Journal of Remote Sensing 20(5),1017-1021.
Quality Analysis and Improvement of Fundamental Geographic National Conditions Monitoring Results. M. Li, H. P. Chen, Z. B. Tian, B. Qiu, W. J. Xie, Y. H. Chen, 2020. https://doi.org/10.5194/isprs-archives-xliii-b3-2020-1353-2020 (CC-BY).
# Terrestrial Laser Scanning and Satellite Data in Cultural Heritage Building Documentation
###### Abstract
Technological advances in the field of information acquisition have led to the development of various techniques for building documentation. Among the proposed methods, acquisition of data without direct physical contact with the features under investigation could provide valuable information, especially in the case of buildings or areas of high cultural value. Satellite or ground-based remote sensing techniques could contribute to the protection, conservation and restoration of cultural heritage buildings, as well as to the interpretation and monitoring of their surrounding area. The increasing interest in the generation of 3D facade models for documentation of the built environment has made laser scanning a valuable tool for 3D data collection. Through the generation of dense 3D point clouds, digitization of building facades could be achieved, offering data that could be used for further processing. Satellite imagery could also contribute in this direction, extending the monitoring possibilities of the buildings' surrounding area or even providing information regarding change detection in large-scale cultural landscapes. This paper presents the study of a mansion house built in the middle of the 18th century in northwestern Greece, using terrestrial laser scanning techniques for facade documentation, as well as satellite imagery for monitoring and interpretation purposes. The scanning process included multiple external scans of the main facade of the building, which were registered using artificial targets in order to form a single colored 3D model. Further processing resulted in a model that offers measurement possibilities valuable to future plans and designs for preservation and restoration. Digital processing of satellite imagery provided additional enhanced data regarding the physiognomy of the surrounding area.
Keywords: Remote Sensing, Terrestrial Laser Scanning, Satellite Imagery, Building Documentation, Cultural Heritage.

28th CIPA Symposium "Great Learning & Digital Emotion", 28 August - 1 September 2021, Beijing, China.
[PERSON] University of Thessaloniki, Faculty of Engineering, School of Civil Engineering,
Lab. of Photogrammetry - Remote Sensing, Thessaloniki, Greece - [EMAIL_ADDRESS]
## 1 Introduction
Building documentation is a powerful tool that could contribute to a variety of engineering issues, including among others urban planning, identification and interpretation of buildings, monitoring and management of sites and cultural landscapes, as well as protection, conservation and restoration of cultural heritage ([PERSON] et al., 2014; [PERSON], 2011; [PERSON] et al., 2012).
Technological advances regarding information acquisition of buildings presenting a cultural value have led to the development of various techniques such as topographic, photogrammetric and ground-based remote sensing techniques or even their combination ([PERSON], 2006; [PERSON] et al., 2013). Especially remote sensing techniques may contribute to monitoring and interpretation of the site, while a combination of other conventional techniques (topographic surveys and measurements, photogrammetric methodologies) could also be used in structural monitoring ([PERSON] et al., 2010).
Data collection for the documentation of the built environment in three dimensions (3D), as well as the increasing demands in the generation of facade models, have turned the research interest to laser scanning, which could be considered an efficient method regarding the digitization of building facades through the generation of dense 3D point clouds.
The speed of acquiring high-density data and the automated processing which offers high accuracy have led to the wider use of terrestrial laser scanners (TLS) in building documentation ([PERSON] and [PERSON], 2007; [PERSON], 2005). Satellite imagery could also provide complementary information, yet necessary in terms of monitoring possibilities of the surrounding area or change detection in cultural landscapes ([PERSON], 2019).
This paper presents the study of a mansion house of cultural significance, built in the middle of the 18th century in northwestern Greece, using terrestrial laser scanning techniques for facade documentation. Satellite images were also exploited for monitoring and interpretation purposes. The scanning process included multiple external scans of the main facade of the building, which were registered using artificial targets in order to form a single colored 3D model. Further processing resulted in a model that could offer measurement possibilities valuable to future plans and designs for preservation and restoration. Digital processing of the satellite imagery provided additional enhanced data regarding the physiognomy of the surrounding area.
## 2 Study Area
The study area is located at a medium-high altitude (630 m) in the northwestern part of Greece, in the region of Western Macedonia, and concerns a preserved mansion in the city of Kastoria and its surroundings (Figure 1). The city is located on a peninsula on the western shore of Lake Orestiada and has a long cultural history, which is reflected in the older parts of the city.
Apart from its high architectural and urban value, the city also presents historical, geographical and archaeological value through its landscape diversity ([PERSON], 2012). Rich elements of Byzantine culture (Byzantine Justinian walls and medieval churches), as well as traditional mansions of the 17th and 18th centuries, unique for their architectural design, are located in the city center (the Doltso and Apozari traditional districts) ([PERSON], 1989). The lake surrounding the city is included in the Natura 2000 network, offering unique habitats for many endangered fauna and avifauna species ([PERSON], 2015).
### Skoutaris Mansion Case Study
Two main historic districts have been maintained in the city center of Kastoria, Doltso and Apozari, where mansions with local architecture preserve the traditional character of the area.
Skoutaris Mansion is located on the south side of the city, along the coast of the lake in the Doltso traditional district, and is a representative building of the area (Figure 2). The ground plan has a 'Π' shape, with windows on the first floor, while the windows on the second floor stand out for their larger size. The main entrance of the building is located on the south side, between the two basements. There is also a courtyard with vegetation both on the main side of the building and at the back.
## 3 Materials and Methods
In order to extract information regarding the mansion, high resolution satellite data and terrestrial laser scanning techniques were exploited. The satellite images were used to locate the building, interpret the physiognomy of its surrounding area, and monitor the district, while the terrestrial laser scanner was used to obtain colored 3D point clouds that give a realistic impression of the structure's facade and enable the creation of highly detailed plans and designs.
### Satellite Data
To obtain detailed information about the city of Kastoria and the area where the Skoutaris Mansion is located, high resolution multispectral QuickBird satellite imagery acquired from DigitalGlobe on 2010-07-29 was used (Figures 3 and 4). It is a cloud-free, standard ortho-ready Level-2A product (radiometrically and geometrically corrected), with projection information: UTM, zone 34, spheroid and datum WGS 84. The QuickBird satellite was launched on October 18, 2001. The sensor has four multispectral bands with a resolution of 2.44 m at nadir: Band 1 - Blue (0.450-0.520 um), Band 2 - Green (0.520-0.600 um), Band 3 - Red (0.630-0.690 um), Band 4 - Near Infrared (0.760-0.900 um) (QuickBird Information Sheet, 2021).
Further digital processing of the image was performed to extract useful information for additional interpretation and monitoring of the surrounding area. The original image was enhanced through decorrelation stretch algorithm, which applies a contrast stretch to the principal components of the image in order to decorrelate the bands and produce a strongly colored output image with high contrast using the Principal Components Transformation (ERDAS Field Guide, 2011). After the implementation of spectral enhancement, areas covered by vegetation are highlighted, buildings are delineated as shadows are minimized and color differentiations in the lake are more evident (Figure 5).
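The decorrelation stretch can be sketched in a few lines of NumPy: project the centered pixels onto the principal components, scale each component to a common target standard deviation, and rotate back to band space. This is an illustrative implementation, not the exact ERDAS algorithm:

```python
import numpy as np

def decorrelation_stretch(img, target_std=50.0):
    """Decorrelation stretch: equalize the variance along the principal
    components of an (H, W, bands) image, then rotate back to band space."""
    h, w, b = img.shape
    x = img.reshape(-1, b).astype(float)
    mean = x.mean(axis=0)
    cov = np.cov(x - mean, rowvar=False)
    evals, evecs = np.linalg.eigh(cov)
    # scale each principal component to the target standard deviation
    scale = target_std / np.sqrt(np.maximum(evals, 1e-12))
    t = evecs @ np.diag(scale) @ evecs.T
    out = (x - mean) @ t + mean
    return out.reshape(h, w, b)
```

With a common target standard deviation, the output bands come out decorrelated with equal variance, which is what produces the strongly colored, high-contrast result described above.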
The final step of digital processing included the construction of a model to extract information regarding the rooftops of the buildings sited in the surrounding area of Skoutaris Mansion. Exploiting the high resolution that QuickBird imagery offers, an object-oriented approach was followed, employing spatial components in addition to spectral properties.
Based on the model, visual image interpretation cues for a feature are quantified, while machine learning components are trained using these cues which are ultimately applied to the imagery in order to extract the features. The quantification is achieved through algorithms which perform raster contouring, but also incorporate object and vector level processing in order to yield a spatially matched, precise shape depicting the studied features. The generated output accurately reflects the image content (IMAGINE Objective, 2010).
The workflow of this study included the following steps: raster pixel processing, segmentation, probability and size filtering, raster to vector conversion, vector object operations and vector clean-up operations. The results of the performed model are presented in Figure 6. The roof of Skoutaris Mansion in the yellow circle, as well as the roofs of the surrounding buildings, are depicted in red color.
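The segmentation and size-filtering steps of the workflow can be illustrated with a minimal stand-in (pure NumPy, not the IMAGINE Objective implementation): label the 4-connected regions of a binary roof-candidate mask and discard regions below a minimum pixel count:

```python
import numpy as np

def size_filtered_segments(mask, min_size):
    """Label 4-connected regions of a boolean mask and drop regions
    smaller than min_size pixels; a minimal stand-in for the
    segmentation and size-filtering steps of the extraction workflow."""
    labels = np.zeros(mask.shape, dtype=int)
    current = 0
    for start in zip(*np.nonzero(mask)):
        if labels[start]:
            continue  # pixel already belongs to a labeled region
        current += 1
        stack, region = [start], []
        while stack:  # depth-first flood fill over 4-neighbours
            r, c = stack.pop()
            if (0 <= r < mask.shape[0] and 0 <= c < mask.shape[1]
                    and mask[r, c] and not labels[r, c]):
                labels[r, c] = current
                region.append((r, c))
                stack += [(r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)]
        if len(region) < min_size:
            for rc in region:  # region too small: treat as noise
                labels[rc] = 0
    return labels
```

The surviving regions would then feed the raster-to-vector conversion and vector clean-up stages of the workflow.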
The final layer representing the rooftops of the buildings could be very useful for interpretation purposes. Ambiguities related to the roof variations in size, color and shading due to different materials used or different orientations towards the sunlight were addressed using several representative roof samples.
The extracted information could extend the monitoring capabilities regarding the surrounding area and facilitate the detection of possible changes through time in future studies (time series analysis).
### Laser Scanner data
Terrestrial laser scanning could be considered an effective tool in the civil engineering domain for digitizing large objects or entire scenes in order to acquire three-dimensional spatial information. The main steps that are followed include data acquisition, data process and data visualization ([PERSON], 2003). TLS systems are active devices emitting their own radiation. They can accurately reconstruct scanned buildings through the generation of millions of 3D points, providing higher resolution and accuracy on building facades and low sloped roofs ([PERSON], 2008). The 3D point clouds that are generated include points that have location coordinates in space, as well as color information and they represent the scanned building ([PERSON] et al., 2010).
In this study, a FARO Focus 3D S120 phase-based laser scanner was used, provided by the Laboratory of Photogrammetry - Remote Sensing of the School of Civil Engineering at AUTH (Figure 7). The Focus 3D is a high-speed 3D laser scanner that uses advanced laser technology (class 3R laser) for detailed measurement and documentation. It is an autonomous portable system able to acquire 3D point clouds of objects, emitting an invisible laser beam with a wavelength of 905 nm. Its range is 0.60 m to 120 m, with a vertical field of view of 305° and a horizontal field of view of 360°. It is characterized by a high measuring speed of up to 976,000 points per second and a ranging accuracy of up to ±2 mm.
The scanner is also equipped with an internal color camera, giving the ability to produce photorealistic 3D color scans (color overlay). The generated point clouds and images are saved on the inserted SD memory card and can thus easily be transferred to another device (Faro Laser Scanner Focus3D User Manual, 2013).
Figure 5: Skoutaris Mansion in the black circle (QuickBird subset image after decorrelation stretch).
Figure 6: Roofs in red color acquired from QuickBird subset image through object-oriented feature extraction (roof of Skoutaris Mansion in the yellow circle).
The scanner was set up horizontally on a tripod. During scanning, the inclination of the laser pulses was changed by a reflecting mirror; the pulses were reflected from the surface of the building, and the reflected laser signals were received back at the scanner. Through this process, dense 3D coordinate information was acquired efficiently and accurately across the entire surface. Colored scan recording was also enabled, allowing the device to take color photos of the scanned environment with the integrated color camera. These photos were taken right after the laser scan and were used during point cloud processing to automatically colorize the recorded scan data. The scan profile was set to outdoor conditions, fitting the needs of the scene and the desired scan quality.
A total of four external scans were acquired to create the cloud of the main building facade. The position of the scanner in all four scans is presented in Figure 8. The scans were acquired consecutively and continuously in order to achieve the same external lighting conditions. The duration of each scan was approximately 8 minutes (expected scan time is in accordance with the selected resolution, quality value and scan range).
In order to ensure that the system would generate reference points and register signal contacts, artificial targets were placed in the scene. The targets that were used to extract distinct points were five spheres of bright material. The spheres were placed in positions that covered the scene horizontally and vertically, at different heights and were completely visible in all scans (Figure 9).
## 4 Results
The acquired data that carried highly detailed spatial information were imported into SCENE and Pointools Edit software programs for further processing which included point cloud visualization. A unified point cloud was obtained through the control points, the coordinates of which guided the registration of the four adjacent point clouds. The transformation of raw point clouds into a unified coordinate system uses scanned targets in a local coordinate system to solve the transformation parameters. After registering the scans using corresponding points, the software constructs a non-redundant surface representation, where each part of the measured object is only described once. The scanned building in 3D along with its surrounding landscape is presented in Figure 10.
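The transformation-parameter solution from corresponding targets (here, the sphere centres) has a well-known closed form. The sketch below shows the SVD-based Kabsch/Horn method as a generic illustration, not the algorithm SCENE actually implements:

```python
import numpy as np

def rigid_transform(src, dst):
    """Least-squares rigid transform (rotation R, translation t) mapping
    the (n, 3) points src onto dst, via SVD (Kabsch/Horn method)."""
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    u, _, vt = np.linalg.svd(src_c.T @ dst_c)
    d = np.sign(np.linalg.det(vt.T @ u.T))  # guard against a reflection
    r = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
    t = dst.mean(axis=0) - r @ src.mean(axis=0)
    return r, t
```

Five well-distributed, non-collinear targets, as used here, exceed the three point pairs this solution minimally requires, which keeps the estimate robust to small measurement errors on individual sphere centres.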
The main facade of Skoutaris Mansion after the scanning process is presented in Figure 11. The generated model can be used for measuring distances and angles by clicking on distinct points. Using the facade's ortho-projection, measurements of basic elements can be made directly on the image. The measurement capabilities of the building model, in combination with the detection and monitoring capabilities offered by the enhanced satellite images of the area, could provide useful information for the restoration and preservation of the mansion in the future.
## 5 Conclusions
During the last decade, TLS techniques have played a leading role in building documentation, offering valuable tools regarding civil engineering issues such as urban planning, monitoring, reconstruction, preservation and restoration of buildings.
Cultural heritage documentation in particular could benefit from the extraction of useful information through the exploitation of terrestrial laser scanning along with satellite data. New digital technologies can provide fast and accurate 3D documentation and measurement capabilities of space and structures compared to conventional survey techniques. TLS is a non-invasive technique that allows continuous monitoring without coming into physical contact with the scanned object (very useful especially in unreachable places).
Acquisition of detailed information for the comprehensive determination of structures' facades and types of materials can be achieved by generating reliable and detailed 3D models. Furthermore, the acquired data in digital format can be integrated into CAD systems for measurements or change detection studies (e.g. 3D models of cultural buildings acquired before and after the restoration).
This paper presents the study of a mansion house of cultural significance using TLS techniques for facade documentation and satellite data for complementary information about the surrounding area. Four 3D point clouds covering the main facade of the building were generated with the FARO Focus 3D laser scanner and were connected through the overlapping areas acquired from the different station points (point cloud registration).
In order to extract additional enhanced data regarding the building's environment and the physiognomy of the surrounding area, high spatial resolution satellite data were exploited. Further digital processing was applied to facilitate image interpretation; the spectral enhancement technique considered effective for this study was the decorrelation stretch.
The exploitation of satellite imagery in order to locate the building and interpret its surrounding area could offer complementary, yet necessary information to the overall study. New satellites and sensors with various capabilities, high spatial resolution imagery, the policy of free data and other constantly evolving technologies in Earth Observation could provide continuous information, enabling the monitoring of changes in the surrounding area of the structures being studied. In addition, object-based image classification and feature extraction capabilities that are enabled through high spatial resolution imagery could build and maintain accurate geospatial content which could be directly merged into other geographic systems with minimal post-processing.
Especially in the case of cultural heritage buildings, and given the constraints related to the preservation of their physiognomy, the exploitation of satellite data can provide useful information through time series analysis and continuous remote monitoring. In addition, satellite data combined with TLS techniques can contribute to further studies regarding building extraction and 3D city modeling.
As the acquired point clouds contained information only about the geometry and the reflectivity of the surface, the capture of a series of images with the built-in digital camera followed the scanning process in order to obtain color information and link it to the points. The final point cloud representing the main facade of the Skoutaris Mansion could be used in a variety of applications and offer effective measurement capabilities of the surface geometry. Furthermore, additional scans including the other facades of the building could be used for spatial and statistical analysis, Building Information Modeling (BIM), or could be integrated into Geographic Information Systems (GIS). The scanned structure can also be presented in a navigable 3D mode in order to facilitate the observation and interpretation of the building, or to generate a navigable model for virtual tourism purposes (virtual building visualization).
Restrictions and limitations that could appear in TLS techniques include among others difficulties in obtaining all the perimetrical facades of the building due to other buildings or obstacles, errors due to incorrect artificial target placement and poor registration (uneven distribution of the targets around the scene), errors that occur in the case of buildings with high or sloped roofs, as well as missing information for areas that could not be scanned from any station point.
In this study, in order to address problems caused by insufficient data acquisition in the areas of the facade covered by the thick bush and by the debris container in front of the structure, two additional scanning stations were used (Ar1002 and Ar1003). Increasing the number of scans made it possible to record points that could not be captured from the initial scanning locations, resulting in a complete representation of the main facade. Regarding difficulties in the scanning procedure caused by existing buildings or other obstacles in close proximity to the scanned building, a combination of traditional terrestrial techniques and structure-from-motion (SfM) photogrammetry could be used.

| | Skoutaris Mansion |
|---|---|
| Maximum Point Error | 6.1 mm |
| Mean Point Error | 4.8 mm |
| Minimum Overlap | 42.1 % |

Table 1: Scan Point Statistics.

| | Good | Poor |
|---|---|---|
| Point Error | < 8 mm | > 20 mm |
| Overlap | > 25.0 % | < 10.0 % |

Table 2: Thresholds for point error and overlap (FARO SCENE User Manual, 2016).

Figure 11: Skoutaris Mansion main facade.
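As a quick sanity check, the registration statistics reported in Table 1 can be compared against the SCENE thresholds in Table 2. The threshold values below are taken from those tables; the single pass/fail criterion is our simplification of SCENE's quality report:

```python
def scan_quality_ok(max_point_error_mm, min_overlap_pct):
    """Compare registration statistics against the SCENE thresholds:
    point error should stay below 8 mm and overlap above 25 %."""
    return max_point_error_mm < 8.0 and min_overlap_pct > 25.0
```

The Skoutaris Mansion registration (6.1 mm maximum point error, 42.1 % minimum overlap) clears both thresholds comfortably.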
In conclusion, the exploitation of TLS techniques and digital processing of satellite imagery could contribute to the information collection and digital inventory technologies for the restoration and preservation of cultural heritage buildings.
## Acknowledgements
The author wishes to thank the Ephorate of Antiquities of Kastoria for providing permission to acquire the necessary data, and to acknowledge the support of surveying engineer [PERSON] for his valuable help in data collection.
## References
* [PERSON], 2006. _Integration of Laser Scanning and Photogrammetry for Heritage Documentation_. Doctoral Thesis, Institute of Photogrammetry, University of Stuttgart.
* ERDAS Field Guide(tm), 2013. Intergraph Corporation, Erdas Inc., U.S.A., 445-446.
* Faro Laser Scanner Focus3D User Manual, 2013. FARO Technologies Inc.
* Faro SCENE User Manual, 2016. FARO Technologies Inc., 40.
* Haddad, N.A., 2011. From ground surveying to 3D laser scanner: A review of techniques used for spatial documentation of historic sites. _Journal of King Saud University-Engineering Sciences_, 23(2), 109-118.
* IMAGINE Objective User's Guide, 2010. Erdas Inc., U.S.A.
* [PERSON], 2019. _Research on the potentials of remote sensing techniques and interpretation of optical satellite data in civil engineering issues_. Doctoral Thesis, Aristotle University of Thessaloniki.
* [PERSON], 1989. _Kastoria: Greek Traditional Architecture_. Melissa Publishing House, Athens.
* [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], 2012. Generation of virtual models of cultural heritage. _Journal of Cultural Heritage_, 13, 103-106.
* [PERSON] and [PERSON], 2007. Laser scanning - Principles and applications. _Proc. 3rd International Exhibition and Scientific Congress GeoSiberia_, Novosibirsk, Russia, 25-27 April 2007, 93-112.
* [PERSON], 2015. Institutional framework and protection of traditional settlements - the case of Kastoria. _Proc. Protection of Traditional Settlements and Contemporary Architectural Design_, Society for the Environment and Cultural Heritage, Kastoria, Greece, 1-13.
* [PERSON], 2008. Generating building outlines from terrestrial laser scanning. _International Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences_, XXXVII (Part B5), Beijing, China, 3-11 July 2008, 451-456.
* QuickBird Information Sheet, 2021. DigitalGlobe Advanced Ortho Series. https://earth.esa.int/oegateway/documents/20142/37627/DigitalGlobe-Advanced-Ortho-Series.pdf (3 May 2021).
* [PERSON], 2005. Cultural heritage 3D reconstruction using high resolution laser scanner: New frontiers data processing. _Proc. CIPA 2005 XX International Symposium_, Torino, Italy, 26 September-1 October 2005, 1-6.
* [PERSON], 2003. Terrestrial laser scanning technology, systems and applications. _Proc. 2nd FIG Regional Conference_, Marrakech, Morocco, 2-5 December 2003, 1-10.
* [PERSON], [PERSON], [PERSON], [PERSON], [PERSON] and [PERSON], 2013. Integrating radar and laser-based remote sensing techniques for monitoring structural deformation of archaeological monuments. _Journal of Archaeological Science_, 40(1), 176-189.
* [PERSON], [PERSON] and [PERSON], 2010. The use of terrestrial laser scanning in the renovation of historic buildings. _Proc. 8th International Symposium on the Conservation of Monuments in the Mediterranean Basin: Monument Damage Hazards & Rehabilitation Technologies_, Patras, Greece, 31 May-2 June 2010, 1-13.
* [PERSON], 2012. _Kastoria through past and present times_. ThessPrint S.A., Municipality of Kastoria.
* [PERSON], [PERSON] and [PERSON], 2014. The benefit of 3D laser scanning technology in the generation and calibration of FEM models for health assessment of concrete structures. _Sensors_, 14(11), 21889-21904.
| isprs | TERRESTRIAL LASER SCANNING AND SATELLITE DATA IN CULTURAL HERITAGE BUILDING DOCUMENTATION | A. Karagianni | https://doi.org/10.5194/isprs-archives-xlvi-m-1-2021-361-2021 | 2021 | CC-BY |
# Indoor space location model based on location service
[PERSON] (Corresponding author)

School of Resources and Environment, University of Electronic Science and Technology of China, 2006 Xiyuan Avenue, West Hi-tech Zone, Chengdu, Sichuan 611731, China; {[EMAIL_ADDRESS], [EMAIL_ADDRESS], [EMAIL_ADDRESS]}
Institute of Remote Sensing Big Data, Big Data Research Center, University of Electronic Science and Technology of China
###### Abstract
Location is the basis for the realization of location services; the completeness of location information and the way it is represented in an indoor space model directly constrain the quality of location services. Most existing indoor space models are constructed for specific applications and lack a uniform representation of location information. Several geospatial standards have been developed to meet the requirements of indoor spatial information systems, among which CityGML LOD4 and IndoorGML are the most relevant for indoor spatial information. However, from the perspective of Location Based Service (LBS), CityGML LOD4 is oriented more towards visualizing indoor space. Although IndoorGML is mainly used for indoor space navigation and provides descriptions (of geometry, topology, and semantics) that benefit indoor LBS, this standard model lacks an explicit representation of indoor location information. In this paper, from the perspective of LBS and based on the IndoorGML standard, an indoor space location model (ISLM) conforming to human cognition is proposed through the integration of the geometric, topological and semantic features of indoor spatial entities. This model offers the explicit description of location information which the standard indoor space models of IndoorGML and CityGML LOD4 do not, and can lay the theoretical foundation for indoor location services such as indoor navigation, indoor routing and location query.
## 1 Introduction
In recent years, with the development and popularization of wireless communication, positioning and internet technologies, location-based services have found many applications in traffic, logistics, emergency response and people's daily life. The focus of research on location services has also shifted from outdoor to indoor space, and the indoor space model is the prerequisite and the key to realizing location services. Location plays an important role in the field of location-aware and context-aware systems, especially in ubiquitous computing, where location is often considered an important source of context ([PERSON], 1995). Objects in the outdoor environment are usually located in a unified Euclidean coordinate system, so they can be positioned absolutely; indoor positioning, however, is influenced by the indoor environment, and the location of an indoor spatial entity can only be determined relatively, through local coordinates. When finding a route or inquiring about a place in indoor space, and influenced by human cognition, the absolute directions "east, south, west, north" are no longer suitable; the relative directions "front, back, left, right" conform better to human cognition of indoor space. Many scholars worldwide have studied indoor space models, but existing models represent only part of the indoor spatial information, such as geometric models, symbolic models and semantic models, so their scope of application is limited.
IndoorGML is a standard data model to represent, store and exchange indoor spatial information; it contains the geometric, topological and a small amount of semantic information of indoor spatial units, but lacks the description of location-related semantic information such as "upstairs", "downstairs" and "opposite". In this paper, an explicit representation of indoor location information, consisting of indoor geometric location and semantic location, is presented based on the IndoorGML standard. The proposed model integrates semantic relative location information, such as semantic direction and semantic distance, to construct an indoor space model suitable for positioning, navigation and route planning.
The rest of the paper is organized as follows: Section 2 reviews recent research progress on modeling locations; Section 3 explains the basic concepts of IndoorGML; Section 4 proposes our model based on IndoorGML; a simple use case of the proposed model is shown in Section 5; finally, Section 6 concludes this paper and puts forward future work.
## 2 Related work
Indoor space models, as well as spatial knowledge representation, have attracted much attention in the indoor LBS community. Earlier indoor space models attempted to differentiate the concepts of "geometric" and "symbolic" models ([PERSON], 1998). Objects in a symbolic model carry unique names identifying them among all entities, such as room numbers. On the other hand, locations and objects in geometric models are represented as points, areas or volumes within a reference coordinate system. From the perspective of indoor LBS and according to the structural characteristics of the model, indoor space models can be categorized as geometric space models, symbolic space models and semantic space models.
The geometric space model is usually a geometric description of indoor entities, which provides accurate geometric information in the form of coordinates; the distance between two objects can be calculated from their coordinates. In a representative geometric model, the boundary information of entities is extracted on the basis of IFC and projected onto a network in the two-dimensional plane to realize indoor path planning ([PERSON] et al., 2013). A triangular grid partition of indoor space has also been proposed ([PERSON] and [PERSON], 2007), realizing efficient obstacle-avoiding path queries by constructing a graph model based on the triangular grid. The symbolic space model represents all objects in indoor space as symbolic elements with specific ID tags, and is generally used to describe the topological relationships between objects. Symbols can be divided into sets and subsets, which can be organized in a hierarchical structure through a small amount of semantic information about indoor space objects ([PERSON] and [PERSON], 2003). The semantic space model represents the variety of entity types in indoor space, as well as their properties and relationships, and is usually associated with an ontology; an indoor navigation ontology model has been proposed based on ontology theory ([PERSON] et al., 2006).
The geometric, topological and semantic information of the indoor space model are all indispensable to indoor location service applications, and how to comprehensively consider these three types of information is the focus of research on indoor space models for indoor location services. The existing standards representing indoor spatial information (such as IndoorGML and CityGML LOD4) all contain geometric, topological, and semantic information, but CityGML LOD4 is oriented more towards visualizing indoor space, while IndoorGML, although mainly used for indoor space navigation, contains only a small amount of semantic information and lacks an explicit representation of location information. Therefore, based on the IndoorGML standard, this paper constructs an indoor space location model considering topological relations, geometric features, and semantic location information.
## 3 IndoorGML
IndoorGML is a standard data model to represent, store and exchange indoor spatial information, and an XML application schema (OGC IndoorGML, 2014; [PERSON] et al., 2010) based on GML 3.2.1 (Open Geospatial Consortium, 2007). Unlike CityGML and IFC, IndoorGML focuses primarily on the representation of indoor spatial units (cells) rather than indoor spatial features such as roofs, ceilings, floors, and walls. In IndoorGML, a cell is the smallest organizational or structural unit of indoor space, which can be represented as a room or a corridor. The entire indoor space is considered as a set of cells, and cells cannot overlap with each other. It is worth noting that each cell has a unique identifier, such as a room number, which is the only implicit semantic symbolic location information in the IndoorGML standard model. IndoorGML thus provides a standard framework to represent the geometry, network, and semantics of cells in indoor space. We briefly explain how these aspects are represented in IndoorGML in this section.
### Geometric Representation
Although the geometric representation of cells is not the focus of IndoorGML, there are three options to represent the geometric features of indoor space. As shown in Figure 1, the first option is to reference an object defined in another data set such as CityGML, which contains its geometric property. The second option is to include the geometric property of the cell within the IndoorGML data, either as a solid in 3D or a surface in 2D. The third option is not to include any geometric property of the cell (OGC IndoorGML, 2014).
### Network Representation
Topology is an important part of cell space, and IndoorGML therefore focuses on the explicit representation of topological relations. The node relationship graph (NRG) ([PERSON], 2004) represents the topological relationships between indoor spatial units, such as adjacency and connection. The NRG allows one to abstract, simplify and represent the topological relationships of 3D spaces in the indoor environment, such as rooms within a building, and makes the complex computations behind indoor navigation and routing systems tractable. The Poincare duality ([PERSON], 1984) provides the theoretical background for mapping the indoor space to the NRG representing the topological relationships. Figure 2 shows the topographic space and the corresponding NRG (OGC IndoorGML, 2014).
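The NRG described above can be sketched as a plain adjacency graph on which shortest-hop routes are found by breadth-first search. A minimal illustration follows; the room labels are hypothetical, not taken from the standard.

```python
from collections import deque

# A minimal node-relationship graph (NRG): rooms and corridors become
# nodes, adjacency/connectivity relations become edges. Room names are
# hypothetical, not taken from the paper or the IndoorGML standard.
nrg = {
    "R101": ["corridor"],
    "R102": ["corridor"],
    "corridor": ["R101", "R102", "stairs"],
    "stairs": ["corridor"],
}

def route(graph, start, goal):
    """Breadth-first search over the NRG -> shortest hop sequence."""
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in graph[path[-1]]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

print(route(nrg, "R101", "stairs"))  # -> ['R101', 'corridor', 'stairs']
```

This is exactly the kind of computation the dual-space representation enables: a routing query over rooms reduces to a graph search.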
### Semantic Representation
Only a small amount of semantic information in IndoorGML can be used to represent the symbolic location of a cell, such as a room number. This semantic information also plays a role in classifying indoor spatial units. For example, in the navigation model of IndoorGML, indoor spaces are divided into navigable spaces (such as rooms, corridors, doors) and non-navigable spaces (walls, obstacles, etc.). We can infer the connectivity between cells based on this small amount of semantic information. However, indoor location based services require richer semantic location descriptions, which we describe in the next section.
## 4 ISLM
IndoorGML is a standard data model to represent the geometric characteristics, topological features and semantic features of indoor space, and lays the model foundation for indoor navigation, path planning and other location services. However, the standard lacks an explicit representation of indoor location information, and it is difficult for it to meet the growing demand for indoor LBS. Therefore, in this paper, an indoor space location model (ISLM) containing location information is proposed based on IndoorGML. Figure 3 is a UML diagram of the ISLM presented in this paper. As shown in Figure 3, the orange part of the figure is the representation of the indoor space objects in the IndoorGML core model, and the lower green part is the location information added to the indoor spatial units based on IndoorGML. In the ISLM, the location of a spatial cell is defined in terms of both geometric location and semantic location. The detailed definition is described below.

Figure 1: Three options to represent geometry in IndoorGML.

Figure 2: The topographic space and the corresponding NRG
### Topological Representation in ISLM
As shown in Figure 3, the topological relationships between spatial units in the ISLM are similar to those in IndoorGML. According to the notion of geographic features defined by ISO 19109, objects in the real world can be represented as _PrimalSpaceFeatures_ consisting of the _CellSpace_ and _CellSpaceBoundary_ classes. A _CellSpace_ represents one spatial object, such as a room or a corridor, and a _CellSpaceBoundary_ describes the boundary of each spatial object, usually represented as a door or a wall. The _State_ class is associated with the corresponding _CellSpace_ class: it represents a spatial unit in primal space and a node in the dual space depicted in Figure 2. A _Transition_ is an edge that represents the adjacency or connectivity relationships among the nodes representing spatial units in dual space.
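The class structure of Figure 3 can be sketched, under simplifying assumptions, as plain data classes. The attribute names and types below are our own abbreviations of the UML diagram, and the example values are illustrative, not the normative model.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

# Hedged sketch of the ISLM classes from Figure 3; attribute names and
# types are simplified assumptions, not the normative UML.

@dataclass
class GeometryLocation:
    start: Tuple[float, float, float]  # StartLocation (x, y, z)
    end: Tuple[float, float, float]    # EndLocation (x, y, z)

@dataclass
class SemanticLocation:
    description: str = ""  # SemanticDescription, e.g. floor/room number
    direction: str = ""    # SemanticDirection: front/back/left/right
    order: str = ""        # SemanticOrder, e.g. "second exit"
    distance: str = ""     # SemanticDistance, e.g. "three meters away"

@dataclass
class CellSpace:
    cell_id: str
    geometry: Optional[GeometryLocation] = None
    semantics: Optional[SemanticLocation] = None

@dataclass
class Transition:
    frm: str  # State id of the origin cell
    to: str   # State id of the target cell

# Illustrative cell: id and coordinates are hypothetical
room = CellSpace(
    "2014",
    GeometryLocation((200, 50, 60), (100, 80, 90)),
    SemanticLocation(description="2nd floor, geospatial laboratory"),
)
print(room.cell_id)  # -> 2014
```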
### Geometric Representation in ISLM and Geometric Location
The second option defined by IndoorGML (Figure 1) is used to represent geometric information in the ISLM, which includes the geometric property of a cell within the ISLM: a cell is either a solid in 3D or a surface in 2D. When the _CellSpace_ is represented as a _gml:Solid_ in three-dimensional space, the corresponding _CellSpaceBoundary_ is represented as a _gml:Surface_; when the _CellSpace_ is represented as a _gml:Surface_ in two-dimensional space, the corresponding _CellSpaceBoundary_ is represented as a _gml:Curve_. The geometric information is thus modelled in the _CellSpace_ class in the ISLM. In dual space, a _State_ is geometrically represented as a _gml:Point_ and a _Transition_ as a _gml:Curve_.

Figure 3: UML diagram of ISLM
In the ISLM, the _GeometryLocation_ class defines a cell in geographic space measured with respect to a standard coordinate system, determined by a _StartLocation_ and an _EndLocation_ represented by geometric coordinates such as (X\({}_{1}\), Y\({}_{1}\), Z\({}_{1}\)) and (X\({}_{2}\), Y\({}_{2}\), Z\({}_{2}\)). A _CellSpace_ with a geometric location can be used as a reference object for the semantic location of another _CellSpace_. The geometric information is associated with _CellSpace_ or _State_ rather than _CellSpaceBoundary_ or _Transition_, since the place a person wants to reach can usually be represented as a cell instead of the boundary of a cell; the geometric information of _CellSpace_ is therefore our focus rather than the location of _CellSpaceBoundary_. Likewise, a _Transition_ does not contain any geometric location information: it represents the topological relationship between cells in cellular space, so the implicit relative location relationships can be inferred from the NRG represented in dual space. In this paper, we focus on the explicit relative location relationships of the semantic description, which are described in detail in Section 4.3.
### Semantic Location Representation in _ISLM_
As shown in Figure 3, _IndoorLocation_ is the root of indoor location information. Based on _IndoorLocation_, there are two general ways to define indoor location in detail: _GeometryLocation_ and _SemanticLocation_. The _GeometryLocation_ class was described in Section 4.2; in this section, the _SemanticLocation_ class is introduced in detail.
In the ISLM, the _SemanticLocation_ class, consisting of _SemanticDescription_, _SemanticDirection_, _SemanticOrder_, _SemanticTopology_ and _SemanticDistance_, represents the relative location of a _CellSpace_ and of the boundary of a _CellSpace_ (_CellSpaceBoundary_) in primal space. A cell in indoor space (such as a room) can be semantically identified by floor number, room number, room name and room type through the semantic description information of the _SemanticDescription_ class. Users of mobile terminals are usually navigated by the directions "east, south, west and north" provided by navigation systems in outdoor space, but this approach does not apply to indoor navigation: in the indoor environment there are more turns than in the outdoor environment, people easily get lost, and it is more difficult to identify the direction after a turn. From the perspective of indoor spatial cognition, describing direction using "front, back, left, right" is more suitable for relative positioning in indoor space; this is captured by the _SemanticDirection_ class. The definition of semantic direction is shown in Figure 4; the direction of front and back is decided by the user's position relative to the room (inside or outside). In addition, since spatial entities such as doors and windows are usually represented as the boundaries of cells, it is difficult to use geometric coordinates for their absolute positioning. People can move through indoor space using location descriptions such as the first exit, the next exit, the second exit, and the last exit; this kind of location description is provided by the _SemanticOrder_ class. Finally, the _SemanticTopology_ class explicitly defines adjacency, connectivity, accessibility and separation relationships among the indoor cells.
In order to support optimal indoor route queries, nearest-neighbour queries and other indoor location services, a semantic distance description (_SemanticDistance_) is added to the ISLM in this paper. The _SemanticDistance_ class defines a distance description (including a numerical value of distance) and a semantic description such as "three meters away from the known location".
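One possible way to operationalize _SemanticDirection_ and _SemanticDistance_ is sketched below: the target's bearing relative to the user's heading is classified into the front/back/left/right vocabulary, and a distance phrase is generated. The 90-degree sectors and the exact phrasing are our assumptions, not part of the model definition.

```python
import math

# Hedged sketch: classify a target's bearing relative to the user's
# heading into the "front/back/left/right" vocabulary of
# SemanticDirection, and phrase a SemanticDistance.
def semantic_direction(user_xy, heading_deg, target_xy):
    dx, dy = target_xy[0] - user_xy[0], target_xy[1] - user_xy[1]
    bearing = math.degrees(math.atan2(dx, dy))       # 0 deg = +y axis
    rel = (bearing - heading_deg + 180) % 360 - 180  # fold into [-180, 180)
    if -45 <= rel <= 45:
        return "front"
    if rel > 135 or rel < -135:
        return "back"
    return "right" if rel > 0 else "left"

def semantic_distance(user_xy, target_xy):
    d = math.dist(user_xy, target_xy)
    return f"{d:.0f} meters away from the known location"

# Facing the +y axis, a target on the +x axis lies to the right
print(semantic_direction((0, 0), 0, (5, 0)))  # -> right
print(semantic_distance((0, 0), (3, 0)))      # -> 3 meters away ...
```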
## 5 Use Case of _ISLM_
In this section, a simple example is used to instantiate the description of location information in the ISLM. As shown in Figure 5, a three-dimensional model of the laboratory area on the second floor of a building is presented. A room is represented as a cell and a door as the boundary of the room. In this case, the geometric location of a cell is determined by its start position and end position, represented as (200, 50, 60) and (100, 80, 90) in the Cartesian coordinate system.
In order to better describe the semantic location information of the spatial units, the three-dimensional model shown in Figure 5 is mapped to the two-dimensional plane, as shown in Figure 6. The orange part of the figure represents a cell in indoor space, which is defined by _SemanticDescription_ as "a room located on the 2nd floor, called the geospatial laboratory, which has the function of accommodating other spatial entities". The semantic location of the green part of the figure can be defined relative to a known location. For example, the route from room 2014 to the lift can be described semantically as: turn left until the corner and then turn right; the location of the last door is the destination. Similarly, the relative semantic location of room 2105 can be described as "go out and turn right, the second room on the left" or "the room adjacent to the right side of room 2014; the room opposite is the target".
## 6 Conclusion and Future Work
IndoorGML is an international standard to exchange and store indoor spatial information. It contains not only the geometric and topological information of indoor spatial units, but also a small amount of semantic information. However, the model lacks a representation of location-related semantic information. Based on the IndoorGML standard, this paper constructs an indoor space location model containing geometric, topological and semantic location information, which can lay the theoretical basis for indoor space models for indoor location services. In future work, we need to study methods of data organization on the basis of the ISLM in order to apply it to specific applications; the definitions of the relevant location information should be further refined or improved to suit specific location service scenarios, such as the definition of semantic distance.
## Acknowledgements
This research is supported by National Natural Science Foundation of China (No. 41471332, 41101354 and 41571392), the National High Technology Research and Development Program of China (No. 2016 YFB052300) and the Fundamental Research Funds for the Central Universities (No. ZYGX2015J1113).
## References
* [1] [PERSON], 1995. A system [PERSON] for context-aware mobile computing. PhD thesis, Columbia University.
* [2] [PERSON], 1998. Supporting location-awareness in open distributed systems. PhD thesis, Department of Computing, Imperial College London.
* [3] [PERSON], [PERSON], [PERSON] et al., 2013. The IFC based path planning for 3D indoor spaces. Advanced Engineering Informatics, 27(2), 189-205.
* [4] [PERSON], [PERSON], 2007. Efficient triangulation-based pathfinding. AAAI, 1338(9), 161-163.
* [5] [PERSON], [PERSON], 2003. On a location model for fine-grained geocast. Lecture Notes in Computer Science, 2864, 18-35.
* [6] [PERSON], [PERSON], [PERSON] et al., 2006. Semantically enriched navigation for indoor environments. International Journal of Web & Grid Services, 2(4), 453-478.
* [7] Open Geospatial Consortium, 2014. OGC IndoorGML, OGC 14-005r3.
* [8] [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], 2010. Requirements and Space-Event Modeling for Indoor Navigation, OGC 10-191r1.
* [9] Open Geospatial Consortium, 2007. OpenGIS Geography Markup Language (GML) Encoding Standard, Version 3.2.1, OGC 07-036.
* [10] [PERSON], 2004. A spatial access oriented implementation of a topological data model for 3D urban entities. GeoInformatica, 8(3), 235-262.
* [11] [PERSON], 1984. Elements of Algebraic Topology. Addison-Wesley, Menlo Park, CA.
Figure 6: Floor plan of laboratory area
| isprs | INDOOR SPACE LOCATION MODEL BASED ON LOCATION SERVICE | Y. Zhou, G. Zeng, Y. Huang, X. Yang | https://doi.org/10.5194/isprs-archives-xlii-4-w7-49-2017 | 2017 | CC-BY |
**INTEGRATION OF REMOTE SENSING AND GIS IN LANDUSE PLANNING FOR SUSTAINABLE NATURAL RESOURCE MANAGEMENT WITHIN THE MOUNT CAMEROON REGION, WEST AFRICA**
**[PERSON]**
**Surveys Department, Buea**
**South-West Province, Republic of Cameroon**
**Working group VII/2**
**KEY WORDS**: GIS, Remote Sensing, Landuse Planning, Forest, Agriculture.
**ABSTRACT**
Population growth in developing countries is a key factor in environmental degradation. The Mount Cameroon region in West Africa is one of the sites where the equatorial rainforest is disappearing at a fast pace due to agricultural plantation expansion and urban development. We shall see how remote sensing data and GIS can contribute to finding solutions to pertinent problems in land use planning.
Mount Cameroon is located at latitude 4°10′ and longitude 9°20′. The region covers an area of 250,000 hectares with a variety of land use patterns. The mountain is the highest peak in West Africa (4100 m). Many activities leading to environmental degradation are carried out by the local populations.
It is for this reason that the government of Cameroon, with assistance from international NGOs, has created a project to develop and implement a land use plan for the sustainable management of natural resources within the Mount Cameroon region. There are many land cover types and a multitude of uncontrolled uses. The Cameroon Development Corporation (CDC) is the major stakeholder within the study area and occupies 85,374 hectares of land. Part of this land is developed with industrial agriculture, dwelling units and infrastructure, and the other part is covered with rainforest reserved for future plantation expansion. This creates a conflict between economic development and natural resource conservation.
A base map of the project area was developed and all the land use patterns were mapped as individual thematic layers: administrative boundaries, forests, plantations, settlements, coastal zones (mangroves), water courses, road network, soil type and wildlife.
The significant achievements in participatory land use planning within the project region result from the remote sensing and GIS applications presented in this paper. The method of data acquisition and input into the GIS database will be discussed.
The paper will also elucidate how GIS and remote sensing techniques are applied in urban planning, agriculture and nature conservation, and spatial data acquisition and dissemination methods will be discussed.
All stakeholders' interests were generated as thematic layers (overlays) in the GIS database and presented as maps used in planning meetings at village and regional levels.
The land use map shows the best options as decided by the communities concerned; it responds to the following questions:

* What should be done, which changes to the land are selected, and where should they be implemented?

MapInfo 4.1 and Arc/Info GIS software were used to generate spatial solutions to these problems.
Finally, land is rationally used according to the plan conceived and agreed by all stakeholders, and the resources are now managed for the benefit of present and future generations.
## Introduction:
In Cameroon, the concept of land use planning started only a few years ago, initiated by conservation projects working on natural resource management. Bilateral cooperation agreements signed by the Cameroon government with Germany and Britain have come to the assistance of local populations within the Mount Cameroon region to develop a land use plan for the sustainable management of natural resources. Through their technical services, GTZ and DFID, working in collaboration towards a common goal on Mount Cameroon, there arose a need to develop a base map of the project area at scale 1:200,000 and a current land use map at scale 1:75,000.
## 1 The Project Region
For administrative and management purposes, the project region is defined by administrative unit boundaries. These include six subdivisions and one district, as follows: Mbonge subdivision, Buea, Tiko, Limbe, Bamusso and Idenau district, where we find Debuncha, the second wettest place in the world after Cherrapunji. These boundaries are natural (water courses, hills) or artificial (bench marks).
### Landuse types in the Project Region
Landuse has been classified in a manner that can be handled in the GIS database. The classes include: agriculture (industrial, subsistence), wildlife, forest (protected and communal forest), settlements, water course network, and road network.
### Landuse Allocation
#### 2.2.1 Agriculture:
This comprises industrial, commercial, and subsistence agriculture.
Industrial agriculture is developed by the Cameroon Development Corporation (CDC) and Plantations PAMOL du Cameroun. The CDC is the major stakeholder in the study region and deals with oil palm, tea, pepper, rubber and banana planting and exporting; it is also the second largest employer after the Cameroon government. Commercial agriculture is managed by individuals who plant and market rubber (latex), cocoa, palm oil and also foodstuffs.
As concerns subsistence agriculture, it is actually difficult to find farmers working solely for their own consumption, because everyone has been constrained by the economic crisis to generate income for subsistence; farmers therefore always sell part of their foodstuffs.
#### 2.2.2 Wildlife:
Wildlife has been considered a land use type because of the socio-economic impact it has on the national economy and on ecotourism. Most endangered wildlife species are found on Mount Cameroon.
The location of hunters'camps on the mountain according to Global Positioning System (GPS) mapping are within the elephant grazing range.
#### 2.2.3 **Forest**:
Two main categories of forest exist within the study area: protected forests, managed by the state for production and conservation purposes, and communal forests, which belong to the local community.

Settlements are built-up areas, either rural or urban; these include villages, towns, and cities.
#### 2.2.4 **Water Courses:**
These are rivers, lakes, and the ocean: River Mungo, Lake Barombi Kotto, Lake Barombi Mbo, River Meme, and the Atlantic Ocean.
#### 2.2.5 **Road Network:**
This is the landuse type that links one village, city, or town to another. The various categories in the GIS database are listed below.
## 3 Development of the Base Map for the Project Region
Field mapping was undertaken by the landuse planner and the cartographer. During the field work, meetings were held with the local and administrative heads, who showed the limits of their administrative units. This information was marked on an existing IGN map at scale 1:200,000, which contained the following thematic layers:
- Principal tarred roads.
- Principal untarred roads.
- Foot paths.
- Secondary road
- Plantation road
- Administrative Boundaries
- Settlements (towns, villages)
- Water Course (water network)
- Contour lines.
- Roads network
- Protected Forests.
This information was digitized using PC ARC/INFO software, and a new project map was produced at scale 1:200,000.
With the project region map now available in digital and hard copy formats, there was a need to update the base map with new data from the various components of the project. During the previous years of the project pilot phase, much data had been collected with the GPS, some from old maps at various scales, and some from the existing literature. There was now a need to install a Geographic Information System (GIS) unit in the project.
### 3.1 **Stakeholders in the study area**.
- The Government of Cameroon GOC.
- Mount Cameroon Project MCP
- International NGOs: GTZ, DFID, GEF
- Timber Exploiters
- Honey Collectors
### 3.2 Landuse Conflict in the Project Region
The CDC plantations were created in 1947 by the Germans. They have demarcated these plantations with property beacons. The plantations cover 85,374 hectares of land, two-thirds of which lie within the Mount Cameroon Project region.
This land extends from 0 to 1,800 meters above sea level, and most of it is still under forest cover. The Cameroon Development Corporation (CDC) intends to expand the plantations within their leasehold boundaries, the Mount Cameroon Project intends to conserve biodiversity within the CDC land, and farmers are encroaching on the protected forest to develop agricultural plantations. There is a need to use GIS and remote sensing techniques to find a solution.
## 4 Remote Sensing application:
To better understand the present landuse, remote sensing data is required. A Landsat TM 1986/87 scene of the project region was ordered, and the existing plantations and forest cover were mapped. Information gathered from CDC estate managers showed that the suitable growing altitude for oil palms is from 0 to 400 m above mean sea level, particularly around the West Coast near the Atlantic Ocean, where there is high-quality equatorial rain forest. From the image spectral signatures, different types of forest were mapped (dense forest, montane forest, degraded forest). This Landsat image is used as a raster background for further interpretation. The image is available in digital format on CD-ROM and as a hard copy at scale 1:75,000 in an Arc/Info format. Although the image was quite old, it was the only image of Mount Cameroon with less than 30% cloud cover.
## 5 The Mount Cameroon GIS database
MapInfo 4.1 is the software used in the MCP GIS; the data are located in the folder named "GIS data" in the root directory of hard disk drive C.
The data tables are as follows:
| Folder/Sub-Folder | Content |
|---|---|
| C:\GIS data | Workspaces for print layouts |
| C:\GIS data\GPS data | GPS point files |
| C:\GIS data\Layout | Scalebar tables |
| C:\GIS data\Scans | Scanned maps and MapInfo registration |
| C:\GIS data\Theme | Thematic map tables |
| C:\GIS data\topo | Topographic map tables |

Table 1. Data tables.
## 6 GIS in Landuse Planning
Landuse planning is essentially concerned with what should be done, and where. Hence, maps form a key element in the presentation of results.
### 6.1 Natural Resource Management
#### 6.1.1 Community Forest Development:
With the overlay of the CDC leasehold boundary and the altimetric theme, it is clear that the leasehold extends up to 1,800 m above mean sea level, into an area of high-quality forest.
An overlay of the crops thematic layer with the topographic layer (altitude) showed that crops have been planted up to 150 m above sea level, while the area tested suitable for crops (oil palm) lies from 0 to 400 m above sea level. The suitable area for plantation expansion was digitized and stored in the GIS database as a thematic layer. A map was produced showing the boundary of the leasehold, on which the suitable areas for plantation expansion were clearly visible. This map was brought to the Environmental Impact Assessment (EIA) meeting on CDC plantation expansion, attended by all stakeholders within the study area.
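The overlay logic described above (suitability defined as cells that are inside the leasehold and within the 0-400 m altitude band) can be sketched with simple boolean raster algebra. A minimal sketch with NumPy; the grid values below are invented for illustration:

```python
import numpy as np

# Illustrative 4x4 grids: elevation (m above sea level) and leasehold membership (1 = inside).
elevation = np.array([
    [  50,  120,  380,  900],
    [ 200,  410, 1500, 1750],
    [  90,  300,  600, 1800],
    [  10,  399,  400,  401],
])
leasehold = np.array([
    [1, 1, 1, 1],
    [1, 1, 1, 1],
    [0, 0, 1, 1],
    [1, 1, 1, 0],
])

# Suitable for oil palm expansion: inside the leasehold AND between 0 and 400 m.
suitable = (leasehold == 1) & (elevation <= 400)

# Cells inside the leasehold but above 400 m are candidates for protection instead.
protect = (leasehold == 1) & (elevation > 400)

print(suitable.sum(), protect.sum())  # 7 6
```

The same two-mask pattern scales to real rasters read from a DEM and a digitized leasehold polygon; only the array sources change.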
It was evident that, though the area in question falls within the CDC leasehold land, it was not useful for plantation expansion. It was agreed in the meeting that a protected forest should be developed within this area not suitable for plantation development.
GIS technology has brought the forest to a common table of discussion.
| Year of creation | Name of forest reserve | Area (hectares) | Remarks |
|---|---|---|---|
| 1939 | Bomboko Forest Reserve | 26,677 | Hunting and poaching; production forest reserve, partly degraded for farms, with a foodstuff market in it |
| 1952 | Mokoko River Forest Reserve | 9,065 | Production forest reserve |
| 1952 | Bakweri Forest Reserve | 9,324 | Production forest reserve; no longer existing, occupied by farmland |
| - | Meme River Forest Reserve | 28 | Existing |
| 1953 | Buea Fuel Plantation | 300 | Planted with eucalyptus trees; exploited for fuel wood |

Table 2. Forest reserves in the study area.
#### 6.1.2 Agriculture sites location:
One of the themes in the GIS database is the soil map of the Mount Cameroon region produced by IRA Ekona (Institut de la Recherche Agronomique). This map was digitized and serves as a background for the location of the various crops. By overlaying the crops on the soil map, it is visible that certain crops, like oil palm, tea, rubber, and banana, have specific soils on which the yield is high. The GIS will assist in locating those soil types as potential areas for similar crop development. Pixel clusters will be created based on the similarity of the multispectral reflectance characteristics of the image to locate the various soil types and their suitability for plantation development.
#### 6.1.3 Urban Planning:
Given the high cost involved in aerial photography, remote sensing data and GIS are important tools in urban planning. Buea, headquarters of the South West Province, is looking forward to using SPOT images for mapping its urban expansion and landuse.
This image will be at scale 1:25,000 and will be available in digital and analogue formats. Essential foundations for planning long-term urban development measures can be gained very promptly by remote sensing. Development measures cannot be planned in a well-founded way if there is no basic inventory of the present landuse potentials and limiting factors. Remote sensing is advantageous because optical information can be received with a temporal overview and a sectoral overlap. The monitoring of urban expansion and the analysis of unplanned settlements can be carried out from remote sensing data. Initiated by the author and the Delegate of Urban Affairs, this will be the first time that remote sensing data are used for urban development studies in Cameroon. If this project is successful, many other towns in the country will benefit from the technology. Satellite imagery mapping is fast, and with a 10-meter spatial resolution, buildings, yards, roads, property boundaries, farm fields, and tree stands can be located.
#### 6.1.4 Wildlife Management:
GIS offers powerful modeling tools for wildlife managers. A spatial model used to predict elephant distribution on Mount Cameroon was based on GPS mapping. Wildlife guides of the Mount Cameroon Project were trained to use a GPS receiver. They made many trips up the mountain and within the rainforest on the mountain to identify the wildlife grazing range and sightings (mountain elephants, antelopes, monkeys, chimpanzees, and birds). Most of the rainforests of the equatorial region of Africa are contiguous across the borders of six central African countries (Zaire, Gabon, Equatorial Guinea, Cameroon, Congo, and the Central African Republic) and comprise one third of the range of the African elephant. To draw up a management plan for wildlife on Mount Cameroon, there was a need to map the distribution of wildlife sightings, movement tracks, hunters' camp locations, and caves. Since this information cannot be derived from the Landsat satellite image, GPS mapping was the only technique available at the project office. GPS data were collected periodically by field staff who went to the mountain and the forest. These data described the geographical location of each site in longitude and latitude, followed by a remark or observation. The data were later input into the GIS database on the base map of the project area.
Farms were mapped and plotted on the Bomboko forest reserve map, digitized using the CAMRIS (Computer Aided Mapping and Resource Inventory System) GIS software.
| Way point no. | GPS location (longitude E, latitude N) | Mapping description | Remarks |
|---|---|---|---|
| 091 | 9° 16' E, 4° 12' N | Bonakanda village | This is the highest village located on Mount Cameroon |
| 092 | 9° 12' E, 4° 17' N | Lava track | The lava track width at this point is 800 m |
| 093 | 009° 12' E, 04° 17' N | Lava track | The lava track width at this point is 800 m |
| 094 | 009° 12.637' E, 04° 17.706' N | Lava track | Hunting camp grouping hunters of Bonakanda village |
| 095 | 009° 11' E, 04° 17.621' N | Bonakanda I hunting village | |
| 098 | 009° 11' E, 04° 17.646' N | Elephants | Seven elephants were grazing here |
| 099 | 009° 10' E, 04° 16' N | Lava track (1.5 km) | 1.5 km wide |
| 100 | 009° 10' E, 04° 16' N | Lava track | |
| 101 | 009° 10' E, 04° 16' N | End of lava | |
| 103 | 009° 09' E, 04° 16' N | Bonakanda II hunting village | Six huts and one hut behind in this camp |
| 104 | 009° 09' E, 04° 16' N | Stream (_Maliba ma mussingele_) | An all-season stream with a small waterfall; the only stream flowing on Mount Cameroon |
| 106 | 009° 11' E, 04° 18' N | Bonganjo hunting village | Hunters of Bonganjo village assemble here after hunting |
| 108 | 009° 10' E, 04° 18' N | Elephant track | Area where elephant feeding points were found |
| 109 | 009° 10' E, 04° 18' N | Elephant track | Very recent elephant traces |
| 110 | 009° 10' E, 04° 17' N | Elephants | Two elephants were seen at this point |

Table 3. GPS mapping.
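Waypoints such as those in the table are logged in degrees and decimal minutes; before plotting them on the base map they must be converted to decimal degrees. A minimal sketch (the helper name is illustrative, the sample values are waypoint 094 from the table):

```python
def dm_to_decimal(degrees: int, minutes: float) -> float:
    """Convert a GPS reading in degrees and decimal minutes to decimal degrees."""
    return degrees + minutes / 60.0

# Waypoint 094 from the field survey: 009 12.637 E, 04 17.706 N
lon = dm_to_decimal(9, 12.637)
lat = dm_to_decimal(4, 17.706)
print(round(lon, 5), round(lat, 5))  # roughly 9.21062, 4.2951
```

Once converted, the points can be imported directly into MapInfo or any other GIS as a point layer over the digitized base map.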
One of the herbal preparations produced in Europe to treat prostate problems (benign prostatic hypertrophy and benign prostatic hyperplasia) is made from the bark of the tree _Prunus africana_. _Prunus africana_ has a distribution in Africa and Madagascar limited to Afromontane forest, generally above 1,500-2,000 m altitude. In 1995, 1,116 kg of bark of this medicinal tree was exported from Cameroon by the Groupe Fournier of France, owner of Plantecam in Mutengene, Fako Division, Cameroon. Owing to the high demand for this product, villages around Mount Cameroon are involved in the unsustainable exploitation of the tree bark for commercial purposes. The medical use of the bark extract to treat prostate gland hypertrophy has led to the overexploitation of this plant, which is a cause for concern given the exceptional importance of Afromontane forest for plant, bird, and mammal conservation. For this reason, an inventory of the distribution of _Prunus africana_ on Mount Cameroon is necessary.
Remote sensing and GIS could be used to carry out this inventory. It is also possible using GIS overlays: by mapping the corresponding growing altitudes of 1,500-2,000 m, the possible growing range of the species is delineated, and sample plots randomly selected in the field within this range are extrapolated to the image to obtain an approximate number of _Prunus africana_ stands within the study area.
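The extrapolation step can be illustrated numerically: stems counted on random sample plots inside the 1,500-2,000 m band give a density that is scaled up to the whole band. The plot counts and areas below are invented for illustration:

```python
# Stems of Prunus africana counted on five random 0.1 ha sample plots (illustrative).
plot_counts = [12, 8, 15, 10, 9]
plot_area_ha = 0.1

# Total area of the mapped 1500-2000 m growing band, also illustrative.
band_area_ha = 2500.0

# Density = total stems counted / total area sampled.
density_per_ha = sum(plot_counts) / (len(plot_counts) * plot_area_ha)
estimated_stands = density_per_ha * band_area_ha
print(density_per_ha, int(estimated_stands))  # 108.0 270000
```

In practice the confidence of such an estimate depends on the number and spatial spread of the plots, so the random selection within the growing range matters as much as the arithmetic.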
This method was used by the author to map the degraded parts of the Bomboko forest reserve using a GPS receiver. Hunters and farmers were used as resource persons to locate the last plantation locations in the forest reserves. Hunters were particularly instrumental because they have to go through all the farmland before seeing the wild animals.
#### 6.1.5 Methodology:
For forest resource remote sensing, an image of the area with a good spectral resolution is ordered, and ground truthing is done with many waypoints recorded with a GPS within the areas of high _Prunus africana_ population. These waypoints are downloaded from the GPS to the computer, and points are created according to their geographic locations. The distribution of these points serves to interpret the image, and the areas of _Prunus africana_ can be mapped on-screen from the digital data.
### 6.2 Institutional Setup of GIS in Cameroon
GIS technology in Cameroon is mostly limited to conservation projects and international NGOs, apart from CETELCAF in Yaoundé, which produces maps based on digitizing and updating. Government institutions still lag behind in the application of GIS and remote sensing technology.
In 1998, the author installed the GIS software MapInfo on the computers of the National Advanced School of Public Works, Buea annex. If this is well developed, the University of Buea can also be a beneficiary of this technology. International donors in the field of GIS development in developing countries should assist in the promotion of this technology in Cameroon.
## 7 Conclusion
Data for the GIS database were obtained from existing maps digitized and converted to digital format, from field observation with the GPS, and from remote sensing data available on CD-ROM and as hard copy maps. Remote sensing and GIS technology have contributed to solving a landuse conflict between biodiversity conservation and plantation expansion. This has brought the forest to a discussion table, with individual thematic layers overlaid one after another to show the possible areas of natural resource conservation. The combination of GPS and GIS was the simplest means to illustrate the distribution of hunters' camps within the forest reserve. It also made it possible to show the distribution of the elephant grazing range within the study area. The introduction of landuse planning within the Mount Cameroon region has led to the creation of a provincial landuse planning committee headed by the provincial governor. This committee has to replicate the Mount Cameroon example in the whole province.
### Acknowledgement
I am grateful to all those who assisted in the typing of this paper, and to the Mount Cameroon Project.
A Fully Automatic Forest Parameters Extraction at Single-Tree Level: A Comparison of MLS and TLS Applications
[PERSON], [PERSON], [PERSON], [PERSON]
1 Department of Environment, Land and Infrastructure Engineering, Politecnico di Torino, Corso Duca degli Abruzzi 24, 10129 Torino, Italy - (claudio.spadavecchia, elena.belcore, nives.grassso)@polito.it
Footnote 1: Corresponding author
###### Abstract
Forests are vital for ecological, economic, and social reasons, and adopting sustainable forest management practices is necessary. While traditional forest monitoring techniques provide detailed data, they are time-consuming; conversely, geomatic techniques can provide more detailed data for forest resource management. This study aims to assess the suitability of Mobile Mapping Systems (MMS) with simultaneous localisation and mapping (SLAM) technology for precision forestry purposes in challenging environments. We compared the performance of MMS data with Terrestrial Laser Scanning (TLS) data and evaluated the Forest Structural Complexity Tool (FSCT), which was developed for TLS datasets, on MMS data. The case study area is a highly sloped coniferous forest in the Italian Alps affected by a severe fire in 2017. Data were processed using a fully automated open-source Python tool that detects each tree's position, Diameter at Breast Height (DBH), and height. The validation procedure was conducted with respect to the TLS point cloud manually segmented. The results show that using MMS with SLAM technology is suitable for precision forestry purposes in challenging environments and that FSCT performs well on MMS data.
**Keywords:** Mobile Mapping, Remote Sensing, Forest Parameters Extraction, MLS, SLAM, ITD.
## 1 Introduction
Forests are essential resources for ecological, economic, and social reasons, and their protection and management can benefit from a complete understanding of tree distribution and composition. Forests (i) play a crucial role in regulating the Earth's climate by absorbing carbon dioxide from the atmosphere through photosynthesis; (ii) regulate the Earth's water supply through transpiration; (iii) protect the soil by reducing erosion, preventing landslides, and offering natural protection against rockslides; (iv) provide diverse ecosystems and guarantee biodiversity; (v) provide economic benefits both directly (e.g. timber production) and indirectly (e.g. tourism). These reasons lead to the primary need to adopt a sustainable forest management approach to improve risk prevention, production, protection, and preservation ([PERSON] et al., 2005). For this purpose, deepening and innovating monitoring, data collection, and processing techniques is necessary.
Traditional forest monitoring techniques are considered reliable, provide detailed data on forest conditions, and can be performed by surveyors relatively quickly; however, such methods are often time-consuming and require extensive in-situ work. Traditional monitoring is mainly carried out through visual inspection and the manual collection of data on the field, such as tree density, canopy size, trunk diameter (Diameter at Breast Height, DBH), tree height, health status (through the identification of any presence of disease and insect infestation).
Developing practical tools for forest resource management, such as adopting geomatic techniques and producing innovative cartographic products, is necessary for achieving future sustainable development goals ([PERSON], 2012). The acquisition methodologies can be terrestrial or aerial. While the use of Uncrewed Aerial Systems (UASs) is efficient for rapid data acquisition and 3D modelling with Aerial Laser Scanning (ALS) or through photogrammetric acquisitions, it is also costly and requires careful survey planning and operator expertise; moreover, the models obtained from aerial surveys often do not guarantee a complete description of the lower part of the tree trunks and the undergrowth. This limitation is even stronger in the case of aerial images, whereas for point clouds the penetrating power of laser scanning technology allows the acquisition of points on the lower parts partially covered by the vegetation above. On the other hand, terrestrial acquisitions can be static, commonly referred to as Terrestrial Laser Scanning (TLS), or mobile, using Mobile Laser Scanning (MLS). Static terrestrial scans can reach an accuracy of less than one centimeter, but this is a more expensive technique; moreover, several acquisitions from different observation points are necessary to guarantee a wide distribution of points that fully describes the object under investigation. Mobile laser scanners, instead, facilitate survey activities. They can work without a Global Navigation Satellite System (GNSS), which enables their use in environments without satellite coverage; at the same time, the acquired point cloud has a lower point density, higher noise, and an accuracy at the centimeter level ([PERSON] et al., 2020).
Mobile Mapping Systems (MMS) with simultaneous localisation and mapping (SLAM) technology have been employed in several forestry studies ([PERSON] et al., 2018; [PERSON] et al., 2021; [PERSON] et al., 2018) conducted in different scenarios, and comparisons and performance assessments of several acquisition techniques have also been carried out ([PERSON] et al., 2020). However, to the best of our knowledge, no studies have yet been conducted on the accuracy of MMS in particularly challenging scenarios. Moreover, the forest environment can be difficult to survey with a Mobile Mapping System: in these scenarios, only a few features help the SLAM algorithm improve the alignment ([PERSON] et al., 2021).
Several approaches for automatic Individual Tree Detection (ITD) have been proposed in the literature. These methods have been developed starting from different types of data (aerial images from drones ([PERSON] et al., 2020; [PERSON] et al., 2020; [PERSON] et al., 2019), helicopters, or satellites ([PERSON] et al., 2022; [PERSON] et al., 2018); aerial or terrestrial point clouds ([PERSON] et al., 2021; [PERSON] et al., 2016; [PERSON] et al., 2020)), for different scenarios and forest types (coniferous or deciduous forests), and based on different approaches (point cloud-based ([PERSON] et al., 2021; [PERSON] et al., 2021; [PERSON] et al., 2020) or raster-based ([PERSON] et al., 2014; [PERSON] et al., 2018)). Regarding the most common and recent open-source algorithms for ITD, **Table 1** summarizes the most important ones.
In this study, we performed a forest parameter extraction at the single-tree level using a TLS-borne, fully automated open-source algorithm on SLAM-based MMS data, and compared the results with TLS data. The case study is located in a highly sloped coniferous forest in the Italian Alps, whose extension is approximately 70 hectares. Moreover, the study area has different tree densities, as the upper portion was thinned out just before the forest fire. Data were processed with the innovative, fully automated open-source Python tool FSCT (Forest Structural Complexity Tool) ([PERSON] et al., 2021), developed for high-resolution TLS point clouds. The goals of this contribution are (i) to define whether the use of MMS is suitable for precision forestry purposes in challenging environments; and (ii) to evaluate the performance of the FSCT tool on MMS data.
The MMS acquisition was conducted with a KAARTA Stencil 2, while the reference point cloud was acquired with a Riegl VZ-400 terrestrial laser scanner. The output of the processing of the MMS point cloud was validated with respect to the manually segmented TLS point cloud, assessing the accuracy of the Individual Tree Detection (ITD), of the Diameter at Breast Height (DBH) estimates, and of the tree height (H).
## 2 Materials and Methods
The case study (**Figure 1**) is located in a highly sloped coniferous forest in the north-west Italian Alps, in the municipality of Mompantero (Turin), 45.162344N, 7.037318E, which was affected by a severe fire in 2017 ([PERSON] et al., 2020; [PERSON] et al., 2023). The extension of the area is approximately 70 hectares. The tree vegetation consists almost solely of dense, even-aged _P. sylvestris_ stands, which present different tree densities, as the upper portion was thinned out just before the forest fire.
For the purposes of this study, an integrated sensor system based on SLAM (Simultaneous Localisation and Mapping) technology and capable of efficiently acquiring geospatial data was used. The KAARTA Stencil 2 survey system is an integrated mobile mapping platform; it combines a portable laser scanner with a video camera to automatically generate 3D point clouds. The merits of this platform are its low cost and light weight, which enhance its portability. The technology is versatile and can be adapted for use in any environment, particularly in closed and complex spaces or forested areas with limited or absent satellite visibility. The survey system includes a laser scanner, a data processor, and a camera, allowing for accurate data acquisition while in motion.
The integrated laser scanner has a maximum range of 100 meters, a horizontal and vertical field of view of 360° and 30°, respectively, and an accuracy of ±30 mm. The feature tracker acquires images at a resolution of 640x360 pixels and a frame rate of 50 Hz. Using LiDAR and IMU data through an odometry and mapping algorithm, the system can produce real-time 3D maps of the surveyed environment. Furthermore, the SLAM algorithm leverages the acquired images to solve the localisation problem, optimise the estimated trajectory, and produce a 3D point cloud of the examined area. The KAARTA Stencil 2 has a tool specifically designed for post-processing acquired data, which can be done at a slower speed than the original acquisition speed. This can help improve point cloud registration in cases where real-time acquisition may have failed. The software also allows configuration parameter modifications to adapt to specific survey environments. The instrument also includes a Loop Closure tool, which uses various functions to improve scan registration and trajectory estimation coherence, correcting global drift errors by matching trajectory paths and enforcing overlap between the initial and ending points.
To evaluate the accuracy of the MMS outcome, five LiDAR scans were acquired with the high-performance terrestrial laser scanner RIEGL VZ-400i. This time-of-flight laser scanner can capture information up to 800 meters. Thanks to its ability to record multiple echoes, it is particularly suitable for use in forest environments, as it can penetrate through the vegetation. The scans were then registered and georeferenced using reflective markers whose position was previously measured through the support of a topographic network.
Figure 1: Study area (EPSG: 32632).
| Tool | Data type | Approach | Language |
|---|---|---|---|
| Tree detection ([PERSON] et al., 2021) | Airborne hyperspectral images and LiDAR data | Raster-based | Python |
| ([PERSON] et al., 2018) | LiDAR point clouds | Raster-based | Python |
| FSCT ([PERSON] et al., 2021) | LiDAR (sensor-agnostic) | Point-based | Python |
| ([PERSON] et al., 2016) | LiDAR point clouds | Point-based | C++ |
| Individual Tree Extraction ([PERSON] et al., 2021) | MLS point clouds | Point-based | Python |
| Treeseg ([PERSON] et al., 2018) | LiDAR point clouds | Point-based | C++ |

Table 1: Open-source algorithms developed for Individual Tree Detection.
The workflow of the paper is illustrated in **Figure 2**. Section 3.1 describes the operations adopted during the data acquisition phase; Section 3.2 elaborates on the pre-processing procedures of the point clouds; in Section 3.3, the central data processing is explored; finally, the strategies adopted to validate the results are described in Section 3.4.
### Data collection and pre-processing
The surveyor followed a closed acquisition path to ensure comprehensive coverage of the area under investigation. The acquisition process began upstream of the area and followed a winding path downstream until reaching the lowest point. The operator then retraced his steps, intersecting the outward route multiple times until reaching the starting point. The acquisition took approximately 13 minutes, covering a trajectory of around 550 meters. Specific configuration parameters optimised for vegetated environments were used for the data acquisition. These settings include values for the voxel size (0.4), the point cloud resolution in the map file, the point cloud resolution for scan matching and display (cornerVoxelSize equal to 0.4 m, surfVoxelSize equal to 0.8 m, surroundingVoxelSize equal to 0.6 m), the minimum point-to-point distance for mapping (1 m), and no restrictions on the planarity of motion.
Data were then post-processed with the specific tool, simulating a lower acquisition speed according to an adaptive procedure considering the registration's reliability. The Loop Closure tool optimised the result and ensured that the initial and ending points overlapped. This process took approximately 40 minutes for the first phase and an additional 30 minutes to optimise the result, ultimately generating a point cloud of roughly 160 million points. The resulting point cloud and trajectory are shown in **Figure 3**. A preliminary registration was carried out using the Iterative Closest Point (ICP) algorithm ([PERSON] et al., 2020) available in the 3D Reshaper software, based on pairs of equivalent points identified in both the point cloud and the reference model; this procedure allows the models from mobile mapping tools to be harmonised with the reference data.
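The ICP registration mentioned above works by repeatedly matching each point of the mobile-mapping cloud to its nearest reference point and solving for the rigid transform that minimises the residuals. A minimal single-iteration sketch with NumPy and SciPy (an illustrative toy, not the 3D Reshaper implementation):

```python
import numpy as np
from scipy.spatial import cKDTree

def best_fit_transform(A, B):
    """Rigid transform (R, t) minimising sum ||R @ a + t - b||^2 over given pairs (Kabsch)."""
    a_c, b_c = A.mean(axis=0), B.mean(axis=0)
    H = (A - a_c).T @ (B - b_c)          # cross-covariance of the centred sets
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = b_c - R @ a_c
    return R, t

def icp_step(source, reference):
    """One ICP iteration: nearest-neighbour matching, then best-fit rigid transform."""
    matched = reference[cKDTree(reference).query(source)[1]]
    R, t = best_fit_transform(source, matched)
    return (R @ source.T).T + t

# Toy check with known correspondences: a rotated + translated copy is recovered exactly.
rng = np.random.default_rng(0)
A = rng.random((50, 3))
th = np.deg2rad(30.0)
Rz = np.array([[np.cos(th), -np.sin(th), 0.0],
               [np.sin(th),  np.cos(th), 0.0],
               [0.0,         0.0,        1.0]])
t0 = np.array([0.3, -0.1, 0.2])
B = (Rz @ A.T).T + t0
R, t = best_fit_transform(A, B)
print(np.allclose(R, Rz), np.allclose(t, t0))  # True True
```

Production implementations iterate `icp_step` until the residual stops decreasing and typically add outlier rejection, which matters with vegetation-induced noise.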
The reference TLS point cloud was acquired with a resolution of one point every 6 millimetres at a distance of 10 meters, and it consists of approximately 750 million points.
### Data analysis
The KAARTA point cloud thus obtained was subsequently processed with the open-source point cloud processing algorithm FSCT (Forest Structural Complexity Tool) ([PERSON] et al., 2021). The algorithm was developed using a database of terrestrial laser-scanned point clouds; still, it is declared effective with any type of forest point cloud regardless of the data acquisition methodology, as long as the cloud is characterised by a high point density. In this study, it was decided to use the FSCT tool (previously introduced in **Table 1**) as it is one of the most recent and increasingly popular algorithms; moreover, some studies have already used it to process forest data at the single-tree level with encouraging results ([PERSON] et al., 2023).
The FSCT processing algorithm performs a semantic segmentation of the point cloud with a deep learning technique based on the PointNet++ architecture; subsequently, the points describing the terrain are used to create a digital terrain model (DTM), which is used to filter the point cloud after segmentation. Then, the point cloud is subdivided into slices that are clustered using hierarchical density-based spatial clustering to detect stems and branches, whose points are fitted with cylinders. In the end, the cylinder measurements are sorted into individual trees. Please refer to the reference article ([PERSON] et al., 2021) and the GitHub repository ([https://github.com/SKrisanski/FSCT](https://github.com/SKrisanski/FSCT)) for a more detailed description of the method. This algorithm provides several outputs in addition to the Individual Tree Detection; specifically, we mainly focused on the position, the DBH, and the height of each tree.
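The diameter-from-fitting step can be illustrated with a minimal least-squares circle fit on a horizontal trunk slice. This is a sketch only: the algebraic (Kåsa) fit and all names below are our choice for illustration, not the exact cylinder-fitting routine used inside FSCT.

```python
import numpy as np

def fit_circle_kasa(xy):
    """Algebraic (Kasa) least-squares circle fit to 2D points.

    Solves x^2 + y^2 + a*x + b*y + c = 0 for (a, b, c), then recovers
    the centre (-a/2, -b/2) and the radius.
    """
    x, y = xy[:, 0], xy[:, 1]
    A = np.column_stack([x, y, np.ones_like(x)])
    rhs = -(x**2 + y**2)
    (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    cx, cy = -a / 2.0, -b / 2.0
    r = np.sqrt(cx**2 + cy**2 - c)
    return cx, cy, r

# Synthetic trunk slice: points on a 0.10 m-radius circle (DBH = 0.20 m),
# centred at an arbitrary (hypothetical) stem position.
theta = np.linspace(0, 2 * np.pi, 200, endpoint=False)
slice_xy = np.column_stack([0.10 * np.cos(theta) + 5.0,
                            0.10 * np.sin(theta) - 2.0])
cx, cy, r = fit_circle_kasa(slice_xy)
dbh = 2.0 * r
```

In practice the fit would be run on the trunk points falling in the 1.1 m to 1.5 m height band, as described in the validation workflow below.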
The high computational demand necessary for the execution of the algorithm made it necessary to divide the area into five sub-areas, which were subsequently merged again. During the subdivision phase, particular attention was paid to selecting the areas so that the trees on the edge were entirely considered in one of the two areas.
### Data validation
In order to evaluate the performance and accuracy resulting from a forest survey using a mobile mapping system, the point cloud acquired with the KAARTA Stencil 2 system was compared with the reference point cloud. The two products were compared by calculating the Euclidean distance between the points using the 3D data management software 3D Reshaper. The analysis was conducted both on the entire point cloud and a limited portion, filtered to eliminate the points related to the undergrowth and highlight the accuracies on individual trees.
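The cloud-to-cloud comparison amounts to a nearest-neighbour search over the reference cloud. A minimal sketch using SciPy's `cKDTree` is given below; the tool choice and the synthetic clouds are ours for illustration (the study used 3D Reshaper), and the percentile step mirrors how the "70% / 85% of points" statistics reported later can be computed.

```python
import numpy as np
from scipy.spatial import cKDTree

def cloud_to_cloud_distances(eval_pts, ref_pts):
    """Euclidean (C2C) distance from each evaluated point to its
    nearest neighbour in the reference cloud."""
    tree = cKDTree(ref_pts)
    d, _ = tree.query(eval_pts, k=1)
    return d

rng = np.random.default_rng(0)
ref = rng.uniform(0, 10, size=(5000, 3))             # stand-in for the TLS cloud
noisy = ref[:2000] + rng.normal(0, 0.02, (2000, 3))  # stand-in for the MLS cloud

d = cloud_to_cloud_distances(noisy, ref)
p70, p85 = np.percentile(d, [70, 85])  # e.g. "70% of points within X cm"
```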
Data validation of the analysis procedure was performed with respect to the TLS point cloud, which has been manually processed according to this workflow: single trees were manually individuated and segmented by visual interpretation of the point cloud; the height of the tree (expressed in terms of the elevation of the ground) was obtained by normalising the point cloud with respect to the elevation of the terrain obtained from the DTM; the single trees were then imported into the commercial 3D Reshaper software and the DBHs were measured using a circular fitting of the cloud points between 1.10 m and 1.50 m. Although the DBH
Figure 3: Point cloud acquired with the KAARTA Stencil 2 (on the right) and trajectory covered (on the left).
Figure 2: Workflow of the paper.
is traditionally calculated at a height of 1.3 m from the ground, it was decided to consider the above-mentioned portion of the trunk to include a greater quantity of points and perform a circle fitting with greater reliability. The reference trees were compared with those automatically identified using the FSCT tool. The validation of the point cloud segmentation at the single-tree level was performed with respect to the tree position coordinates, carrying out a spatial search for each point and matching it with the closest reference point within a pre-set search radius. Each matched tree's DBH and height values were compared, and the RMSE values were calculated.
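The matching-and-RMSE step just described can be sketched as follows. The function names, the 1 m search radius, and the toy tree positions and DBH values are hypothetical, chosen only to make the procedure concrete.

```python
import numpy as np
from scipy.spatial import cKDTree

def match_trees(ref_xy, det_xy, radius=1.0):
    """Match each detected tree to the nearest reference tree within
    `radius` metres; returns (ref_index, det_index) pairs."""
    tree = cKDTree(ref_xy)
    d, idx = tree.query(det_xy, k=1, distance_upper_bound=radius)
    # Misses are reported with d = inf, so filter them out.
    return [(int(i), j) for j, (i, dj) in enumerate(zip(idx, d))
            if np.isfinite(dj)]

def rmse(a, b):
    a, b = np.asarray(a, float), np.asarray(b, float)
    return float(np.sqrt(np.mean((a - b) ** 2)))

# Toy example: 3 reference trees, 2 detected trees.
ref_xy = np.array([[0.0, 0.0], [5.0, 5.0], [10.0, 0.0]])
det_xy = np.array([[0.3, -0.2], [5.4, 4.8]])
pairs = match_trees(ref_xy, det_xy, radius=1.0)

ref_dbh = [0.20, 0.30, 0.25]
det_dbh = [0.22, 0.27]
dbh_rmse = rmse([ref_dbh[i] for i, _ in pairs],
                [det_dbh[j] for _, j in pairs])
```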
## 3 Results
### Data acquisition
**Figure 4** shows the comparison results between the KAARTA cloud and the reference TLS cloud: 70% of the points have an accuracy better than 6.3 cm, and 85% better than 12.5 cm. The figure also shows that the points with the lowest accuracy are mainly located in the area with the highest forest density.
### ITD and forest parameters
**Figure 5** shows the segmented MLS point cloud. Following validation, 86.5% of the trees are correctly identified and segmented (166 trees out of 192).
Comparing the automatically estimated values with those obtained from the reference point cloud, an RMSE of 6.5 cm for the DBH and an RMSE of 3.66 m for the height are obtained (**Table 2**).
## 4 Discussion
The comparison between the KAARTA cloud and the reference cloud acquired with the terrestrial laser scanner highlights the strengths and weaknesses of a mobile mapping system applied in forest environments. The limited time and effort needed to carry out the survey, and the type of post-processing required, mean that acquisitions of this type can be widely used in any field, even more so in complex scenarios that would otherwise demand more operator experience. The automation of the procedure for identifying forest parameters at the single-tree level is nowadays a necessary and fundamental step to make the best use of laser scanner data in an effective and efficient manner.
Although the advent of LiDAR technology has led to greater use of point clouds in various research areas such as the one addressed in this study, we must not forget that radiometric information, on the other hand, provides colour information about objects and can be used to detect trees based on their visual appearance. Machine learning algorithms, such as convolutional neural networks (CNNs), can be trained on RGB images to detect trees based on features such as colour, texture, and shape. The KAARTA Stencil 2 does not save images (they are only used to support the SLAM algorithm), but other commercial solutions can combine point clouds and image data to provide more robust and accurate results for individual tree detection.
The SLAM algorithms integrated into the KAARTA mobile mapping system allow acquisitions to be made in environments where the GPS signal is poor or absent, still obtaining a good point cloud registration; this is subsequently further improved through post-processing based on the Loop Closure tool and on the ICP algorithm. On the contrary, the terrestrial laser scanner requires markers or recognisable points for the registration of the clouds. This makes this type of acquisition less immediate and more time-consuming in the in-situ survey phase.
The FSCT algorithm is one of the most recent open-source methods developed to automatically process high-density forest point clouds. Even in a complex scenario like the one analysed in this study, the results are outstanding; out of 192 trees, only 26 were not identified (13.5%). From a comparison and visual interpretation of the reference data with the positions of the unidentified trees, it is observed that they are mainly located in the lower part of the study area, where no thinning was done and the forest density is higher. Regardless of the data typology used to generate the cloud, it is widely known that trees of different ages and heights present more significant difficulties in the segmentation phase due to multilayering. To this is added the problem, intensely discussed in the literature, related to the intersection of tree crowns when trees are located at short distances, or in the presence of undergrowth or trees of lower height overhung by taller trees. On these aspects, it would be necessary to compare the methodology applied in this study with other algorithms, to try to identify an approach that is generic and valid in different forest conditions.
Regarding correctly identified trees, the FSCT algorithm achieves RMSE values in line with the reference measurements for height (about 3 meters) and diameter (6 centimetres), which represent 37% of the mean height value and 26% of the mean DBH value, respectively. Concerning the height, 37% is a high but expected value because the top of the tree is described in a limited way due to the acquisition range of the KAARTA. This problem is discussed in the literature and is mainly solved by integrating aerial data (drone surveys). Moreover, even the TLS reference data must be interpreted with caution: since it is a terrestrial acquisition, we cannot be sure that the identified treetop is actually that of the tree without visual interpretation, as we have neither aerial nor traditionally collected data (i.e., hypsometer measurements). It should also be considered that the mapped specimens have a high crown insertion, which is also scattered due to the passage of fire. In general, the KAARTA data tend to underestimate the reference heights. Regarding the DBH, the average error is smaller, and 6 cm can be considered acceptable in the forestry field; in fact, most trees have a DBH between 15 cm and 34 cm. Another aspect to consider is that, working on layers and cylinder fittings, the point cloud must be uniformly dense along the vertical development. Consequently, there should be sufficient points on all sides of the trunk to represent an adequate arc of the circumference for the fitting. Therefore, the acquisition was made with a crossed path to collect as much information as possible for each trunk, and the cylinder fitting is performed on a 0.4 m portion of the trunk (1.1 m-1.5 m). Similarly to what was discussed for the height, in situ DBH measured with traditional techniques is also not available.
The cost of KAARTA's ease of use is paid in point cloud accuracy. Despite the post-processing, which significantly improves the final result, the accuracy of the final output is affected by an error which, as described in subsection 3.1, is less than 6.3 cm for 70% of the points. From this point of view, it should be emphasised that this type of error is also linked to the fact that the two acquisitions are not exactly contemporary and that the thinnest branches and leaves are subject to the effect of the wind, which modifies their position. Another problem with SLAM systems is the incorrect registration of clouds, particularly in natural and highly vegetated areas such as the study area: even if the data is complete, it may not be correctly modelled by the algorithm. However, thanks to the loop closure and ICP algorithms, no significant misalignments were found in this application.
The forest typology (species density and morphology) influences the data quality, whether TLS or MMS. Specifically, it must be noticed that the density of trees in the upper part of the study area is low due to thinning in previous years, and the undergrowth is sparse due to the 2017 fire, making manual segmentation activities easier than in the lower part of the area. In the latter, the density is higher and composed of more layers, making manual operations uncertain. These values align with the results obtained
\begin{table}
\begin{tabular}{|c|c|c|} \hline Evaluated parameters & MLS & TLS \\ \hline Time of survey & Reduced times & More extended times \\ \hline Ease of survey & Higher & Lower \\ \hline Pre-processing & Loop Closure and ICP & Topographic survey and registration \\ \hline Accuracy of the point cloud & Lower & Higher \\ \hline Accuracy of the forest parameters & Lower & Higher \\ \hline \end{tabular}
\end{table}
Table 3: Comparison between MLS and TLS point clouds.
from the automatic procedures described in the literature and comply with the accuracies required in the forestry sector. **Table 3** summarises the pros and cons of MLS and TLS point clouds.
From this first application of the FSCT algorithm for ITD and forest parameter estimation, the algorithm performs outstandingly despite being developed for denser and different types of data, while here applied to a sparser MMS point cloud. This analysis is a first step of a more extensive study. More rigorous and in-depth investigations are needed to compare different segmentation and parameter estimation methods in different scenarios based on various data.
## 5 Conclusions
Forest monitoring is a highly debated topic of fundamental importance worldwide. However, traditional techniques are time-consuming and do not allow an efficient estimation of forest parameters on a large scale. This study addresses the use of mobile mapping systems in forestry environments for purposes related to precision forestry in an automatic way. In particular, the validity of MMS systems in a challenging environment, with high slope and heterogeneous density, was investigated.
The point cloud resulting from the acquisition was processed through the FSCT open-source tool, which performs Individual Tree Detection and the estimation of forest parameters at the single-tree level, and the results were compared with a manually processed TLS point cloud. Results are promising; the ITD procedure is performed with a success rate of 86.5%. Moreover, the values of the root-mean-square deviation on the height and on the DBH confirm that the use of Mobile Mapping Systems assisted by automatic data processing can be considered an efficient, innovative, time-saving approach for monitoring forests.
Nevertheless, further tests are needed to investigate the accuracy of other forest parameters (e.g. biomass) and the quality of the processing algorithm. Moreover, further investigation is needed regarding the detection of the tree over its complete vertical extension, and particularly of the treetop, using terrestrial instruments.
## References
* [PERSON] et al. (2020) [PERSON], [PERSON], [PERSON], [PERSON], & [PERSON] (2020). Individual Tree Detection from UAV Imagery Using Hölder Exponent. _Remote Sensing_, _12_(15), 2407. https://doi.org/10.3390/rs12152407
* [PERSON] et al. (2018) [PERSON], [PERSON], & [PERSON] (2018). Extracting individual trees from lidar point clouds using _treeseg. Methods in Ecology and Evolution_, 2041-210X.13121. [[https://doi.org/10.1111/2041-210X.13121](https://doi.org/10.1111/2041-210X.13121)]([https://doi.org/10.1111/2041-210X.13121](https://doi.org/10.1111/2041-210X.13121))
* ICCSA 2020_ (Vol. 12252, pp. 829-844). Springer International Publishing. [[https://doi.org/10.1007/978-3-030-58811-3_59](https://doi.org/10.1007/978-3-030-58811-3_59)]([https://doi.org/10.1007/978-3-030-58811-3_59](https://doi.org/10.1007/978-3-030-58811-3_59))
* [PERSON] et al. (2020) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], & [PERSON] (2020). Individual tree detection and species classification of Amazonian palms using UAV images and deep learning. _Forest Ecology and Management_, _475_, 118397. https://doi.org/10.1016/j.foreco.2020.118397
* [PERSON] et al. (2021) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], & [PERSON] (2021). Individual Tree Extraction from Terrestrial LiDAR Point Clouds Based on Transfer Learning and Gaussian Mixture Model Separation. _Remote Sensing_, _13_(2), 223. [[https://doi.org/10.3390/rs13020223](https://doi.org/10.3390/rs13020223)]([https://doi.org/10.3390/rs13020223](https://doi.org/10.3390/rs13020223))
* [PERSON] et al. (2020) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], & [PERSON] [PERSON] (2020). Comparison of Backpack, Handheld, Under-Canopy UAV, and Above-Canopy UAV Laser Scanning for Field Reference Data Collection in Boreal Forests. _Remote Sensing_, _12_(20), 3327. [[https://doi.org/10.3390/rs12203327](https://doi.org/10.3390/rs12203327)]([https://doi.org/10.3390/rs12203327](https://doi.org/10.3390/rs12203327))
* [PERSON] et al. (2021) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], & [PERSON] (2021). Forest Structural Complexity Tool--An Open Source, Fully-Automated Tool for Measuring Forest Point Clouds. _Remote Sensing_, _13_(22), 4677. [[https://doi.org/10.3390/rs13224677](https://doi.org/10.3390/rs13224677)]([https://doi.org/10.3390/rs13224677](https://doi.org/10.3390/rs13224677))
* [PERSON] et al. (2022) [PERSON], [PERSON], [PERSON], & [PERSON] (2022). Deep learning-based individual tree crown delineation in mangrove forests using very-high-resolution satellite imagery. _ISPRS Journal of Photogrammetry and Remote Sensing_, _189_, 220-235. https://doi.org/10.1016/j.isprsjprs.2022.05.002
* [PERSON] et al. (2021) [PERSON], [PERSON], & [PERSON] (2021). A Density-Based Algorithm for the Detection of Individual Trees from LiDAR Data. _Remote Sensing_, _13_(2), 322. [[https://doi.org/10.3390/rs13020322](https://doi.org/10.3390/rs13020322)]([https://doi.org/10.3390/rs13020322](https://doi.org/10.3390/rs13020322))
* [PERSON] et al. (2020) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], & [PERSON] (2020). Evaluation of the ICP Algorithm in 3D Point Cloud Registration. _IEEE Access_, \(8\), 68030-68048. [[https://doi.org/10.1109/ACCESS.2020.2986470](https://doi.org/10.1109/ACCESS.2020.2986470)]([https://doi.org/10.1109/ACCESS.2020.2986470](https://doi.org/10.1109/ACCESS.2020.2986470))
* [PERSON] et al. (2018) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], & [PERSON], [PERSON] (2018). In-situ measurements from mobile platforms: An emerging approach to address the old challenges associated with forest inventories. _ISPRS Journal of Photogrammetry and Remote Sensing_, _143_, 97-107. https://doi.org/10.1016/j.isprsjprs.2018.04.019
* [PERSON] et al. (2021) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], & [PERSON] (2021). Individual tree extraction from urban mobile laser scanning point clouds using deep pointwise direction embedding. _ISPRS Journal of Photogrammetry and Remote Sensing_, _175_, 326-339. https://doi.org/10.1016/j.isprsjprs.2021.03.002
* [PERSON] et al. (2020) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], & [PERSON] (2020). Individual Tree Crown Segmentation of a Larch Plantation Using Airborne Laser Scanning Data Based on Region Growing and Canopy Morphology Features. _Remote Sensing_, _12_(7), 1078. [[https://doi.org/10.3390/rs12071078](https://doi.org/10.3390/rs12071078)]([https://doi.org/10.3390/rs12071078](https://doi.org/10.3390/rs12071078))
* [PERSON] et al. (2021) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], & [PERSON], [PERSON] (2021). Tree species classification from airborne hyperspectral and LiDAR data using 3D convolutional neural networks. _Remote Sensing of Environment_, _256_, 112322. [[https://doi.org/10.1016j.rse.2021.112322](https://doi.org/10.1016j.rse.2021.112322)]([https://doi.org/10.1016j.rse.2021.112322](https://doi.org/10.1016j.rse.2021.112322))
* [PERSON] et al. (2021) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], & [PERSON] (2021). Novel low-cost mobile mapping systems for forest inventories as terrestrial laser scanning alternatives. _International Journal of Applied Earth Observation and Geoinformation_, _104_, 102512. https://doi.org/10.1016/j.jag.2021.102512
* [PERSON] et al. (2018) [PERSON], [PERSON], & [PERSON] (2018). Mapping forests using an unmanned ground vehicle with 3D LiDAR and graph-SLAM. _Computers and Electronics in Agriculture, 145_, 217-225. [[https://doi.org/10.1016/j.compag.2017.12.034](https://doi.org/10.1016/j.compag.2017.12.034)]([https://doi.org/10.1016/j.compag.2017.12.034](https://doi.org/10.1016/j.compag.2017.12.034))
* [PERSON] (2012) [PERSON] (2012). Laser Scanner Applications in Forest and Environmental Sciences. _Italian Journal of Remote Sensing_, 109-123. [[https://doi.org/10.5721/IURS20124419](https://doi.org/10.5721/IURS20124419)]([https://doi.org/10.5721/IURS20124419](https://doi.org/10.5721/IURS20124419))
* [PERSON] et al. (2019) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], & [PERSON] (2019). Assessment of CNN-Based Methods for Individual Tree Detection on Images Captured by RGB Cameras Attached to UAVs. _Sensors, 19_(16), 3595. [[https://doi.org/10.3390/s19163595](https://doi.org/10.3390/s19163595)]([https://doi.org/10.3390/s19163595](https://doi.org/10.3390/s19163595))
* [PERSON] et al. (2016) [PERSON], [PERSON], [PERSON], & [PERSON] (2016). Bottom-up delineation of individual trees from full-waveform airborne laser scans in a structurally complex eucalypt forest. _Remote Sensing of Environment_, _173_, 69-83. https://doi.org/10.1016/j.rse.2015.11.008
* [PERSON] et al. (2005) [PERSON], [PERSON], & [PERSON] (2005). Sustainable forest management: Global trends and opportunities. _Forest Policy and Economics_, _7_(4), 551-561. https://doi.org/10.1016/j.forpol.2003.09.003
* A Mixed Forests Showcase in Spain. _Remote Sensing_, _15_(5), 1169. [[https://doi.org/10.3390/rs15051169](https://doi.org/10.3390/rs15051169)]([https://doi.org/10.3390/rs15051169](https://doi.org/10.3390/rs15051169))
* [PERSON] et al. (2023) [PERSON], [PERSON] [PERSON], [PERSON] [PERSON], & [PERSON] [PERSON] (2023). Mapping Post-fire Monthly Erosion Rates at the Catchment Scale Using Empirical Models Implemented in GIS. A Case Study in Northern Italy. In [PERSON], [PERSON], [PERSON], [PERSON], & [PERSON] (Eds.), _Progress in Landsible Research and Technology, Volume I Issue 1_, _2022_ (pp. 99-112). Springer International Publishing. [[https://doi.org/10.1007/978-3-031-16898-7_6](https://doi.org/10.1007/978-3-031-16898-7_6)]([https://doi.org/10.1007/978-3-031-16898-7_6](https://doi.org/10.1007/978-3-031-16898-7_6))
* [PERSON] et al. (2018) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], & [PERSON], [PERSON] (2018). Individual tree crown delineation in a highly diverse tropical forest using very high resolution satellite images. _ISPRS Journal of Photogrammetry and Remote Sensing_, _145_, 362-377. https://doi.org/10.1016/j.isprsjprs.2018.09.013
* [PERSON] et al. (2016) [PERSON], [PERSON], [PERSON], [PERSON], & [PERSON] (2016). Automatic Forest Mapping at Individual Tree Levels from Terrestrial Laser Scanning Point Clouds with a Hierarchical Minimum Cut Method. _Remote Sensing_, _8_(5), 372. [[https://doi.org/10.3390/rs8050372](https://doi.org/10.3390/rs8050372)]([https://doi.org/10.3390/rs8050372](https://doi.org/10.3390/rs8050372))
* [PERSON] et al. (2020) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], & [PERSON] (2020). An Individual Tree Segmentation Method Based on Watershed Algorithm and Three-Dimensional Spatial Distribution Analysis From Airborne LiDAR Point Clouds. _IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 13_, 1055-1067. [[https://doi.org/10.1109/JSTARS.2020.2979369](https://doi.org/10.1109/JSTARS.2020.2979369)]([https://doi.org/10.1109/JSTARS.2020.2979369](https://doi.org/10.1109/JSTARS.2020.2979369))
* [PERSON] et al. (2014) [PERSON], [PERSON], [PERSON], & [PERSON] (2014). Individual tree segmentation over large areas using airborne LiDAR point cloud and very high resolution optical imagery. _2014 IEEE Geoscience and Remote Sensing Symposium_, 800-803. [[https://doi.org/10.1109/IGARSS.2014.6946545](https://doi.org/10.1109/IGARSS.2014.6946545)]([https://doi.org/10.1109/IGARSS.2014.6946545](https://doi.org/10.1109/IGARSS.2014.6946545))
* [PERSON] et al. (2018) [PERSON], [PERSON], [PERSON], & [PERSON] (2018). _PyCrown--Fast raster-based individual tree segmentation for LiDAR data_. Landcare Research NZ Ltd. [[https://doi.org/10.7931/MOSR-DN55](https://doi.org/10.7931/MOSR-DN55)]([https://doi.org/10.7931/MOSR-DN55](https://doi.org/10.7931/MOSR-DN55))
isprs
A FULLY AUTOMATIC FOREST PARAMETERS EXTRACTION AT SINGLE-TREE LEVEL: A COMPARISON OF MLS AND TLS APPLICATIONS
C. Spadavecchia, E. Belcore, N. Grasso, M. Piras
https://doi.org/10.5194/isprs-archives-xlviii-1-w1-2023-457-2023
2023
CC-BY
# Evaluating Stereo DTM Quality at Jezero Crater, Mars with HRSC, CTX, and HiRISE Images
[PERSON]
Corresponding author
[PERSON]
Astrogeology Science Center, U.S. Geological Survey, 2255 N. Gemini Dr., Flagstaff AZ 86001 ([EMAIL_ADDRESS])
[PERSON]
Astrogeology Science Center, U.S. Geological Survey, 2255 N. Gemini Dr., Flagstaff AZ 86001 ([EMAIL_ADDRESS])
[PERSON]
Astrogeology Science Center, U.S. Geological Survey, 2255 N. Gemini Dr., Flagstaff AZ 86001 ([EMAIL_ADDRESS])
E. Smith
Astrogeology Science Center, U.S. Geological Survey, 2255 N. Gemini Dr., Flagstaff AZ 86001 ([EMAIL_ADDRESS])
[PERSON]
Astrogeology Science Center, U.S. Geological Survey, 2255 N. Gemini Dr., Flagstaff AZ 86001 ([EMAIL_ADDRESS])
[PERSON] Hare
Astrogeology Science Center, U.S. Geological Survey, 2255 N. Gemini Dr., Flagstaff AZ 86001 ([EMAIL_ADDRESS])
K. Gwinner
Astrogeology Science Center, U.S. Geological Survey, 2255 N. Gemini Dr., Flagstaff AZ 86001 ([EMAIL_ADDRESS])
included more rugged (and scientifically interesting) adjacent terrain as well. DTMs for the _Curiosity_ landing site in Gale crater ([PERSON] et al., 2012) extended onto the very rugged flank of Aeolis Mons. We previously used these data to investigate the resolution and precision of HRSC DTMs made at both the German Aerospace Center (DLR) and the U.S. Geological Survey (USGS) ([PERSON] et al., 2011, 2017, 2018). Unfortunately, as summarized below, the process by which the Gale data were collected led to concerns about both the size of the usable study area and the accuracy with which the reference data could be registered to the target DTMs. In this paper, we apply the same method of analysis to the _Perseverance_ landing site in Jezero crater ([PERSON] et al., 2019, 2020), where the coverage of suitable terrain is larger and all data have been registered to known, high accuracy. Our results based on HiRISE reference data are corroborated by independent approaches to estimating resolution similar to those of [PERSON] et al. (2007).
## 2 Source Data
### Gale
Mapping of Gale crater eventually included more than a dozen HiRISE stereopairs at 25 cm/pixel, covering the full landing ellipse and a substantial area of Aeolis Mons (also known informally as "Mount Sharp"; [PERSON] et al., 2012). The first of these pairs containing rugged terrain was designated Traverse 1 (or T1) and consists of images PSP_009149_1750 and PSP_009249_1750. A 15 x 6.5 km study area (latitude -4.92\({}^{\circ}\) to -4.67\({}^{\circ}\), longitude 137.35\({}^{\circ}\) to 137.46\({}^{\circ}\)E) within this DTM was used by [PERSON] et al. (2011) to assess an early multi-orbit HRSC DTM mosaic from DLR ([PERSON] et al., 2010a). Subsequent comparisons ([PERSON] et al., 2017, 2018) used the same HiRISE data and DTMs produced by DLR and USGS from images h4235_0001_xx2 (xx = nd2, s12, s22), which have superior signal to noise ratio (SNR). The Level 2 (radiometrically calibrated) images are available from the NASA Planetary Data System (PDS). The DLR DTM is the standard Level 4 single-strip controlled DTM h4235_0001_d4, also in the PDS.
### Jezero
To support landing site selection, planning, and onboard navigation during landing for Mars 2020, the USGS produced DTMs from multiple HiRISE and CTX stereopairs, then co-registered them and made DTM mosaics as summarized below ([PERSON] et al., 2019; 2020). We used the mosaics rather than individual DTMs for this paper. The study area is defined by the HiRISE coverage, centered on the Jezero delta near latitude 18.49\({}^{\circ}\)N, longitude 77.41\({}^{\circ}\)E. The data cover about 290 km\({}^{2}\) within a 20 x 20 km region, five times the area studied at Gale. The HRSC product from DLR is an unreleased multi-orbit mosaic prepared for the Mars 2020 project, based on a subset of the available HRSC coverage for quadrangle MC-13E. A Level 5 (multi-orbit) product covering the entire quadrangle according to the standards described in [PERSON] et al. (2016) is being prepared for PDS release. Level 2 images h5270_0000_xx2 were used to produce the USGS DTM as described below.
## 3 Mapping Methodologies
### DLR: HRSC Team Pipeline
The HRSC processing pipeline used at DLR is entirely automated and designed to take full advantage of the multi-line scanner geometry of the images. Products are controlled to MOLA by a sequential photogrammetric adjustment ([PERSON] et al., 2010b). The Gale DTM is a Level 4 product, derived from a single-orbit image set as described by [PERSON] et al. (2009). Production of multi-orbit Level 5 products such as that used at Jezero is described by [PERSON] et al. (2016). Dense image matching is described in detail by [PERSON] et al. (2009). The images are filtered to reduce noise and compression artefacts, and orthorectified to reduce scale errors and parallax distortions. Area-based matching is applied, consisting of normalized cross-correlation followed by sub-pixel refinement by adaptive least squares. Matching is performed at a "pyramid" of resolution levels, because image quality usually varies within a single image strip (hundreds of kilometers long) for HRSC, depending on atmospheric and illumination conditions. Points from different resolution levels are filtered separately, then combined by weighted interpolation. This procedure improves matching performance in areas of poorer image quality and thus reduces the occurrence of matching gaps, at the price of a small reduction of point precision in areas with higher image quality. Ground coordinates are computed by multi-ray intersection based on all available images, which provides information to eliminate bad matching results.
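The area-based matching step can be illustrated with a single-level normalized cross-correlation (NCC) search. This is a deliberately minimal sketch: the real pipeline adds pyramid levels and adaptive least-squares sub-pixel refinement, and all names and the synthetic images below are ours.

```python
import numpy as np

def ncc(a, b):
    """Normalised cross-correlation of two equally sized patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def match_patch(template, search, step=1):
    """Slide `template` over `search`; return the offset with highest NCC."""
    th, tw = template.shape
    best, best_off = -2.0, (0, 0)
    for r in range(0, search.shape[0] - th + 1, step):
        for c in range(0, search.shape[1] - tw + 1, step):
            s = ncc(template, search[r:r + th, c:c + tw])
            if s > best:
                best, best_off = s, (r, c)
    return best_off, best

# Synthetic test: an 8x8 patch cut from a random image at offset (12, 7)
# should be recovered at exactly that offset with NCC ~ 1.
rng = np.random.default_rng(1)
img = rng.normal(size=(40, 40))
tmpl = img[12:20, 7:15].copy()
off, score = match_patch(tmpl, img)
```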
### USGS: SOCET SET
Production of DTMs at the USGS uses the same software for all image types. The open-source ISIS3 system developed by the USGS ([PERSON] et al., 2017) is used for data preparation and commercial stereo software (SOCET SET from BAE Systems; Miller and Walker, 1993, 1995) is employed for control and DTM generation. BAE has since introduced SOCET GXP as the successor to SOCET SET, but it uses the same adjustment and matching software so our results will apply to it also. ISIS3 was used to format output products for delivery, and for much of the analysis described below. [PERSON] et al. (2008) describe the mapping procedure for HiRISE in detail; that for CTX is similar. [PERSON] et al. (2017, 2018) describe the procedures for HRSC in detail. They also describe a more efficient approach to generating ground control that was applied to the Jezero HiRISE and CTX data. The differences, and the subsequent processing done to refine and assess the geometric registration of the Jezero products ([PERSON] et al., 2019; 2020), are relevant to the reliability of our DTM-to-DTM comparisons, so we summarize them here.
At Gale, HiRISE (and CTX) DTMs were produced individually over a period of several years. They were controlled by using the older procedure ([PERSON] et al., 2008), in which ground control points were measured interactively. A small number of points in level areas were constrained in elevation, as interpolated from the MOLA data. An even smaller number of points on features common to the MOLA and image data were constrained in all three dimensions. Accuracy of the control was therefore limited by the sample and grid spacing of MOLA. [PERSON] et al. (2011) investigated the consistency of the overlapping HiRISE DTMs and found offsets on the order of 100 m horizontally and 10 m vertically. In addition, it was discovered late in the process that geometric distortions for CTX were not corrected properly, resulting in horizontal and vertical distortions of tens of meters. The CTX products were therefore adjusted horizontally to match the DLR HRSC data by \"rubber sheet\" transformation based on interactively measured ties, then the HiRISE products were warped to match CTX. This process left the horizontal accuracy of registration uncertain, but likely at the few-pixel level given the use of interactive measurements. Elevation differences were minimized by least-squares adjustment of vertical offsets to the DTMs, but mismatches of up to 10 m vertically remained as a result of uncorrected tilts ([PERSON] et al., 2011). These seams in the DTM mosaic led to the decision to restrict analysis to the single TI HiRISE DTM.
The USGS HRSC DTM was produced later ([PERSON] et al., 2017) and controlled separately as outlined below. [PERSON] et al. (2017) found this DTM to be slightly misaligned with the earlier products and shifted it to align it (by eye) with the HiRISE data. This process yielded single-pixel registration accuracy at best. The sparsity of small topographic features made it difficult to assess distortions that the warping process could have introduced into the reference DTM.
At Jezero, the new control procedures described by [PERSON] et al. (2017, 2018) were applied to HiRISE and CTX as well as HRSC images. In this process, a sparse set of tiepoints (but still substantially denser than the interactive measurements used in the past) is created and fitted to MOLA by using the point-cloud fitting application pc_align of the Ames Stereo Pipeline (ASP; [PERSON] et al., 2010). These points are then converted to ground control points and constrained to their fitted locations. To further improve the consistency of registration, the individual CTX DTMs were adjusted with pc_align to match the HRSC base ([PERSON] et al., 2019, 2020). Pseudo-ground-control points for the HiRISE images were then generated by fitting clouds of tiepoints to the mosaicked CTX DTM. All HiRISE images were then adjusted in a bundle based on this ground control plus image-to-image and pair-to-pair tiepoints. Finished HiRISE DTMs were further adjusted by fitting to the CTX base with pc_align. The orthoimages were transformed along with the DTMs, and the horizontal registration within and between datasets was assessed by matching the images with IMCORR (https://nsidc.org/data/velmap/imcorr.html). The medians of the spatially resolved horizontal offsets were 30 m for CTX to DLR Level 5 and 2 m for HiRISE to CTX ([PERSON] et al., 2019, 2020). The USGS HRSC DTM was controlled to the CTX base by using pc_align. It was fitted to the CTX data once complete but no orthoimage was produced. Its alignment with the HiRISE DTM mosaic was therefore checked as part of the DTM comparison process described below and found to be accurate to 18 m. Thus, all datasets are aligned horizontally with fractional-post precision.
Vertical discrepancies are also small but are unimportant since we focus on the dispersion in elevation differences rather than the mean.
DTMs were produced by using the Next Generation Automatic Terrain Extraction (NGATE) module ([PERSON], 2006; [PERSON] et al., 2006). Regardless of the ground sample distance (GSD) selected for the output, this software performs both area-based and feature-based matching "at every pixel" (actually on a grid with spacing equal to the mean image GSD). Each possible pair of images is matched separately but the results are not combined in a multi-ray intersection calculation as for the HRSC pipeline. Instead, the multiple matching results (area- and feature-based, different image combinations, at multiple closely spaced points) are combined by robust filtering to estimate the elevation of a post in the output DTM. In our experience, this algorithm does a good job of finding the ground surface, but tends to produce a DTM that appears "blocky" on close examination ([PERSON] et al., 2008, their Fig. 18); both feature-based matching and nonlinear filtering would be expected to yield rather sharp jumps between small regions of the DTM. We therefore refine the NGATE DTMs by performing one pass of the older, area-based matcher Adaptive Automatic Terrain Extraction (AATE; [PERSON] and [PERSON], 1997), which smooths the DTM slightly but is more likely to maintain consistency with the images than a simple lowpass filter. SOCET SET provides tools for interactive editing, but no editing was done on the DTMs used in this study.
## 4 Quality Assessment
### Comparison to HiRISE DTMs
At both Gale and Jezero, our primary approach to quality assessment was to compare DTMs post-by-post with smoothed HiRISE data. This process began with downsampling the HiRISE DTM mosaic to the appropriate GSD and reprojecting it to match the target DTM. We then smoothed the reprojected HiRISE DTM with boxcar lowpass filters of 3 x 3, 5 x 5, etc., posts, and measured the root mean square (RMS) difference between the target and each of these smoothed products. Interpolating the differences between the odd-integer filter sizes then yielded the filter size at which HiRISE best fit the target (a measure of resolution) and the minimum RMS difference (a measure of vertical precision).
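As a minimal sketch of this procedure (our own implementation, assuming NumPy and SciPy are available; the function name and the parabolic interpolation of the minimum are ours, not part of the processing systems described here):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def best_fit_smoothing(target, reference, widths=(1, 3, 5, 7, 9, 11)):
    """RMS difference between a target DTM and boxcar-smoothed copies of
    a co-registered reference at each odd filter width. Parabolic
    interpolation around the minimum yields the best-fit width (a
    resolution proxy) and the minimum RMS (a vertical-precision proxy)."""
    rms = np.array([np.sqrt(np.mean((target - uniform_filter(reference, w)) ** 2))
                    for w in widths])
    k = int(np.argmin(rms))
    if 0 < k < len(rms) - 1:
        y_l, y_0, y_r = rms[k - 1], rms[k], rms[k + 1]
        den = y_l - 2.0 * y_0 + y_r
        delta = 0.5 * (y_l - y_r) / den if den != 0 else 0.0
        w_best = widths[k] + 2 * delta        # widths step by 2
        rms_best = y_0 - 0.25 * (y_l - y_r) * delta
    else:
        w_best, rms_best = widths[k], rms[k]
    return w_best, rms_best
```

The sketch assumes the two grids are already downsampled, reprojected, and aligned, which — as the registration discussion above makes clear — is most of the actual work.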
Our results appear in Table 1. Filter width is expressed in DTM posts, meters, and image pixels; vertical precision is given in meters and also converted to RMS matching error \(\rho\) in pixels according to the equation \(EP = \rho\,\mathrm{GSD} / (p/h)\), where EP is the expected vertical precision (equated with the measured error) and \(p/h\) is the parallax-to-height ratio ([PERSON] et al., 2015). Normalizing the results by GSD and \(p/h\) in this way allows us to compare matcher performance for the different cameras, as shown in Figure 1. We find that the results are consistent when the HRSC stereo channel GSD (rather than the nadir GSD, which is a factor of 2 smaller) is used. This is reasonable given that the stereo channels contribute the most parallax. Results (both precision and resolution) for the same camera and processing are consistent at roughly the 15% level between sites. The USGS products have significantly larger errors and better (smaller) apparent resolution than the DLR DTMs. The CTX results resemble those for the DLR DTMs (better precision and poorer resolution) despite being generated with the same software as the HRSC USGS DTMs.
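The conversion in both directions is a one-liner; the sketch below uses illustrative numbers (GSD = 6 m, \(p/h\) = 0.5), not values taken from Table 1:

```python
def vertical_precision(rho_px, gsd_m, p_over_h):
    """EP = rho * GSD / (p/h): expected vertical precision in meters from
    matching error rho (pixels), image GSD (m), and the parallax-to-height
    ratio p/h of the stereo geometry."""
    return rho_px * gsd_m / p_over_h

def matching_error(ep_m, gsd_m, p_over_h):
    """Inverse relation: express a measured vertical RMS error in pixels."""
    return ep_m * p_over_h / gsd_m

# Illustrative only: a 0.25-pixel matching error at 6 m GSD and p/h = 0.5
# corresponds to 3 m expected vertical precision.
ep = vertical_precision(0.25, 6.0, 0.5)   # 3.0 m
```

The same inverse relation is what converts the flat-area elevation dispersions discussed later in this section into matching precisions in pixels.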
Given the concerns described above about internal distortions and pixel-level misregistrations of the Gale DTMs, we investigated the sensitivity of the results to registration errors. We repeated the analysis with the HRSC DTM offset by one and two pixels from its nominal position in each cardinal direction. The RMS error at optimal smoothing showed a smooth minimum with respect to these offsets. Interpolating, we found that the best-fit position was 6.1 m west and 17.5 m north of the nominal location, for an overall error of 18.5 m (0.37 post). Fits to the results also yielded estimates of sensitivity of the quality factors to small misalignments. A shift of one pixel increases both the optimal smoothing and the RMS error by \(\leq\)9%; two pixels of misregistration increases the values by \(\leq\)20%. Thus, registration errors of one or a few pixels between our Gale DTMs may have led us to overestimate smoothing and precision by no more than 10-20%, consistent with the \(\sim\)15% agreement between the Gale and Jezero results.
Given that the \"rules of thumb\" for these quantities (discussed below) are usually expressed with 50% uncertainty, the variation is not significant.
An important question is whether these results are predictive for the quality of other DTMs made with images from the same camera and analyzed with the same software, or whether they are somewhat specific to the terrains studied. The consistency between the Gale and Jezero results supports the former expectation, but the diversity of terrains within the Jezero study area provides an opportunity to test it. We therefore analyzed subareas smoother and rougher than the average. The smooth region (east of 77.45°E longitude) lies entirely on the Jezero crater floor east of the delta. The rough region (west of 77.35°E) consists primarily of the crater rim. Table 2 gives the RMS adirectional slope for these areas and the full HiRISE DTM, measured over 50 m baselines, along with the optimal smoothing filter width and matching error (both in pixels). Also shown is a measure of the sensitivity of the error to the filter width (the change in filter width needed to increase the error by 1% from the minimum). Results for the DLR Level 5 DTM are nearly independent of terrain. Matching errors increase with roughness for the DTMs produced in SOCET SET, but much less than proportionately; the logarithmic derivatives (Table 2) are less than 0.5. The smoothness increases with roughness (again, much less than proportionately) for HRSC but, oddly, not for CTX. The sensitivity of the smoothing estimate is also nearly constant for CTX but decreases with roughness for both HRSC products.
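The logarithmic derivatives in the last column of Table 2 follow directly from the tabulated subarea values; a minimal sketch (the function name is ours), shown here for the CTX matching error:

```python
import math

def log_derivative(p_rough, p_smooth, s_rough, s_smooth):
    """d log(parameter) / d log(slope) between the rough (west) and smooth
    (east) subareas: ln(p_rough/p_smooth) / ln(s_rough/s_smooth)."""
    return math.log(p_rough / p_smooth) / math.log(s_rough / s_smooth)

# CTX matching error vs. RMS slope, east and west subareas (Table 2)
d = log_derivative(0.345, 0.201, 11.29, 3.38)
# d ~ 0.45 (Table 2 lists 0.450): well under 1, so the error grows much
# less than proportionately with terrain roughness.
```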
### Slopes as a Function of Baseline
A fundamentally different approach to quantifying DTM resolution is to plot the RMS slope against the horizontal baseline over which it is evaluated. Slopes at every baseline value can be calculated efficiently by using Fast Fourier Transform methods, and such baseline-slope curves have been an important tool in landing site selection for numerous missions (e.g., [PERSON] et al., 2008; [PERSON] et al., 2012). As shown in Figure 2a, the slope curves can contain information about the data collection process as well as the surface topography. This approach addresses a wide range of spatial scales and does not require a reference DTM; resolution can often be inferred from a break in the derivative of the baseline-slope curve. A reference is nonetheless useful as an indication of the true surface slope behavior, so that only deviations from this are interpreted as data effects. The main drawback of looking at baseline-slope curves is that the results can be ambiguous. For example, an upturn of the curve at short baselines due to localized artefacts (noise) in the DTM can potentially mask the leveling out that is due to limited resolution. Figure 2b (modified from [PERSON] et al., 2018) shows the curves for slopes along north-south baselines on Aeolis Mons, averaged over a 3.75 x 12.8 km area. The DLR DTM is increasingly deficient (relative to HiRISE) in slopes at baselines shorter than about 1500 m. The "knee" in the curve can be estimated, somewhat subjectively, by drawing tangents to the curve and locating their intersection, at about 560 m. The curve from the USGS DTM agrees closely with that from HiRISE (though the gap between the curves widens at \(\sim\)700 m baseline), making it difficult to draw conclusions about the resolution of the USGS HRSC DTM.
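The baseline-slope curve itself can also be computed without FFTs, by direct elevation differencing on the DTM grid. The sketch below is our own (assuming NumPy); it is slower than the FFT approach used in the studies cited above but equivalent for the RMS averages plotted here:

```python
import numpy as np

def rms_slope_curve(z, gsd, max_lag=None):
    """RMS slope (degrees) along one grid axis as a function of baseline,
    by direct differencing of a regular DTM grid z with post spacing gsd."""
    max_lag = max_lag or z.shape[1] // 2
    baselines, slopes = [], []
    for lag in range(1, max_lag + 1):
        # Elevation difference over a baseline of lag posts
        dz = z[:, lag:] - z[:, :-lag]
        rms = np.sqrt(np.mean((dz / (lag * gsd)) ** 2))
        baselines.append(lag * gsd)
        slopes.append(np.degrees(np.arctan(rms)))
    return np.array(baselines), np.array(slopes)
```

For real topography the curve falls with baseline; a flattening (or upturn) at short baselines relative to a trusted reference is the signature of resolution loss (or noise) discussed in the text.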
Figure 2c shows north-south slopes averaged over a 3.35 x 12.8 km area on the rim of Jezero crater. Results for CTX are not shown because the area analyzed is slightly different as a consequence of the different GSDs and the requirement that the Fourier transform length be a power of 2 posts. As at Gale, the curve for the DLR DTM starts to depart from the HiRISE curve around 1500 m baseline and is flat for short baselines, with a \"knee\" around 490 m. The ratio of this transitional baseline to the optimal filter width is thus similar for both study areas. What is remarkable is how closely the slope curve for the optimally smoothed HiRISE DTM follows that for the HRSC data.
Slopes for the USGS HRSC DTM at Jezero significantly exceed those for HiRISE, indicating that the USGS DTM is noisy at baselines \(\leq\)1000 m. Though the HRSC DTM agrees best with HiRISE in elevation when the latter model is smoothed (Table 1), such smoothing will clearly _increase_ the discrepancy in slopes. We are thus led to consider smoothing the USGS HRSC DTM, trading reduction of noise for (further) reduced resolution. Using a 9 x 9 lowpass filter yields a DTM very similar in appearance and quality to the HRSC product. It matches smoothed HiRISE data with an optimal filter width of 19.5 pixels (525 m), yielding an RMS error of 9.54 m (corresponding to a matching precision of 0.243 pixels). The slope spectrum for this smoothed version of the USGS DTM agrees closely with the DLR (and appropriately smoothed HiRISE) data. On the other hand, applying a 5 x 5 filter to the HRSC DTM yields baseline-slope behavior similar to that of the full-resolution HiRISE data. Unfortunately, this agreement is partly fortuitous, since the slopes in the HiRISE model come entirely from real topography, whereas those in the HRSC DTM combine the effects of (smoothed) topography and errors.

\begin{table}
\begin{tabular}{|l|l|c|c|c|c|} \hline & & \multicolumn{3}{c|}{**Region**} & **d log param** \\
**Dataset** & **Parameter** & **East** & **All** & **West** & **d log slope** \\ \hline HiRISE & RMS slope & 3.38 & 7.09 & 11.29 & \\ \hline \multirow{3}{*}{HRSC DLR} & Filter width & 20.58 & 20.92 & 20.84 & 0.010 \\ & Sensitivity & 7.94 & 5.00 & 3.21 & -0.750 \\ & Match error & 0.229 & 0.242 & 0.255 & 0.090 \\ \hline \multirow{3}{*}{HRSC USGS} & Filter width & 16.20 & 11.34 & 13.30 & -0.299 \\ & Sensitivity & 8.92 & 5.00 & 4.03 & -0.661 \\ & Match error & 0.289 & 0.326 & 0.395 & 0.261 \\ \hline \multirow{3}{*}{CTX} & Filter width & 18.34 & 18.40 & 18.46 & 0.006 \\ & Sensitivity & 6.81 & 7.11 & 7.90 & 0.123 \\ & Match error & 0.201 & 0.266 & 0.345 & 0.450 \\ \hline \end{tabular}
\end{table}
Table 2: DTM precision and resolution as a function of surface slope. Best-fit filter width and matching precision in pixels as in Table 1. “Sensitivity” is a measure of the relative width of the minimum in error vs. filter size.

Figure 1: Estimated matching error of target DTMs as a function of smoothing of reference DTM. Values for odd-integer filter widths are connected by a smooth curve; cross indicates the interpolated minimum error at best-fit width.
### Qualitative and Semiquantitative Assessments
Visual examination of the original and smoothed DTMs sheds additional light on the quantitative comparisons described in the previous section. Presenting the DTMs in the form of synthetic shaded relief images (Figure 3) emphasizes the local texture and details in which we are primarily interested. Examining the DTMs directly (as grayscale or color-coded images) leads to essentially the same conclusions. The inescapable first impression is that the target DTMs are substantially less detailed than the HiRISE data downsampled to the same GSD. The CTX and DLR Level 5 HRSC DTMs each resemble their respective optimally smoothed HiRISE product quite closely. In particular, both the target and smoothed HiRISE DTMs appear homogeneous, with similar scales of detail in both the rugged western and smooth eastern areas. Close comparison shows that surfaces in the target DTMs appear slightly rougher than in the smoothed HiRISE data. This difference is particularly noticeable for CTX.
The appearance of the HRSC USGS DTM is more complex. It contains features of similar size to those in the smoothed HiRISE reference, but also numerous smaller bumps and hollows. Regardless of their dimensions, many of these variations appear to be spurious (topographic \"noise\") but some correspond to real surface features. These include some small features which are seen in the unsmoothed but not the smoothed HiRISE data. Some larger real features appear \"broken up\" into clusters of small humps in the HRSC DTM. A few relatively steep slopes (e.g., in the channel walls) appear to be resolved relatively accurately at widths as small as 150 m. On the other hand, the prominent 900-m diameter impact crater in the center of the delta, which is subdued but visible in the DLR DTM, is almost entirely absent. All of these effects were also observed in the Gale crater T1 area ([PERSON] et al., 2017, 2018). The greater effective smoothness and reduced amplitude of errors in smoother terrain that we inferred from quantitative comparisons is not readily apparent to the eye.
The errors in the target DTMs take the form almost entirely of compact fluctuations (bumps and hollows) either at about the size of the best-fit smoothing filter or, for the HRSC USGS DTM, at this size and smaller. Spikes or pits of amplitude exceeding local relief (indicating matching blunders) were not observed. Neither were gaps in the data caused by a lack of matched points, but surprisingly large features (e.g., the 900-m crater) were almost entirely missed. The prominent pattern of correlated errors over small rectangular areas seen with the SOCET SET AATE matcher ([PERSON] et al., 2003b) was not observed in the DTMs produced with NGATE and smoothed with one AATE pass.
[PERSON] et al. (2007) measured a quantity related to DTM resolution by counting impact craters visible in the shaded relief and in the orthoimage. The crater diameter at which half the craters identified in the image could be seen in the shaded DTM varied from 2 to 5 km for different processing approaches. The stereo channel GSD was 28 m, similar to our image set. We were unable to make useful crater counts because the small area of the Jezero study area set by the HiRISE coverage contains very few craters visible in the HRSC DTMs. This may be a consequence not only of the horizontal resolution but also of the vertical precision of the DTMs; the images show that the majority of craters present are degraded, with floors filled to nearly the surrounding level. Nevertheless, the few craters visible, along with some small knobs, provide bounds on the equivalent measure of resolution. The CTX DTM contains more craters, but rather than counting them we made a subjective assessment of the smallest craters and knobs that are reliably present.
The HRSC USGS DTM contains only one clear impact crater, with diameter 1800 m (just outside Jezero, hence not seen in Figure 3). Its rim is well resolved. As noted, the 900-m crater is not visible. Knobs 500 m across seen in the orthoimage are generally identifiable in the shaded relief (though many similar appearing bumps do not correspond to real features). Knobs 300 m across are occasionally present, while those 200 m and smaller are generally not. In the HRSC DLR DTM, the 1800-m crater is visible, as is a 1600-m crater beyond the edge of the USGS coverage. The 900-m crater is visible but indistinct. Knobs as small as 600 m across are visible; a few real knobs smaller than this can also be identified but appear to be \(\sim\)600 m wide. In the CTX data, a great many small craters are seen in the shaded relief, but even more in the images. Some craters 150 m in diameter are distinguishable from noise in the elevation data, but many craters this size are not seen. At 300 m diameter most craters are visible. A few knobs as small as 50-70 m across are visible but appear broadened to \(\sim\)100 m. Thus, small features are blurred to about the scale of the optimal smoothing filter estimated above, and craters are sometimes visible at diameters 1.5x this smoothing width, but reliably visible at about 3x this size. Resolution as measured by crater visibility thus seems to lie at the low end of the range found by [PERSON] et al. (2007). This is unsurprising given that significant improvements have been made to both the DLR and USGS processing techniques in the interim ([PERSON] et al., 2016; [PERSON] et al., 2016).

Figure 2: Slope as a function of baseline. (a) Schematic effects. (b) Gale data from [PERSON] et al. (2018). (c) Jezero data.

Finally, we examined elevation statistics for flat areas of the Jezero crater floor to make crude estimates of the vertical precision of the DTMs, independent of a reference model. The images and HiRISE DTM show areas that are topographically smooth yet have abundant image texture as a result of albedo variations. The largest such rectangular area in the HRSC DTM was 7.5 x 3 km. The standard deviation of elevations in this box was 11.6 m for the DLR dataset and 12.2 m for USGS. In the CTX DTM, real topographic variations are apparent and the largest featureless box we could identify was 1 x 1 km. The elevation standard deviation was 2.64 m. If these elevation dispersions are attributed to matching error, the corresponding matching precisions are 0.296, 0.309, and 0.199 pixel, in good agreement with the results for the smooth area (Table 2).

Figure 3: Shaded relief portrayal of Jezero DTMs. (a) HiRISE downsampled to 20 m/post. (b) CTX at 20 m/post. (c) HiRISE at 20 m/post, smoothed 5 x 5 to match CTX (see Table 1). (d) HRSC DLR Level 5 at 50 m/post. (e) HiRISE at 50 m/post, smoothed 11 x 11. (f) HRSC USGS at 50 m/post. (g) HiRISE at 50 m/post, smoothed 7 x 7. All images 9 x 4 km, centered at 77.4°E, 18.5°N, Simple Cylindrical projection, north at top, illuminated from left with identical contrast stretch. HRSC products have been enlarged to aid comparison. Squares in panels (c), (e), (g) indicate the size of the smoothing filter applied to HiRISE to match the target DTM resolution.

## 5 Discussion

The results presented here for vertical and image-matching precision are not unexpected. Matching precision on the order of
0.2 pixel has long been a rule of thumb for predicting vertical precision (e.g., [PERSON] et al., 1996). We have studied matching precision in SOCET SET with a variety of approaches applied to images from very different cameras ([PERSON] et al., 1999; 2003b; 2008; 2017) as well as simulated images ([PERSON] et al., 2016). Finding values in the range 0.2-0.3 pixel (as here) we have recommended this range as a rule of thumb for predicting vertical precision for particular images, and for designing stereo cameras and observations to meet some required precision.
Our results for horizontal resolution are somewhat surprising, though consistent with [PERSON] et al. (2016; 2017; 2018). Because we have used such disparate approaches to try to quantify the horizontal resolution of the target DTMs, it is appropriate to discuss what is meant by the term (we exclude at the outset the erroneous usage of resolution as a synonym for pixel size or GSD) and how the different measures are expected to compare. In contemporary usage (e.g., [PERSON] et al., 1980), resolution is usually defined as the size of a minimally discernable gap between features, or a _line_. (Earlier usage referred instead to the separation between centers of features, equal to a line _pair_.)
Smoothing with a lowpass boxcar filter will attenuate signals with a wavelength equal to the filter width while passing longer-wavelength signals. Although the peak-to-peak wavelength is equivalent to the separation rather than the gap between features (i.e., a line pair), it must be bigger than the filter width to be visible. Thus, the width of the smallest resolvable line or other feature is likely to be close to the filter width, and this is what we observe in comparing the broadened appearance of small knobs to the amount of smoothing inferred from comparing target and reference DTMs. Area-based image matching is a far more complex process than filtering, but to the extent that a finite matching patch is used, the patch width places a limit on resolution that can be equated to a resolvable line. Matchers are also known to display more complex algorithm- and scene-dependent behavior that can introduce artefacts at other spatial frequencies, but these effects are beyond the scope of our paper.
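The attenuation claim is easy to verify numerically: a boxcar passes long wavelengths almost unchanged but nearly annihilates a sinusoid whose wavelength equals the filter width. The sketch below is our own illustration (assuming SciPy), not part of any processing pipeline discussed here:

```python
import numpy as np
from scipy.ndimage import uniform_filter1d

def boxcar_gain(wavelength, width, n=1200):
    """Amplitude gain of a boxcar filter of the given width (samples)
    applied to a sinusoid of the given wavelength (samples)."""
    x = np.arange(n)
    sig = np.sin(2 * np.pi * x / wavelength)
    out = uniform_filter1d(sig, width, mode="wrap")
    return out.std() / sig.std()

# Gain is ~0 when the wavelength equals the 15-sample filter width,
# but near 1 when the wavelength is 8x longer.
g_equal = boxcar_gain(15, 15)
g_long = boxcar_gain(120, 15)
```

The averaging window spans exactly one period in the first case, so positive and negative half-cycles cancel; this is the sense in which the filter width sets the smallest resolvable feature.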
Slopes are computed for a discrete baseline, and should be just detectable when that baseline extends from (say) the top of a local high to the floor of a barely resolved adjacent gap; this is equivalent to the resolvable line width. "Resolution" defined by the diameter of an identifiable crater requires more thought. At a minimum, the diameter might be equivalent to the width of a line pair, for a simple craterform defined by a high-low-high pattern of elevations. _Recognition_ of a landmark such as a crater is usually taken to require at least 3 to 5 resolution elements. Thus, we would expect detected craters to be at least as large as the optimal smoothing, and possibly two or three times larger. Considered in this light, the various resolution estimates for each Gale and Jezero dataset are mutually consistent. Though our work shows the value of a reference dataset for evaluating DTM quality, this consistency indicates that useful quality estimates can, in some cases, be obtained without such a reference.
Many DTMs are generated with a GSD between 3 and 5 image pixels. (For HRSC team products, GSD is set at about twice the mean image GSD which is often, as here, 4x the nadir GSD.) We have offered this as a rule of thumb for DTM design in past work (e.g., [PERSON] et al., 2003; 2008; 2016) and justified it on the grounds that smaller GSD would not increase resolution. The rationale is that, for some matching tools, three pixels is the smallest (odd) patch size that could be used in area-based matching, so posts spaced more closely than this would not provide independent elevation information. Our results indicate that this lower limit is not violated, but also is not approached closely, at least when the average behavior over large regions is considered. The discrepancy is understandable given that matching software may choose patch sizes larger than the minimum possible. Nevertheless, choosing a GSD of 3-5 pixels is a reasonable rule for DTM design, because it is desirable to oversample resolved features, and because the density of good matches (therefore resolution) can be locally greater in areas of optimal image texture. What we do not recommend is to take either 3-5 pixels or the GSD of a DTM as its resolution; the true resolution is in the range of 10-20 pixels in the examples presented here. A more conservative rule of thumb based on this range is thus more appropriate for designing stereo cameras and observations.
Perhaps our most surprising result is the spatially variable quality of the USGS DTMs. Not only do the precision and best-fit smoothing vary with roughness, but individual areas also contain a mixture of features (and noise) ranging from the size of the best-fit filter to much smaller. These variations make sense in light of the description ([PERSON], 2006; [PERSON] et al., 2006) that the NGATE matcher produces a dense set of candidate matches and then filters them in a robust (nonlinear) way to populate the output DTM. The details are proprietary, but it is plausible that the algorithm applies more smoothing (yielding smaller precisions) when it detects smoother terrain. It seems also to apply a mix of stronger and weaker smoothing locally, resulting in the mix of relief at multiple scales. Why the precision was terrain sensitive for CTX data but the smoothing was not is unclear. The obvious differences between CTX and HRSC images are higher signal-to-noise ratio and smaller GSD. Perhaps higher resolution allows CTX to detect and match localized, high contrast albedo variations, though this might be expected to yield consistently good rather than coarse resolution. The HRSC DLR DTM has more homogeneous noise and resolution, indicating a different and less terrain-sensitive processing approach. The slightly greater smoothness overall also indicates a different tradeoff between precision and resolution. A very similar trade can be made by filtering the USGS DTM after matching, but this does not entirely hide the roughness-dependent behavior.
We conclude with words of warning to the community of planetary DTM users. The true resolving power of stereo DTMs may not be as great as the GSD would suggest. Furthermore, it can vary within a DTM, and small "features" may represent a mixture of actual surface detail and artefacts, so that they should be interpreted (if at all) with caution and always in light of the images. Therefore, it is not good practice to simply optimize horizontal resolution without consideration of the effect on vertical error. The achievable trade between horizontal resolution and vertical precision is likely to depend somewhat on image and scene details such as SNR, illumination, and terrain roughness. The optimal trade in any case may be different depending on the application, with slope accuracy prized for landing site selection but detection of fine details perhaps more valuable for some geologic studies or minimization of artefacts for others.
## 6 Future Work
Several directions for future investigation are evident in light of this work. First, the high-quality reference data provided by HiRISE can be used to test and optimize matcher performance. For example, can parameters of the NGATE matcher such as internal smoothing be selected to yield reasonably reliable short-baseline slope estimates for a variety of terrains and illumination conditions? Second, assessing the properties of DTMs produced with other software, such as the Ames Stereo Pipeline ([PERSON] et al., 2010), is of interest. Third, the intrinsic precision and density of points obtained by image matching should be studied separately from the quality of DTMs interpolated from those points. With access to the algorithms and software (necessarily for noncommercial software), both steps might be improved based on such evaluations. Fourth, our quantitative analysis in this paper addresses average properties of a DTM, so derivation of local quality parameters supporting the interpretation of specific features is desirable. Finally, photoclinometry ([PERSON] et al., 2003a) has been suggested for improving stereo-derived DTMs (e.g., [PERSON] et al., 2006). Comparison with a reference DTM would quantify its effects on resolution and precision.
## Acknowledgements
We gratefully acknowledge the support of the National Aeronautics and Space Administration (NASA) Mars Express Project, the Planetary Geology and Geophysics Cartography program (2005-2015) and the NASA-USGS Interagency Agreement for planetary mapping (2016 on) for the work described here.
## References
* [PERSON] et al. (2015) [PERSON], et al., 2015. Criteria for automatic identification of stereo image pairs. _Lunar Planet. Sci._, 46, 2703.
* [PERSON] et al. (1996) [PERSON], et al., 1996. Clementine imagery: Selenographic coverage for cartographic and scientific use. _Planet. Space Sci._, 44, 1136-1148.
* [PERSON] et al. (2017) [PERSON], et al., 2017. A Stereophotoclinometry model of a physical wall representing asteroid Bennu. _Lunar Planet. Sci._, 48, 1964.
* [PERSON] et al. (2019) [PERSON], et al., 2019. Mars 2020 terrain relative navigation support: Digital terrain model generation and mosaicing process improvement. _4 th Planetary Data Workshop_ (LPI Contrib. 215), 7047.
* [PERSON] et al. (2020) [PERSON], et al., 2020. Mars 2020 terrain relative navigation flight product generation: Digital terrain model and orthorectified image mosaics. _Lunar Planet. Sci._, 51, 2326.
* [PERSON] et al. (2012) [PERSON], et al., 2012. Selection of the Mars Science Laboratory landing site. _Space Science Reviews_, 170, 641-737, doi: 10.1007/s11214-012-9916-y.
* [PERSON] et al. (2009) [PERSON], et al., 2009. Derivation and validation of high-resolution digital terrain models from Mars Express HRSC-data. _Photogram. Eng. Rem. Sens._, 75(9), 1127-1142.
* [PERSON] et al. (2010) [PERSON], et al., 2010a. Regional HRSC multi-orbit digital terrain models for the Mars Science Laboratory candidate landing sites. _Lunar Planet. Sci._, 41, 2727.
* [PERSON] et al. (2010) [PERSON], et al., 2010b. Topography of Mars from global mapping by HRSC high-resolution digital terrain models and orthoimages: Characteristics and performance. _Earth Planet. Sci. Lett._, 294, 506-519.
* [PERSON] et al. (2016) [PERSON], et al., 2016. The High Resolution Stereo Camera (HRSC) of Mars Express and its approach to science analysis and mapping for Mars and its satellites. _Planet. Space Sci._, 126, 93-138.
* [PERSON] et al. (2007) [PERSON], et al., 2007. Evaluating planetary digital terrain models: The HRSC DTM Test. _Planet. Space Sci._, 55, 2173-2191, doi: 10.1016/j.pss.2007.07.006.
* [PERSON] et al. (1999) [PERSON], et al., 1999. Digital photogrammetric analysis of the IMP camera images: Mapping the Mars Pathfinder landing site in three dimensions. _J. Geophys. Res._, 104 (E4), 8868-8888.
* [PERSON] et al. (2003a) [PERSON], et al., 2003a. Photoclinometry made simple?, _ISPRS Working Group II/9 Workshop "Advances in Planetary Mapping 2003"_, Houston, March 2003, online at https://astropedia.astrogeology.usgs.gov/download/Research/ISPRS/[PERSON]_ispers_mar03.pdf.
* [PERSON] et al. (2003b) [PERSON], et al., 2003b. High-resolution topomapping of candidate MER landing sites with Mars Orbiter Camera Narrow-Angle images. _J. Geophys. Res._, 108(E12), 8088, doi:10.1029/2003JE002131.
* [PERSON] et al. (2006) [PERSON], et al., 2006. Topomapping of Mars with HRSC images, ISIS, and a commercial stereo workstation. _Int. Arch. Photogram. Remote Sens. Spatial Inf. Sci._, XXXVI-4, "Geospatial Databases for Sustainable Development", Goa.
* [PERSON] et al. (2008) [PERSON], et al., 2008. Ultrahigh resolution topographic mapping of Mars with MRO HiRISE stereo images: Meter-scale slopes of candidate Phoenix landing sites. _J. Geophys. Res._, 113, E00A24, doi: 10.1029/2007JE003000.
* [PERSON] et al. (2011) [PERSON], et al., 2011. Near-complete 1-m topographic models of the MSL candidate landing sites: Site safety and quality evaluation, _European Planetary Science Conference_, 6, EPSC2011-1465.
* [PERSON] et al. (2016) [PERSON], et al., 2016. The effect of incidence angle on stereo DTM quality: Simulations in support of Europa exploration. _ISPRS Ann. Photogram. Remote Sens. Spatial Inf. Sci._, III-4, 103-110, doi:10.5194/isprs-annals-III-4-103-2016.
* [PERSON] et al. (2017) [PERSON], et al., 2017. Community tools for cartographic and photogrammetric processing of Mars Express HRSC images. _Int. Arch. Photogram. Remote Sens. Spatial Inf. Sci._, XLI-3-W1, 69-76, doi:10.5194/isprs-archives-XLI-3-W1-69-2017.
* [PERSON] et al. (2018) [PERSON], et al., 2018. Community tools for cartographic and photogrammetric processing of Mars Express HRSC images. _In Planetary Remote Sensing and Mapping_ ([PERSON], [PERSON], [PERSON], [PERSON], eds., Taylor & Francis, 107-124, doi:10.1201/97804295095997.
* [PERSON] et al. (2007) [PERSON], et al., 2007. Context camera investigation on board the Mars Reconnaissance Orbiter. _J. Geophys. Res._, 112, E05S04, doi:10.1029/2006JE002808.
* [PERSON] et al. (2007) [PERSON], et al., 2007. Mars Reconnaissance Orbiter's High Resolution Imaging Science Experiment (HiRISE). _J. Geophys. Res._, 112, E05S02, doi:10.1029/2005JE002605.
* [PERSON] and [PERSON] (1993) [PERSON], [PERSON], A.S., 1993. Further developments of Leica digital photogrammetric systems by Helava. _ACSM/ASPRS Annual Convention and Exposition Technical Papers_, 3, 256-263.
* [PERSON] and [PERSON] (1995) [PERSON], [PERSON], A.S., 1995. Die Entwicklung der digitalen photogrammetrischen Systeme von Leica und Helava. _Z. Photogramm. Fernerkundung_, 63(1), 4-16.
* [PERSON] et al. (2010) [PERSON], et al., 2010. Ames Stereo Pipeline, NASA's open source automated stereogrammetry software. _Lunar Planet. Sci._, 41, 2364.
* [PERSON] et al. (2004) [PERSON], et al., 2004. _HRSC: The High Resolution Stereo Camera of Mars Express_. ESA Special Publications SP-1240.
* [PERSON] et al. (2017) [PERSON], et al., 2017. The USGS Integrated Software for Imagers and Spectrometers (ISIS 3): instrument support, new capabilities, and releases. _Lunar Planet. Sci._, 48, 2739.
* [PERSON] and others (2001) [PERSON], and 23 others, 2001. Mars Orbiter Laser Altimeter: Experiment summary after the first year of global mapping of Mars. _J. Geophys. Res._, 107, 23,689-23,722.
* [PERSON] et al. (1980) [PERSON], [PERSON], [PERSON], [PERSON] [PERSON], [PERSON], [PERSON], [PERSON], and [PERSON], 1980. Definitions of terms and symbols used in photogrammetry. In _Manual of Photogrammetry_, 4 th Edition ([PERSON], [PERSON], and [PERSON], Eds), American Society of Photogrammetry, Falls Church, Virginia, 1056 pp.
* [PERSON] and [PERSON] (1997) [PERSON], and [PERSON], 1997. Adaptive Automatic Terrain Extraction. In _Proc. SPIE. 3072, Integrating Photogrammetric Techniques with Scene Analysis and Machine Vision III_, ([PERSON], [PERSON], [PERSON], eds.), 27-36.
* [PERSON] (2006) [PERSON], 2006. Towards a higher level of automation in softcopy photogrammetry: NGATE and LIDAR processing in SOCET SET. Paper presented at _GeoCue Corporation 2nd Annual Technical Exchange Conference_, Nashville, Tenn., 26-27 September 2006.
* [PERSON] et al. (2006) [PERSON], [PERSON], [PERSON], and [PERSON], 2006. Automatic terrain extraction using multiple image pair and back matching. Paper presented at _ASPRS 2006 Annual Conference_, Reno, Nevada, 1-5 May 2006.
isprs | EVALUATING STEREO DTM QUALITY AT JEZERO CRATER, MARS WITH HRSC, CTX, AND HIRISE IMAGES | R. L. Kirk, R. L. Fergason, B. Redding, D. Galuszka, E. Smith, D. Mayer, T. M. Hare, K. Gwinner | https://doi.org/10.5194/isprs-archives-xliii-b3-2020-1129-2020 | 2020 | CC-BY

isprs/228739e2_a827_4d9d_a2b2_ba0251ad84f2.md
# Spectral Imaging from UAVs under Varying Illumination Conditions
[PERSON]\({}^{1}\)
[PERSON]\({}^{1}\)
[PERSON]\({}^{2}\)
[PERSON]\({}^{2}\)
[PERSON]\({}^{3}\)
[PERSON]\({}^{3}\)
[PERSON]\({}^{4}\)
\({}^{1}\) Finnish Geodetic Institute, Department of Remote Sensing and Photogrammetry, Geodeetinrinne 2, P.O. Box 15, FI-02431 Masala, Finland - (teemu.hakala, eija.honkavaara)@fgi.fi
\({}^{2}\)VTT Technical Research Center of Finland, Photonic Devices and Measurement Solutions, P.O.Box 1000, FI-02044 VTT, Finland - (heikki.saari, jussi.makynen)@vtt.fi
\({}^{3}\)MTT Agrifood Research Finland, FI-31600 Jokioinen, Finland - (jere.kaivosoja, liisa.pesonen)@mtt.fi
\({}^{4}\)Department of Mathematical Information Tech., University of Jyvaskyla, P.O.Box 35, FI-40014, Jyvaskyla, Finland - (ilkka.polonen)@jyu.fi
###### Abstract
Rapidly developing unmanned aerial vehicles (UAVs) have provided the remote sensing community with a new, rapidly deployable tool for small-area monitoring. The progress of small-payload UAVs has increased the demand for lightweight aerial payloads. For applications requiring aerial images, a simple consumer camera provides acceptable data. For applications requiring more detailed spectral information about the surface, a new Fabry-Perot interferometer based spectral imaging technology has been developed. This technology produces tens of successive images of the scene at different wavelength bands in a very short time. These images can be assembled into spectral data cubes with stereoscopic overlaps. In the field, weather conditions vary, and the UAV operator often has to decide between flying in suboptimal conditions and not flying at all. Our objective was to investigate methods for quantitative radiometric processing of images taken under varying illumination conditions, thus expanding the range of weather conditions during which successful imaging flights can be made. A new method based on in situ measurement of irradiance, either on the UAV platform or on the ground, was developed. We tested the methods in a precision agriculture application using realistic data collected in difficult illumination conditions. The internal homogeneity of the original image data (average coefficient of variation in overlapping images) was 0.14-0.18. In the corrected data, the homogeneity was 0.10-0.12 with a correction based on broadband irradiance measured on the UAV, 0.07-0.09 with a correction based on spectral irradiance measured on the ground, and 0.05-0.08 with a radiometric block adjustment based on image data. Our results were very promising, indicating that quantitative UAV-based remote sensing could be operational in diverse conditions, which is a prerequisite for many environmental remote sensing applications.
Photogrammetry, Geometry, Radiometry, Hyperspectral, Environment, Classification, High-resolution
## 1 Introduction
The Fabry-Perot interferometer (FPI) based light-weight spectrometric camera developed by the VTT Technical Research Centre of Finland ([PERSON] et al., 2011; [PERSON] et al., 2011) is one of the most exciting new technologies in UAV sensors. It takes successive images at different wavelengths. These rectangular-format images can be combined into a spectral data cube. During UAV operation these data cubes overlap each other, enabling the use of photogrammetric methods for producing spectral 3D information.
One of the most important advantages of UAVs is the flexibility of operation. A UAV operator can choose the moment of flight more freely than is possible with manned aircraft. Weather conditions are often the limiting factor for UAVs: many applications require clear-sky conditions or at least even illumination. Unfortunately, such days are scarce in many climate regions. We have developed processing methods for utilizing partially cloudy days for successful measurements. Improved post-processing allows UAV operators to fly during suboptimal days and still provide acceptable radiometric quality, increasing the utilization rate of the equipment. All-weather data collection is also a prerequisite in many time-critical applications.
Previously we have used the FPI camera in precision farming ([PERSON] et al., 2011; [PERSON] et al., 2012, 2013a) and in water quality mapping ([PERSON] et al., 2013b). Both of these applications are time critical requiring remote sensing data at exact dates. The utilization of the full potential of the FPI camera requires advanced methods for radiometric correction.
During summer 2012 we carried out imaging campaigns with the FPI spectral camera from manned and unmanned platforms. The weather in Finland was poor for remote sensing throughout the season, and as a result we collected a lot of variable-quality data. To overcome this, we have investigated different post-processing methods to improve the data uniformity. Our first approach was a radiometric block adjustment based on image information only ([PERSON] et al., 2012, 2013a). In this investigation, our objective was to study a new method that utilizes in situ irradiance measurements. We describe the theoretical background of our radiometric correction approach in Section 2. The empirical investigation is described in Section 3 and results are presented in Section 4.
## 2 Methods for radiometric correction of UAV image data
The conventional quantitative methods for physically based atmospheric correction of airborne images have been developed for multispectral and hyperspectral imagery operating with pushbroom imaging geometry ([PERSON] and [PERSON], 2002; [PERSON] et al., 2008). These methods have recently been extended to airborne sensors with a rectangular image format ([PERSON] et al., 2011), and also to UAVs ([PERSON] et al., 2012, 2013a). The other extreme is the measurement of reflectance in the field using goniospectrometers ([PERSON] et al., 2000; [PERSON] et al., 2008; [PERSON] et al., 2009). These methods provide very accurate information on the reflection properties of objects. The measurement setup in local-area UAV applications lies between these two cases. The major difference in comparison to remote sensing with manned aircraft is the flying height, which is typically 50-150 m for UAVs and 400 m - 10 000 m for manned platforms. Because of this, the atmosphere between the ground and the measurement device causes significantly less disturbance with UAVs. In comparison to goniometric measurement, in a typical UAV application the area of interest is larger and contains differing levels of incident and diffuse illumination, and the quality requirements are often not as high. Furthermore, UAV imaging is often carried out in diverse conditions. New methods are needed for radiometric processing of UAV imagery for quantitative applications.
For very low altitude imagery, the central radiation components entering the sensor are the surface-reflected sunlight \(L^{sun}(\lambda)\) and the surface-reflected diffuse radiance \(L^{sky}(\lambda)\) (adopted from [PERSON], 2007):

\[L_{at\text{-}sensor}(\lambda)=L^{sun}(\lambda)+L^{sky}(\lambda)=\rho(\lambda,\theta_{i},\varphi_{i},\theta_{r},\varphi_{r})\,\tau_{v}(\lambda)\,\tau_{s}(\lambda)\,E^{0}(\lambda)\cos(\theta_{i}(x,y))/\pi+F(x,y)\,\rho(\lambda,2\pi,\theta_{r},\varphi_{r})\,\tau_{v}(\lambda)\,E^{dif}(\lambda)/\pi \tag{1}\]

where \(\rho(\lambda,\theta_{i},\varphi_{i},\theta_{r},\varphi_{r})\) is the bidirectional spectral reflectance distribution function (BRDF), \(\rho(\lambda,2\pi,\theta_{r},\varphi_{r})\) is the reflectance distribution function for diffuse light, \(\tau_{v}(\lambda)\) and \(\tau_{s}(\lambda)\) are the atmospheric transmittances in the view and solar paths, respectively, \(E^{0}(\lambda)\) is the spectral irradiance at the top of the atmosphere, \(E^{dif}(\lambda)\) is the spectral irradiance at the surface due to diffuse illumination, and \(\theta_{i}\) is the solar incidence angle on the surface. \(\theta_{i}\) and \(\theta_{r}\) are the illumination and reflected-light zenith angles, and \(\varphi_{i}\) and \(\varphi_{r}\) are the corresponding azimuth angles. \(F(x,y)\) is the fraction of the hemisphere visible to the sensor. To solve for the reflectance factor, information about the diffuse light and other atmospheric influences is needed ([PERSON] et al., 2000; [PERSON] and [PERSON], 2002; [PERSON], 2006; [PERSON] et al., 2008; [PERSON] et al., 2009).
We have developed a radiometric block adjustment method for UAV image blocks in order to produce homogeneous data from non-homogeneous input data ([PERSON] et al., 2012, 2013a). The basic principle of the approach is to use radiometric tie points in overlapping images and to determine a model for the differences in the grey values. Currently, we use the following model for a grey value (digital number, DN):
\[DN_{jk}=a_{rel,j}\,(a_{abs}\,R_{jk}(\theta_{i},\varphi_{i},\theta_{r},\varphi_{r})+b_{abs})+b_{rel,j} \tag{2}\]

where \(R_{jk}(\theta_{i},\varphi_{i},\theta_{r},\varphi_{r})\) is the bidirectional reflectance factor (BRF) of the object point \(k\) in image \(j\); \(a_{abs}\) and \(b_{abs}\) are the parameters of the empirical line model for the reflectance transformation, and \(a_{rel,j}\) and \(b_{rel,j}\) are relative correction parameters with respect to the reference image. This model includes many simplifications, but it can be extended with physical parameters.
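The relative terms of such a model are estimated from radiometric tie points observed in several overlapping images. As an illustration of the principle only (not the authors' implementation), the sketch below solves per-image multiplicative gains by linear least squares in log space, fixing the first image as the reference; all names are hypothetical.

```python
import numpy as np

def solve_relative_gains(observations, n_images):
    """observations: list of (image_j, point_k, DN). Simplified model
    DN_jk = a_j * R_k; taking logs of DN ratios between images observing
    the same tie point gives a linear system. Gauge: a_0 = 1 (reference)."""
    ties = {}
    for j, k, dn in observations:
        ties.setdefault(k, []).append((j, dn))
    rows, rhs = [], []
    for obs in ties.values():
        j0, dn0 = obs[0]
        for j, dn in obs[1:]:
            # log a_j - log a_j0 = log(DN_jk / DN_j0k)
            row = np.zeros(n_images)
            row[j], row[j0] = 1.0, -1.0
            rows.append(row)
            rhs.append(np.log(dn / dn0))
    # fix the gauge: log a_0 = 0 for the reference image
    row = np.zeros(n_images)
    row[0] = 1.0
    rows.append(row)
    rhs.append(0.0)
    loga, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    return np.exp(loga)

# Two tie points seen by three images; image 1 is 20% brighter, image 2 10% darker.
obs = [(0, 'p1', 100), (1, 'p1', 120), (2, 'p1', 90),
       (0, 'p2', 200), (1, 'p2', 240), (2, 'p2', 180)]
gains = solve_relative_gains(obs, 3)  # approx. [1.0, 1.2, 0.9]
```

Dividing each image by its gain then brings all tie-point grey values to the reference image's level; the full adjustment additionally carries offset and BRDF terms.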
In this study, we investigated how the in situ irradiance measurements (total irradiance with solar and diffuse components) could be utilized in the image block correction. With certain assumptions, it can be shown that differing irradiance levels in two images can be eliminated from the at-sensor radiance measurements by:

\[L^{corr}_{j}(\lambda)_{at\text{-}sensor}=L_{j}(\lambda)_{at\text{-}sensor}\,\big(E_{ref}(\lambda)/E_{j}(\lambda)\big)=L_{j}(\lambda)_{at\text{-}sensor}\;C(j) \tag{3}\]

where \(C(j)\) is a correction factor for image \(j\) that normalizes its irradiance level to that of a reference image \(ref\). If the assumptions are not valid, the accuracy of the correction is reduced. When applying this method to DNs, ignoring the sensor's absolute radiometric calibration parameters (\(c_{1}\), \(c_{2}\)) can introduce some inaccuracy; the dependency of DN on at-sensor radiance is \(DN=c_{1}\,L_{at\text{-}sensor}+c_{2}\).
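In practice, Equation (3) reduces to one multiplicative factor per image. A minimal sketch (illustrative names, not the project's code), assuming one irradiance reading per image and the first image as the reference:

```python
import numpy as np

def irradiance_correction_factors(irradiance, ref_index=0):
    """C(j) = E_ref / E_j for each image j (Equation 3)."""
    irradiance = np.asarray(irradiance, dtype=float)
    return irradiance[ref_index] / irradiance

def apply_correction(images, factors):
    """Scale each image's grey values to the reference irradiance level."""
    return [img * c for img, c in zip(images, factors)]

# Three images taken under fluctuating cloudiness (irradiance in arbitrary units):
C = irradiance_correction_factors([450.0, 300.0, 225.0])  # -> [1.0, 1.5, 2.0]
```

Images taken under thicker cloud (lower \(E_j\)) receive factors above 1, which is exactly the brightening the mosaics in Section 4 require.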
## 3 Experimental Data
### FPI spectrometric camera
A spectrometric camera (Figure 1) based on a piezo-actuated Fabry-Perot interferometer (FPI) with an adjustable air gap has been developed at the VTT Technical Research Centre of Finland ([PERSON] et al., 2011; [PERSON] et al., 2011). The imager rapidly changes the wavelength filter (FPI) pass band and takes successive images at different wavelength settings. These images are taken in less than a second and can later be combined into a spectral data cube. The spectral range of the imager is 400-1000 nm with a full width at half maximum (FWHM) of 10-40 nm. The number and spectral properties of the channels can be selected flexibly for each application. The 2012 prototype of the imager weighs about 600 g and has an image size of 1024 x 648 pixels with an 11 µm pixel size. A GPS receiver and an irradiance sensor can be connected to the imager.
### In situ irradiance measurement
We used two methods for measuring the irradiance: a ground-based measurement and an irradiance sensor on the UAV (Figure 2).
During the flights, an ASD FieldSpec Pro FR spectroradiometer (Analytical Spectral Devices Inc., Boulder, Colorado) was positioned in the area being measured. The spectroradiometer had 180\({}^{\circ}\) cosine-collector irradiance optics viewing the entire hemisphere of sky and measured the spectral irradiance (W/m\({}^{2}\)/nm) over the spectral range of 350-2500 nm. The full width at half maximum (FWHM) of the irradiance spectrum was 3 nm for 350-1000 nm. A GPS receiver was attached to the spectroradiometer and a GPS time was acquired for each spectrum. The spectroradiometer data and the UAV images were synchronized using the GPS times.
For each FPI image and spectral layer, the corresponding wavelength channels at the same FWHM were extracted from the ASD irradiance spectrum taken at the same time as the FPI image. This yields an irradiance reference value for each spectral layer of each image. These irradiance reference values were normalized between 0 and 1 and used to calculate the multiplicative correction factors \(C(j)\) (Equation 3).
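The band extraction described above amounts to weighting the ASD spectrum with each layer's spectral response around its centre wavelength. A sketch assuming a Gaussian passband parameterized by centre and FWHM (an assumption for illustration; the true FPI passband shape differs, and the names are hypothetical):

```python
import numpy as np

def band_irradiance(wavelengths, irradiance, center, fwhm):
    """Response-weighted average irradiance for one spectral layer."""
    sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))  # FWHM -> Gaussian sigma
    w = np.exp(-0.5 * ((wavelengths - center) / sigma) ** 2)
    return float(np.sum(w * irradiance) / np.sum(w))

wl = np.arange(350.0, 1001.0)   # ASD sampling grid, nm
flat = np.ones_like(wl)         # spectrally flat irradiance
ref = band_irradiance(wl, flat, 535.5, 24.87)  # layer-7 settings -> 1.0
```

Repeating this for every layer of every image, then dividing by the reference image's values, yields the per-layer factors \(C(j)\) of Equation 3.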
Figure 1: The Fabry-Perot interferometer spectral camera with GPS receiver, battery and irradiance sensors.
The irradiance sensor installed in the UAV is based on the Intersil ISL29004 photodetector (Intersil, 2011). The structure of the downwelling irradiance sensor is described in Figure 3. The photodetector dimensions are 3 mm x 3 mm x 1.0 mm. The signal dynamic range is 16 bits. The spectral sensitivity range is 400-1000 nm (D2 output used, see Figure 4). The acceptance angle of the sensor was increased by placing an opal glass diffuser in front of it (at a distance of about 3 mm). The sensor was not calibrated to measure in W/m\({}^{2}\), so only relative, broadband irradiance intensity values were obtained, providing a single broadband correction factor for all layers of an image.
### Campaign with UAV
An empirical campaign was carried out at the MTT Agrifood Research Finland (MTT) agricultural test site in Vihti (N 60\({}^{\circ}\) 25' 21", E 24\({}^{\circ}\) 22' 28"). The test area, a 76 m x 385 m (2.9 ha) patch of land, has a rather flat topography with a terrain height variation of 11 m and maximum slopes of 3.5 degrees. There were altogether 10 targeted XYZ ground control points (GCPs) with a relative accuracy of 2 cm and 10 natural XYZ GCPs with a relative accuracy of 20 cm.

An image block was collected with the FPI spectral camera using a single-rotor helicopter UAV with a 5 kg payload and an autopilot to enable autonomous flight (Figure 2). The campaign was carried out between 10:39 and 10:50 in the morning, local time (UTC+3). During the campaign, the illumination conditions were poor, with fluctuating levels of cloudiness. The solar elevation and azimuth angles were 43\({}^{\circ}\) and 125\({}^{\circ}\), respectively. The flight was carried out at a flying altitude of 140 m, producing a GSD of 14 cm; the flying speed was 3.5 m/s. The block used in this investigation consisted of five image strips and a total of 80 images; the forward and side overlaps were 78% and 67%, respectively (Figure 5).
The FPI camera was equipped with a 500-900 nm filter. The camera was operated in free-running mode and took spectral data cubes at the given intervals; the integration time was 5 ms. The usual radiometric calibration preprocessing steps were applied to the data (Section 3.4). There were a total of 42 bands in the original raw data, from which it was possible to generate 36 spectral smile-corrected spectral layers. However, the images lacked data in some of the layers, so ultimately there were 30 corrected spectral layers in total.
Table 1: Parameters of the UAV flight on 2.7.2012. Layers shown in italics were discarded; layers 7, 16 and 29 are in bold.

Central wavelength (nm): _507.60, 508.80, 509.50, 511.80, 517.90, 526.60_, **7: 535.50**, 544.20, 553.30, 562.50, 573.10, 582.70, 590.60, 595.00, **16: 66.20**, 620.00, 634.40, 648.00, 662.50, 678.30, 693.80, 707.00, 716.80, 728.20, 742.90, 757.00, 772.10, **29: 78.78.50**, 801.60, 815.70, 830.30, 844.40, 859.00, 873.90, 887.30

FWHM (nm): _15.19, 16.73, 146.99, 196.66, 23.81, 25.53_, **7: 24.87**, 22.65, 239.00, 22.71, 215.40, 18.32, 41.14, 22.11, **16: 16.43**, 414.6, 410.5, 35.33, 40.39, _36.48, 38.32, 33.46_, 29.88, 32.73, 32.81, 27.58, 31.83, **29: 32.12**, 25.87, 28.23, 29.53, 26.54, 28.32, 28.42, 26.41
Figure 4: Relative spectral response plot of the Intersil ISL29004 photodetector. The normalized D2 output was used in the downwelling irradiance sensor module.
Figure 5: Image block and flight lines.
Figure 3: The optical construction of the sensor module. An opal glass diffuser was used in the module to increase the acceptance angle of the sensor.
Figure 2: Left, the cosine collector of the ASD FieldSpec pro was placed on a tripod about 1.5 meters from the ground. Right, the FPI spectral camera mounted under a helicopter UAV and the irradiance sensor mounted to the tail boom.
### Processing of FPI imager data
We have developed a processing chain for FPI imagery:
1. System corrections of the imagery using the laboratory calibration, spectral smile correction and dark signal correction. These correction values and algorithms are provided by VTT ([PERSON] et al., 2011; [PERSON], 2013).
2. Format transformation from float format to unsigned integer format.
3. Matching of layers of individual images to eliminate layer mismatch ([PERSON] et al., 2013a).
4. Determination of image orientations of reference layers using a self-calibrating bundle block adjustment ([PERSON] and [PERSON], 2012).
5. Optionally also a DSM can be calculated ([PERSON] and [PERSON], 2012).
6. Determination of the radiometric imaging model to compensate for radiometric disturbances in the images, as well as the reflectance transformation. A radiometric block adjustment method is being developed to determine the optimal parameters by utilizing overlapping images ([PERSON] et al., 2012, 2013a).
7. Calculation of output georeferenced reflectance products, such as 3D point clouds and spectrometric image mosaics ([PERSON] et al., 2012).
We presented the results of processing steps 1-5 of the Vihti data in our recent publication ([PERSON] et al., 2013a). Our emphasis in this study is to carry out radiometric processing of orthophoto mosaics. Based on the previous assessment, the expected planimetric accuracy is 0.4 m.
In this investigation, our major emphasis was on improving the methods for radiometric processing in step 6. Multiplicative correction factors were calculated using the irradiance measurements carried out on the UAV as well as on the ground (Equation 3). Furthermore, we used the radiometric block adjustment with two parameter sets: relative offset corrections with a BRDF model (relB, BRDF) and relative multiplicative corrections only (relA).
## 4 Results

Image mosaics calculated for each correction case are shown in Figure 8. The block adjustment appeared to provide the best internal uniformity, but some brightening appeared towards the strips collected in the brightest conditions (especially with the BRDF correction, which was not ideal for data collected in variably cloudy conditions). The correction factors based on irradiance measurements improved the uniformity greatly in comparison to the uncorrected case, but the internal uniformity was not as good as in the block-adjusted data (brightness differences remain between individual images). On the other hand, the mosaics with irradiance-based correction appeared to provide better absolute radiometric quality (no drift) than the block adjustment.
The results of the homogeneity evaluation are shown in Figure 9. The average coefficient of variation was 0.14-0.18 if no radiometric correction was performed. The correction based on the irradiance measurement on the UAV provided homogeneity at the level of 0.10-0.12, and the correction using the ground irradiance measurement provided even better results, with a variation coefficient of 0.065-0.09. Possible reasons for the poorer performance of the UAV method could be installation issues and/or the use of broadband instead of spectral irradiance. The radiometric block adjustment with a multiplicative correction term provided the best homogeneity, at the level of 0.05-0.075; the relative offset and BRDF corrections did not fit the data. These homogeneity results are poorer than what we obtained from the analysis of a single strip (0.02-0.04) ([PERSON] et al., 2013a), but especially the corrections with the ground irradiance measurement and the relative adjustment are likely to be accurate enough for the agricultural application. It should be noted that, in addition to the differences caused by illumination, observing the object from different directions can have some influence on the results.
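The homogeneity measure used here — the coefficient of variation of a tie point's grey values across the images observing it, averaged over all tie points — can be written compactly (an illustrative sketch with hypothetical names):

```python
import numpy as np

def avg_coefficient_of_variation(tie_point_values):
    """tie_point_values: one sequence per radiometric tie point, holding its
    grey value in every overlapping image. Returns the mean of std/mean."""
    cvs = [np.std(v) / np.mean(v) for v in map(np.asarray, tie_point_values)]
    return float(np.mean(cvs))

# One perfectly uniform point and one with a 10% spread across three images:
cv = avg_coefficient_of_variation([[100.0, 100.0, 100.0], [90.0, 100.0, 110.0]])
```

A value of 0.05, for instance, means that on average the same ground point varies by about 5% of its mean grey value between overlapping images.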
Figure 8: Image mosaics with different corrections. From top left to bottom: no correction, rel UAV, rel ground, BA: relB, BRDF; BA: relA (Section 3.4).
Figure 7: Multiplicative correction factors of different cases (see Section 3.4) for layers a) 7 and b) 29. c) Relative differences of different correction factors to correction factor based on ground irradiance measurement in layer 29 (in %).
Figure 9: Average coefficients of variation (homogeneity) at radiometric tie points for non-corrected data (no corr) and the different correction cases (Section 3.4).
## 5 Discussion and Conclusions
We presented a new method, based on in situ irradiance measurement, for the radiometric correction of UAV imagery collected in variable imaging conditions.
We compared the new method to our previously developed radiometric block adjustment method, which is based on image information only. Our conclusion was that the new method provided better absolute consistency than the method based only on image information. On the other hand, the block adjustment method provided the highest internal consistency. The radiometric correction based on spectral irradiance measured on the ground provided better results than the correction based on broadband irradiance measured on the UAV. However, it is likely that in very complex conditions the illumination changes would be measured more accurately on the UAV. Ideally, spectral irradiance is measured both on the ground and on the UAV. The results indicated that the best outcome could be obtained with a combined adjustment approach integrating in situ irradiance measurements and image measurements.
Our theoretical considerations showed that the new approach is rigorous under certain circumstances. The typical measurement setup in UAV operation offers many possibilities for advanced radiometric correction. In many cases it is easy to integrate irradiance sensors into the UAV. Furthermore, it is often also possible to place ground reference targets and measurement devices in the target area, because the flights are operated locally. A more comprehensive approach will enable better characterization of reflection. In all these cases the solution should depend on the application; for many applications, simple field operation and low cost are the critical factors.
Our results were very promising, indicating that high accuracy UAV remote sensing, with stereoscopic and spectrometric capabilities, is possible also in diverse conditions. This makes these methods suitable for many environmental measurement and monitoring applications. In the future we plan to continue the development of radiometric correction methods, and also evaluate the requirements of different applications.
## 6 Acknowledgements
The research carried out in this study was partially funded by the Academy of Finland (Project No. 134181). We are grateful to our colleagues at the Finnish Geodetic Institute, [PERSON], [PERSON], [PERSON] and [PERSON], for their support and for assisting us in the field campaigns in summer 2012.
## References
* [PERSON] et al. (2008) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], 2008. Atmospheric correction, reflectance calibration and BRDF correction for ADS40 image data. International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, 37 (Part B7), (on CD-ROM).
* [PERSON] et al. (2000) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], 2000. Use of a wide angle CCD line camera for BRDF measurements. Infrared Physics & Technology 41: 11-19.
* [PERSON] et al. (2012) [PERSON], [PERSON], [PERSON], J., [PERSON], J., [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], H., [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], and [PERSON], [PERSON], 2012. Hyperspectral reflectance signatures and point clouds for precision agriculture by light weight UAV imaging system, ISPRS Ann. Photogramm. Remote Sens. Spatial Inf. Sci., 1-7, 353-358,
* [PERSON] et al. (2013a) [PERSON], [PERSON], [PERSON], H., [PERSON], J., [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], 2013a. Processing and assessment of spectrometric, stereoscopic imagery collected by a light weight UAV spectral camera for precision agriculture. Submitted.
* Case studies in water quality mapping. ISPRS Ann. Photogramm. Remote Sens. Spatial Inf. Sci., Vol. XL-1/W1, ISPRS Hannover Workshop 2013, 21-24 May 2013, Hannover, Germany.
* Intersil (2011) Intersil, 2011. ISL29004 datasheet, http://www.intersil.com/content/dam/Intersil/documents/info2/fn6221.pdf (accessed 18.4.2013).
* [PERSON] et al. (2011) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], 2011. An approach to the radiometric aerotriangulation of photogrammetric images. ISPRS Journal of Photogrammetry and Remote Sensing 66 (6), 883-893.
* [PERSON] et al. (2011) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], 2011. Unmanned aerial vehicle (UAV) operated megapixel spectral camera, Proc. SPIE 8186B.
* [PERSON] (2013) [PERSON], 2013. Instructions for FP_HC_Viewer.exe program. VTT Technical Research Centre of Finland, 10 p., 11.1.2013.
* [PERSON] and [PERSON] (2002) [PERSON], [PERSON], 2002. Geo-atmospheric processing of airborne imaging spectrometry data. Part 2: atmospheric/topographic correction. International Journal of Remote Sensing, 23, 2631-2649.
* [PERSON] and [PERSON] (2012) [PERSON], [PERSON] [PERSON], 2012. Point Cloud Generation from Aerial Image Data Acquired by a Quadrocopter Type Micro Unmanned Aerial Vehicle and a Digital Still Camera. _Sensors_ 2012, 12, 453-480.
* [PERSON] et al. (2011) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], K., [PERSON], [PERSON], 2011. Unmanned Aerial Vehicle (UAV) operated spectral camera system for forest and agriculture applications, Proc. SPIE 8174.
* [PERSON] et al. (2006) [PERSON], [PERSON], [PERSON] [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], 2006. Reflectance quantities in optical remote sensing--definitions and case studies, Remote Sensing of Environment 103 (1), 27-42.
* [PERSON] et al. (2008) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON] 2008. The Improved Dual-view Field Goniometer System FIGOS. _Sensors_ 8, no. 8: 5120-5140.
* Academic Press Inc., San Diego, CA, USA.
* [PERSON] et al. (2009) [PERSON] [PERSON], [PERSON] [PERSON], [PERSON] [PERSON], [PERSON] [PERSON], 2009. Polarised Multiangular Reflectance Measurements Using the Finnish Geodetic Institute Field Goniospectrometer. _Sensors_ 2009; 9(5): 3891-3907.
T. Hakala, E. Honkavaara, H. Saari, J. Mäkynen, J. Kaivosoja, L. Pesonen, I. Pölönen: Spectral Imaging from UAVs under Varying Illumination Conditions. 2013. https://doi.org/10.5194/isprsarchives-xl-1-w2-189-2013. CC-BY.
# Monitoring and Assessment of tenure instruments for land administration and management
[PERSON]\({}^{1}\)
Corresponding author
[PERSON]\({}^{2}\)
[PERSON]\({}^{3}\)
###### Abstract
The Unified Map and Data Sharing (UMD-Sharing) Project is a set of tools and protocols for the preparation of a unified map for the various tenure instruments. It aims to provide NGAs and LGUs with guidelines on how to effectively utilize tenure data from various NGA and LGU sources. With data sources of varying scales, in both spatial and textual dimensions, the project explores these dimensions to create the Unified Map's spatial data model. A map is regarded as a basic tool in land administration and management (LAM), and a unified map that harmonizes these disparate tenure instruments could mitigate conflicts. To implement the Unified Map and to show how data sharing is handled among the different organizations, including the LGUs, the project developed a QGIS plugin interface that can utilize a remote database repository.
Tenure Instruments, GIS, LIS, Land Administration and Management, Geospatial Solutions +
Footnote †: Corresponding author
## 1 Introduction
### Background
Goal 15 of the Sustainable Development Goals (SDG) pertains to life on land. Specifically, it aims to protect, restore, and promote sustainable use of terrestrial ecosystems, sustainably manage forests, reverse land degradation, and stop biodiversity loss. All of these challenges are land related. We must acknowledge that human activities on land have a huge impact on the landscape and the environment, and it is therefore necessary to regulate how land is used. An important step in achieving these goals is addressing the challenge of land administration in a country, specifically land tenure.
Land tenure is one of the basic functions of land administration, invented by society to regulate the use of land by allocating the necessary rights, responsibilities and restrictions to certain individuals or groups. In the Philippines, while challenges from population increase, climate change, and natural and man-made disasters are of primary consideration, the insecurity of land tenure and property rights is cited as the triggering, if not the primary, cause of these land-related challenges.
The Philippines has an overall strong framework on land tenure rights, as indicated in the 1987 Constitution. There have been four major tenure-related reforms since the 1980s - CARP/ER, IPRA, UDHA and the Fisheries Code - each seeking to create a more secure tenure system and each implemented in its own independent capacity. However, these systems have also led to conflicting and fragmented policies, creating overlaps in the mandates and functions of various land administration units (LAUs) - national government agencies (NGAs) and local government units (LGUs). This in turn leads to overlaps in the land boundaries that define land tenure. For instance, some instruments are issued by NGAs and others by LGUs, creating a complicated delineation of tenure. Furthermore, tenure rights can be extended to other users through instruments such as rentals, leases, special permits, and contracts. Each land administration unit also maintains its own land information system (LIS), usually remote and independent from the others. In short, there are many tenurial instruments and systems that require review and assessment. This research aims to codify and protect this unconsolidated information on tenure instruments.
### Research Objectives and Significance
This research aims to assess and propose a GIS solution to the problem of overlapping tenure instruments by addressing key issues in mapping and in GIS data creation and storage. Furthermore, consolidating these data into a harmonized unified map that follows a single GIS standard would reduce many issues revolving around the integrity of the dataset itself.
In addition to this, the research also aims to provide a solid mechanism for data sharing among the producers and consumers of data from various NGAs and LGUs. The Unified Map Tool integrates these two objectives in its design and development.
Upon completion, the UMD-Sharing Project will provide NGAs and LGUs with the capability to:
1. Effectively use maps and information for various land administration and land management functions.
2. Adopt existing standards from well-defined protocols and guidelines.
3. Integrate maps and information from various data organizations.
4. Create maps with ease using tools that follow the domain and data model of the Unified Map system.
5. Enable interoperability and data sharing among NGAs and LGUs.
## 2 Review of Related Literature
### Maps in Land Administration and Management
Maps are basic tools for efficient and effective land administration and management. Recent government initiatives allow data sharing within and among NGAs and LGUs, empowering communities with more facts used for critical decisions. To realize such initiatives, a few fundamental technical considerations must be met; otherwise such realizations become a challenge. For the land administration and management (LAM) sector, particularly the tenure domain, these challenges exist and are the primary subject of this research.
A related study, Review of Land-related Laws and Policies on Tenure (2019), mentions two of the technical problems in LAM. The first pertains to the fragmented nature of the land administration function in the country, managed by different land agencies and their executive departments. Although this creates a check and balance among NGAs, it also creates confusion, especially where tenurial instruments and functions overlap. The second problem, related to the first, is the lack of policy on data sharing among these executive departments. This exacerbates the former problem through the creation of independent databases that are not compatible with each other. Although the culture of creating copies of digital information has long been practiced in the country, the burden of cleaning and preprocessing is thrown onto the consumers, leading to ineffective utilization of the data.
### 2.2 Geographic and Land Information Systems
The design and creation of a Unified Map for land administration purposes is one of the milestones prioritized by the UMD-Sharing project. This milestone answers one of the many challenges in Philippine mapping and land information system implementations. As mentioned above, there is a need for maps and spatial data for effective LAM. However, due to the overlapping and unconsolidated preparation of these data by the agencies mandated to provide them, their use for further analysis and consumption, particularly by LGUs, is never an easy task. Devising a standard for a Unified Map that will consolidate and prepare maps from various agencies together with DENR LAMS is a big leap in supporting our LGUs' LAM functions.
To achieve this, the UMD-Sharing project needs to revisit and assess the existing LIS of the country, particularly those of the DENR, the Land Registration Authority (LRA), the Department of Agrarian Reform (DAR), and the National Commission on Indigenous Peoples (NCIP). Each of these government institutions has established its own LIS independently of the others. Existing LAM standards such as ISO 19152, the Land Administration Domain Model (LADM), and other standards by ISO/TC 211 (Technical Committee for Geographic Information/Geomatics) are established approaches to the design and implementation of an LIS. These standards are the guiding principles in the implementation of the Unified Map.
## 3 Methodology
### 3.1 Preliminary Consultation and Assessment
Most data requests and inventory activities are conducted through the efforts of our DENR focal personnel and their GIS operators in each respective region. A requirement workshop was conducted with the stakeholders to gather expectations and needs.
It was during the data inventory activity that the researchers realized that not all datasets were GIS-ready. Some data were missing and/or issued by a different issuing agency, and some missing data were unavailable simply because the region does not have that kind of tenure instrument issued by the responsible agency. Although all data collected were digital, a quality assessment, in terms of both geometry and attributes, is needed. Map projection was a common problem. Another issue was that the schema or attribute table was missing or was not originally specified in the dataset; in this case, the dataset represents only the geometry of the features and not the information they convey.
During the requirement workshop, many of these issues were raised by the agencies and LGUs involved. Data models for these tenure instruments will therefore be a priority in the project, since the lack of dataset specifications is the cause of many of these challenges. Combined with capacity building and training, they can be a viable solution to these problems.
### 3.2 The Unified Map Development
As stated in the research objectives, the creation of a Unified Map allows integrating land information from several NGA and LGU sources into a single GIS map instance. To achieve this, all spatial data considered must be prepared according to the standard and specification defined by the Project.
By using the Unified Map, each submitted or uploaded dataset becomes part of a spatial database and serves as an updating and maintenance module for the Unified Map Database Repository. A GIS standard is a fundamental requirement for the Unified Map because the source data come from several independent NGAs and LGUs. Adopting this standard allows data layers (e.g. tenure instruments) to be integrated into a consistent body of land information managed and maintained by the Unified Map. The Unified Map GIS Standard and Specification follows the conceptual framework of the project, as shown in the figure below.
#### 3.2.1 The Unified Map GIS Standard and Specification
The primary objective of the research is the creation of a unified map that follows a GIS standard. Two important assumptions were established for the Unified Map GIS Standard and Specification. The first assumption, and the starting point of the Unified Map activities, is the availability of existing tenure instrument layers to transform to the target specification. The minimum requirement for the "Source" submission is the ESRI shapefile format; conversion to this format is the responsibility of the NGAs or LGUs producing the tenure instrument.
With the data sharing protocol among NGAs and LGUs handled within the Unified Map Tool, it is all the more necessary for the "Source" data to comply with this standard. In addition, future connections to existing spatial data infrastructures (SDIs), such as the DENR One Control Map or the NAMRIA Geoportal, require the source data to follow the Unified Map Standard and Specification.
Figure 1: Conceptual framework of the research
The second assumption is the description of our "Target" output, which is the Unified Map Specification itself. This includes technical descriptions such as coordinate reference systems and database schema or attributes, as well as the symbology, labels, and other mapping components. A metadata format was also included to support and describe the data created following the unified map data specification.
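As a concrete illustration of the "Source"-to-"Target" conformance check described above, the sketch below validates a tenure-instrument record's attribute table against a target schema. The field names and types here are illustrative assumptions modeled on the Appendix, not the project's actual specification:

```python
# Minimal schema-conformance check for a tenure-instrument layer.
# TARGET_SCHEMA is an illustrative stand-in for the Unified Map
# specification, not the official field list.

TARGET_SCHEMA = {
    "CADT_NO": str,     # tenure instrument number
    "DATE_APPRO": str,  # approval date, "YYYY-MM-DD"
    "AREA": float,      # area of the parcel
    "HOLDER": str,      # holder of the instrument
}

def validate_record(record: dict) -> list:
    """Return a list of problems; an empty list means the record conforms."""
    problems = []
    for field, expected_type in TARGET_SCHEMA.items():
        if field not in record:
            problems.append(f"missing field: {field}")
        elif not isinstance(record[field], expected_type):
            problems.append(f"wrong type for {field}: "
                            f"expected {expected_type.__name__}")
    return problems

good = {"CADT_NO": "R10-CADT-001", "DATE_APPRO": "2015-06-01",
        "AREA": 1250.5, "HOLDER": "Sample Community"}
bad = {"CADT_NO": "R10-CADT-002", "AREA": "not-a-number"}

assert validate_record(good) == []
assert "missing field: DATE_APPRO" in validate_record(bad)
```

In practice such a check would run when a "Source" shapefile is submitted, so that non-conforming layers are rejected before they enter the repository.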
#### 3.2.2 The Data Sharing Protocol
The second objective of the project is to improve access to relevant land information by making the Unified Map accessible to NGAs and LGUs for land administration and management purposes. This was implemented by creating a plugin for the popular GIS software Quantum GIS (QGIS), an open-source GIS package used by our NGAs and LGUs.
The main design requirement adopted in the development and implementation of the Unified Map is based on the open GIS principle. Open GIS is the full integration of geospatial data into mainstream information technology. This practically means that users have the means to access and share data through GIS software without having to worry about format conversions or proprietary data types.
The design considerations for the Unified Map are as follows:
1. The unified map should be capable of handling spatial and non-spatial data.
2. The unified map should be implemented using robust and mature open source technology.
3. The unified map should be interoperable with standard interfaces.
4. The unified map should be extensible.
5. The unified map should be secure.
These considerations are the primary reason for choosing QGIS as the main GIS tool and its plugin system for extensibility. In addition to QGIS, PostgreSQL and PostGIS were incorporated into the system to enable spatial data operations and to secure the data. Both QGIS and PostGIS are Open Geospatial Consortium (OGC) compliant technologies.
## 4 Results and Discussion
The Unified Map and Data Sharing Project (or simply the Unified Map) is a set of tools and protocols for the preparation of various tenure instruments. It aims to provide national government agencies' (NGA) regional units and local government units (LGU) with procedures on how to effectively access and utilize tenure instrument data. Covering pilot sites in three regions - Region 10 (Cagayan de Oro), Region 11 (Davao) and Region 13 (Butuan) - it implements the two objectives discussed in the previous section: to create a standard GIS specification for the tenure instruments and to provide a prototype access mechanism for data sharing.
The Unified Map Architecture is shown in the figure below:
This architecture enables the Unified Map and Data Sharing Tool to integrate a data delivery system for the project. The tool comprises a client and a server component:
1. the client is QGIS with the Unified Map Plugin, comprising a Loading Module and a Catalogue Module;
2. the server is the PostGIS database configuration.
The plugin client can be installed in QGIS 3.22+ for all the identified user roles. The Unified Map Database Repository is a PostgreSQL database with the PostGIS 3.x extension, installed in an Amazon Web Services (AWS) Relational Database Service (RDS) instance.
The incorporation of PostGIS in the QGIS plugin implementation allows the Unified Map layers and their metadata to be stored in a designated database. Furthermore, being OGC-compliant allows the Unified Map to interoperate with any subsequent GIS development utilizing the Unified Map data layers, such as the DENR One Control Map or the NAMRIA Geoportal, which also follow the OGC standards.
The development of the Unified Map tool provides important user functionalities for managing tenure instruments. The following discussion shows some of the important software components and user interfaces in the Unified Map Tool. Functionalities to upload and share data are incorporated into the tool.
### The GIS in Unified Map
As stated in the conceptual framework, the creation of a Unified Map allows land information from several NGA and LGU sources to be integrated into a single GIS map instance. To achieve this, all spatial data considered in the Project must be prepared according to the standard and specification recommended by the Unified Map. See the **Appendix** for a sample Unified Map Specification.
By using the Unified Map Tool, each submitted or uploaded dataset becomes part of a spatial database and serves as an updating and maintenance module for the Unified Map Repository. A GIS standard is a fundamental requirement for the Unified Map Tool because the source data come from several independent NGAs and LGUs. Adopting this standard allows data layers (e.g. tenure instruments) to be integrated into a consistent body of land information managed and maintained by the Unified Map Tool.
The figure below shows the Welcome and Login page of the Unified Map. Licensors and regular users of tenure instrument data are given separate user credentials. Minimal user control is necessary in this application to monitor the uploading and downloading of data.
Figure 2: Unified Map Components
Figure 3: Welcome and Login Page
### The Data Sharing in Unified Map
In its current version, the Unified Map has two data sharing modules, which facilitate the uploading and downloading of tenure instruments in its data sharing protocol:
1. Loading Module
2. Catalogue Module
#### 4.2.1 Loading Module
The Loading Module is the main interface for uploading tenure instruments. Only the licensor (i.e., the owner and uploader of tenure instruments) can upload data to the Unified Map Repository using this module. In this interface, the licensor selects an ESRI shapefile from its local repository to upload and builds its metadata entry. Metadata is strictly mandatory for licensors uploading shapefiles to the Unified Map Repository: data discovery in the Catalogue Module relies on the metadata entries, so incorrect metadata can make an uploaded tenure instrument undiscoverable.
Metadata is necessary in a data sharing system. Metadata records the who, what, when, where, how, and why of a data resource. Existing metadata standards are documented at https://www.fgdc.gov/metadata. Metadata for the various tenure instruments was generated and stored in standard file formats. Unified Map metadata are organized in JavaScript Object Notation (JSON) format. JSON was selected because it follows a strong object-oriented pattern for structuring data content, and both QGIS Python scripts and the PostgreSQL database support JSON files.
This metadata is incorporated in the Unified Map Loading Module of the plugin and helps bind the different configurations of the data layers. Figure 5 below shows the metadata builder in the Unified Map.
Figure 6 below shows a sample JSON file featuring a metadata entry in the Unified Map Repository.
Figure 4: Loading Module
Figure 5: Metadata Builder
The following are the essential metadata fields for the Unified Map data layers according to the JSON file.
1. Title - preferred naming convention \"Location, Data Type (YYYY-MM-DD)\".
2. Abstract - Provides additional information and enables users to better assess the data resource's fitness for use.
3. Publication Date - Date edited/updated (YYYY-MM-DD)
4. Keywords - Short, specific keywords, such as Tenure Type (e.g. MA)
5. Category - standard data group or category
6. Group - NGA or LGU group
7. Regions - Keyword identifies a location (Project site name, Province, City/Municipality)
8. GCS - the geographic coordinate system information of the data resource
9. Projection - Map Projection of the data resource
10. Status - the completion status of the data resource
11. Purpose - Summary of the intentions with which the resource(s) was developed
12. Data Quality Statement - General explanation of the data producer's knowledge about the lineage of a dataset. Must include stages of processing, software used and accuracy.
13. Restrictions - Limitation(s) placed upon the access or use of the data.
14. Attributes - list of attributes including descriptions and labels of the data layer.
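The essential fields above can be assembled into a JSON metadata entry. The sketch below mirrors that structure; the helper name, the JSON key spellings, and all values are illustrative, not taken from the plugin's source:

```python
import json

def build_metadata(title, abstract, publication_date, keywords, **extra):
    """Assemble a Unified-Map-style metadata entry as a JSON string.
    Field names follow the list above; key spellings are assumptions."""
    entry = {
        "title": title,                      # "Location, Data Type (YYYY-MM-DD)"
        "abstract": abstract,                # fitness-for-use description
        "publicationDate": publication_date, # YYYY-MM-DD
        "keywords": keywords,                # e.g. tenure type such as "MA"
    }
    entry.update(extra)                      # gcs, projection, status, ...
    return json.dumps(entry, indent=2)

doc = build_metadata(
    title="Butuan, CADT (2023-11-30)",
    abstract="CADT boundaries for the pilot site.",
    publication_date="2023-11-30",
    keywords=["CADT", "Boundaries"],
    gcs="WGS 84",
    projection="UTM Zone 51N",
    status="complete",
)
parsed = json.loads(doc)
assert parsed["keywords"] == ["CADT", "Boundaries"]
```

Because the entry round-trips through `json.dumps`/`json.loads`, the same structure can be stored in a PostgreSQL JSON column and consumed by QGIS Python scripts, as the text above notes.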
#### 4.2.2 Catalogue Module
The second module is the Catalogue Module, which allows discovery of and access to the Unified Map Repository. This is patterned on the publish-find-bind paradigm used in SDIs and geoportals. Both the licensee (i.e., regular user) and the licensor (i.e., uploader) can use this module.
Figure 7 below shows the user interface of the Catalogue Module and its data discovery interface. The metadata of each tenure instrument is also viewable in the Catalogue Module.
The Unified Map Tool integrates both the unified map transformation and the delivery system required for data sharing. User credentials with secure database access ensure the integrity of the unified map layers in the repository. Licensors and licensees are provided basic information on the use of the data through the metadata entries, which are linked and stored for each layer. Ownership of a tenure instrument is retained by the licensor, who has sole responsibility for the layer's maintenance and updating.
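The "find" step of the publish-find-bind pattern amounts to filtering the repository's metadata entries. A minimal sketch, assuming in-memory metadata dictionaries with `keywords` and `regions` fields (the actual module queries the PostGIS repository):

```python
def find_layers(catalogue, keyword=None, region=None):
    """Return titles of catalogue entries matching the given filters.
    `catalogue` is a list of metadata dicts; an illustrative stand-in
    for the Catalogue Module's repository query."""
    results = []
    for entry in catalogue:
        if keyword and keyword not in entry.get("keywords", []):
            continue
        if region and region not in entry.get("regions", []):
            continue
        results.append(entry["title"])
    return results

catalogue = [
    {"title": "Davao CADT", "keywords": ["CADT"], "regions": ["Region 11"]},
    {"title": "Butuan MA", "keywords": ["MA"], "regions": ["Region 13"]},
]
assert find_layers(catalogue, keyword="CADT") == ["Davao CADT"]
assert find_layers(catalogue, region="Region 13") == ["Butuan MA"]
```

This is also why the Loading Module insists on complete metadata: an entry with missing `keywords` or `regions` simply never matches a search and becomes undiscoverable.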
## 5 Summary and Conclusion
Land administration and management can benefit from geographic and land information systems. Adopting best practices in mapping and information technology will further improve LAM and answer the challenges of land tenure. International mapping and GIS standards are the way forward to make data shareable.
However, in any GIS and land information system (LIS) project, the people component is crucial for sustainability and proper operation. Users should be knowledgeable about the features and workflow of the system. New software systems are usually met with resistance from users; this is normal, as new products and software need time to integrate into existing workflows.
Furthermore, a proper organizational and institutional setup should be established to sustain and maintain the use of this technology; without an established mechanism - starting with an LIS office, its resources, and its people - any LIS remains abstract. Although not directly connected to the Unified Map project, this operational solution will benefit the NGAs and LGUs in the long term.
## References
* Center for Environmental Law and Policy Advocacy, Inc., 2019. Towards Sustainable Development: Integrated Approach for Land Governance. A Study for the GIZ Responsible Land Governance in Mindanao Program.
* FAO, 2022. Voluntary Guidelines on the Responsible Governance of Tenure of Land, Fisheries and Forests in the Context of National Food Security. First revision. Rome. https://doi.org/10.4060/i2801e
* [PERSON] et al., 2015. The land administration domain model. _Land Use Policy_.
* [PERSON], [PERSON], 2009. Data Flow Diagram. In: Modeling and Analysis of Enterprise and Information Systems. Springer, Berlin, Heidelberg.
* [PERSON] and [PERSON], 2014. Review of Selected Land Law and Governance of Tenure in the Philippines. A Study for the GIZ Responsible Land Governance in Mindanao Program.
Figure 6: Metadata in JSON format
Figure 7: Catalogue and Data Discovery in Unified Map
The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Volume XLVIII-4/W8-2023
Philippine Geomatics Symposium 2023, 6-7 December 2023, Diliman, Quezon City, Philippines
## Appendix

Sample Unified Map Specification

**Tenure Instrument:** Certificate of Ancestral Domain Title (CADT)
**Involved Document:** Republic Act No. 8371
**Data Format:** vector
**Abstract:** Certificates of Ancestral Domain Title; these are titles that were issued by the NCIP.
**Keywords:** Boundaries
**Unified Map Schema (recoverable fields):**

| Field Name | Type | Field Description |
|---|---|---|
| CADT_NO | Text | Refers to the CADT number |
| DATE_APPRO | Text | Refers to the date the CADT was approved |
| AREA | Double | Refers to the area of each CADT |
| HOLDER | Text | Refers to the holder of each CADT |
| POPULATION | Text | Refers to the population within the CADT |
O. T. Macapinlac, J. Templonuevo, R. J. Salazar: Monitoring and Assessment of Tenure Instruments for Land Administration and Management. 2024. https://doi.org/10.5194/isprs-archives-xlviii-4-w8-2023-351-2024. CC-BY.
|
Automatic co-registration of aerial imagery and untextured model data utilizing average shading gradients
[PERSON]\({}^{\text{a}}\)
[PERSON]\({}^{\text{b}}\)
[PERSON]\({}^{\text{a,b}}\)
\({}^{\text{a}}\)Fraunhofer IOSB
Video Exploitation Systems, Karlsruhe, Germany - {sylvia.schmitz, [EMAIL_ADDRESS] \({}^{\text{b}}\)Institute of Photogrammetry and Remote Sensing, Karlsruhe Institute of Technology, Karlsruhe, Germany - {martin.weinmann, [EMAIL_ADDRESS]
###### Abstract
The comparison of current image data with existing 3D model data of a scene provides an efficient method to keep models up to date. In order to transfer information between 2D and 3D data, a preliminary co-registration is necessary. In this paper, we present a concept to automatically co-register aerial imagery and untextured 3D model data. To refine a given initial camera pose, our algorithm computes dense correspondence fields using SIFT flow between gradient representations of the model and camera image, from which 2D-3D correspondences are obtained. These correspondences are then used in an iterative optimization scheme to refine the initial camera pose by minimizing the reprojection error. Since it is assumed that the model does not contain texture information, our algorithm is built upon an existing method based on Average Shading Gradients (ASG) to generate gradient images from raw geometry information only. We apply our algorithm to the co-registration of aerial photographs to an untextured, noisy mesh model. We have investigated different magnitudes of input error and show that the proposed approach can reduce the final reprojection error to a minimum of \(1.27\pm 0.54\) pixels, which is less than \(10\,\%\) of its initial value. Furthermore, our evaluation shows that our approach outperforms the accuracy of a standard Iterative Closest Point (ICP) implementation.
Co-registration, Pose estimation, 2D-3D Correspondence, Average Shading Gradients, Iterative Closest Point +
Footnote †: This contribution has been peer-reviewed.
https://doi.org/10.5194/isprs-archives-XLII-2-W13-581-2019 (c) Authors 2019. CC BY 4.0 License.
## 1 Introduction
Due to technological advancements in the field of sensor technology and algorithm development for the acquisition, generation and processing of 3D data, the availability and use of 3D models has risen significantly. The advantages of 3D visualization are applied successfully in applications such as urban navigation and planning, ecological development or security surveillance. City administrations, for example, already maintain city models which are used advantageously in spatial and urban planning. A prerequisite for a reasonable use is that these models match reality as closely as possible. However, the acquisition and provision of large-scale 3D models with a high level of detail is still very expensive and time-consuming. Consequently, such models are typically only generated every few years. In order to compensate for the time gap between the acquisition and dissemination of publicly accessible 3D models, we work on augmenting the model data with up-to-date aerial imagery and perform a change detection to update the existing model.
In order to enable the transfer of information between aerial imagery and a 3D model, a preliminary co-registration is necessary. This task can be formulated as the estimation of camera parameters which describe the relative position and orientation between image and model as illustrated in Figure 1. Given point correspondences between image and model, this is a well-known task for which numerous efficient methods have already been developed (for instance proposed by [PERSON] et al. (2009), [PERSON] et al. (2013) or [PERSON] et al. (2003)). If there is no information available regarding corresponding points, the main challenge is to identify discriminative points which can be used to estimate the correspondences between the model and the image. If the model data is textured, these correspondences can be identified by exploiting image features (e.g. SIFT, SURF or ORB). However, texture of the model cannot always be relied on, as it might be outdated, of different modality, or simply missing. For this reason, we present a concept to automatically co-register aerial imagery and untextured 2.5D or 3D models, which is working with raw geometry information only.
Our algorithm is based on the concept for automatic registration of images to untextured geometry proposed by [PERSON] and [PERSON] (2017). Given an initial camera pose, an untextured 2.5D or 3D model and a camera image, our algorithm estimates the intrinsic and extrinsic camera parameters based on 2D-3D correspondences between pixels in the input image and object points in the model. In order to detect reliable correspondences, image features of the input image are compared with features extracted from rendered views of the 3D model. A basic component of many proven feature descriptors consists of intensity gradients, which we also use in our algorithm to describe and compare image features. Since we deal with an untextured model, the calculation of texture gradients on rendered images generated from the model is not possible. To this end, we rely only on shading gradients by applying the so-called Average Shading Gradients (ASG) proposed by [PERSON] and [PERSON] (2017). This is a rendering technique in which observable shading gradients are averaged over all possible lighting directions of the 3D scene under the assumption of a local illumination model.

Figure 1: Registration task: Estimation of camera parameters, which describe the relative pose between image and model.
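The quantity driving the pose refinement is the reprojection error of the 2D-3D correspondences. The sketch below computes it under a generic pinhole camera model; the function names, the intrinsics `K`, and the sample points are illustrative assumptions, not the authors' implementation:

```python
import math

def transform(R, t, X):
    """Apply the rigid motion X_cam = R @ X + t (R as 3x3 nested lists)."""
    return [sum(R[i][j] * X[j] for j in range(3)) + t[i] for i in range(3)]

def project(K, Xc):
    """Pinhole projection with intrinsics K = (fx, fy, cx, cy)."""
    fx, fy, cx, cy = K
    x, y, z = Xc
    return (fx * x / z + cx, fy * y / z + cy)

def mean_reprojection_error(K, R, t, pts3d, pts2d):
    """Mean pixel distance between projected model points and image
    observations; this is the quantity minimized during pose refinement."""
    errs = []
    for X, (u, v) in zip(pts3d, pts2d):
        pu, pv = project(K, transform(R, t, X))
        errs.append(math.hypot(pu - u, pv - v))
    return sum(errs) / len(errs)

K = (800.0, 800.0, 320.0, 240.0)
I3 = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
pts3d = [(0.0, 0.0, 4.0), (1.0, -0.5, 5.0)]
pts2d = [project(K, X) for X in pts3d]          # exact observations

# With the true pose the error vanishes; a perturbed pose increases it.
assert mean_reprojection_error(K, I3, [0.0, 0.0, 0.0], pts3d, pts2d) < 1e-12
assert mean_reprojection_error(K, I3, [0.1, 0.0, 0.0], pts3d, pts2d) > 1.0
```

An iterative optimizer (e.g. a nonlinear least-squares solver) would adjust `R` and `t` (and possibly `K`) to drive this mean error down, which is the role the iterative scheme plays in the pipeline described above.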
This paper is organized as follows. After presenting related work that focuses on image-to-model co-registration, we describe our algorithm in Section 3. There, we explain the generation of gradient images as well as the correspondence search and pose estimation. In Section 4, we present and discuss our dataset and the results we achieved with our approach. Finally, we provide concluding remarks and suggestions for future work in Section 5.
## 2 Related Work
Using 2D-3D correspondences to estimate the relative camera pose of a single query image with respect to a given 3D model is a widely applied approach. Similar to the concept of this paper, [PERSON] et al. (2011), [PERSON] et al. (2014), [PERSON] et al. (2009) and [PERSON] et al. (2013) determine the necessary correspondences by comparing the query image to rendered views of the model. The approach of [PERSON] et al. (2009) is based on a mapping of SIFT features extracted from the query image to SIFT features extracted from images of a database. The database used consists of already registered images as well as images of the 3D model, which were generated from different viewpoints. In a similar manner, [PERSON] et al. (2011) and [PERSON] et al. (2014) present approaches that generate images from a model to register paintings. [PERSON] et al. (2011) combine the approach of [PERSON] et al. (2009) with a matching of GIST descriptors ([PERSON] and [PERSON], 2001) to find a rough camera pose. The co-registration is then improved by matching the contours in the painting to the view-dependent contours of the model. [PERSON] et al. (2014), on the other hand, use rendered images from a model to identify features that are reliably recognizable in any 2D representation and are used for matching. The features used in the approaches mentioned are based on texture information contained in the 3D model. In contrast, this paper assumes an untextured model. In addition, [PERSON] et al. (2011) and [PERSON] et al. (2014) assume that the camera is located near the ground, while in this work we focus on the co-registration of aerial imagery.
Likewise, the task of co-registering aerial photographs to a 3D model is addressed by [PERSON] et al. (2006), [PERSON] et al. (2004) and [PERSON] et al. (2009). In ([PERSON] et al., 2004), the goal is to transfer texture information from aerial photographs to a city model. The required registration of the images is based on an assignment of projected 3D lines of the model to 2D lines, which are extracted in the aerial photograph. Using a given camera pose with an accuracy comparable to that of a Global Positioning System (GPS) and Inertial Navigation System (INS) system, an exhaustive search is performed using extrinsic and intrinsic camera parameters. [PERSON] et al. (2006) present a similar approach. Pseudo-intensity images are generated using shadows of light detection and ranging (LiDAR) data which are compared to the 2D aerial image by correlation. Similar to ([PERSON] et al., 2004), the complete camera pose is then determined by an exhaustive search with GPS information providing an initial estimate. Both methods lead to accurate registrations, but are very time-consuming. The goal to transfer information from images to a model is also pursued by [PERSON] et al. (2009). In their approach, aerial images are registered with respect to a LiDAR point cloud in order to generate a photorealistic 3D model. The registration process based on Mutual Information determines camera parameters that maximize the transformation between the distribution of image features and projected LiDAR features. Used LiDAR features arise from elevations and probability of detection, whereas image features result from illumination intensities. The camera parameters that maximize the transformation are determined using simplex methods. The use of OpenGL and graphics hardware in the optimization process leads to significantly shorter registration times compared to the methods of [PERSON] et al. (2004) and [PERSON] et al. (2006). 
The three methods co-register aerial photographs with respect to a LiDAR point cloud. In contrast, the aim of our work is to register an aerial photograph to a meshed 3D model, independent from the image sensor technology used to generate the model.
Lately, learning-based approaches in particular are attracting a great deal of attention. Solutions based on convolutional neural networks (CNN), proposed by [PERSON] and [PERSON] (2018), [PERSON] et al. (2015) and [PERSON] et al. (2018), address the task of 2D/3D co-registration. In order to determine the camera pose in a given 3D environment using an RGB image, [PERSON] and [PERSON] (2018) propose a CNN that predicts 2D-3D correspondences. Hypotheses regarding the camera parameters are determined from four correspondences each. A second CNN is used to determine the most probable camera pose. [PERSON] et al. (2015) describe the training of a CNN that directly estimates extrinsic camera parameters based on an image. The required training data are generated by a Structure-from-Motion (SfM) process and used for the fine-tuning of pre-trained models. [PERSON] et al. (2018) also estimate the pose directly via a CNN, proposing a new loss function based on Riemannian geometry. In contrast to learning-based approaches, this paper presents a method that does not require complex training procedures or a large amount of training data. Updating and adapting the method to other conditions, such as prior knowledge of the camera pose, is easily possible without having to train new models.
## 3 Methodology
The overall procedure to estimate the relative camera pose between an untextured model and a camera image (referred to as query image) is depicted in Figure 2. Given an untextured 3D model, an initial camera pose \(\mathrm{P^{initial}}\) and a depth image generated by SfM from the camera images, our algorithm utilizes gradient representations of the input model and query image to extract features which are used in an iterative optimization scheme to refine the initial camera pose. Using coarse poses \(\mathrm{P_{i}^{coarse}}\) close to the initial camera pose, we render the model with ASG, yielding multiple gradient images of the input model. The query image is also transformed into a gradient representation. Thus, at the end of the first step, we have a set of gradient images \(\mathrm{I_{i}^{VASG}}\) extracted from the 3D model, associated with the coarse camera poses \(\mathrm{P_{i}^{coarse}}\), and one gradient image \(\mathrm{I^{VI_{in}}}\) corresponding to the query image. In the second stage, we compute a dense correspondence field between each \(\mathrm{I_{i}^{VASG}}\) and \(\mathrm{I^{VI_{in}}}\) using the SIFT flow algorithm ([PERSON] et al., 2011). With these 2D-3D correspondences, we employ a Direct Linear Transformation (DLT) ([PERSON] and [PERSON], 2003) within a RANSAC loop to iteratively refine each coarse camera pose \(\mathrm{P_{i}^{coarse}}\) to obtain \(\mathrm{P_{i}^{fine}}\). Given the set of refined camera poses, a final verification step selects the pose \(\mathrm{P_{i}^{fine}}\) with the smallest reprojection error as the output pose \(\mathrm{P^{out}}\).
### Gradient Representations
The first step of the registration process is the computation of a set of gradient images which represents the model from different perspectives. For this purpose, we perturb the initial extrinsic camera parameters with Gaussian noise to generate multiple views of the model distributed around the provided camera pose \(\mathrm{P^{initial}}\). Gradient images are generated from the model using the resulting coarse camera poses \(\mathrm{P_{i}^{coarse}}\). Since the model does not contain any color or gray value information, gradients can result solely from shadings due to specific lighting of the scene. Hence, we use ASG in the same manner as in ([PERSON] and [PERSON], 2017).
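The pose perturbation step can be sketched as follows. This is a minimal illustration assuming an Euler-angle rotation parameterization; the noise scales and the number of poses shown here are placeholder values, since the paper does not specify them at this point:

```python
import numpy as np

def perturb_pose(rotation_deg, translation, n_poses=15,
                 sigma_rot=2.0, sigma_trans=0.5, seed=0):
    """Generate coarse camera poses by adding Gaussian noise to the
    initial extrinsic parameters. rotation_deg: Euler angles (degrees),
    translation: camera position in model units. sigma_rot and
    sigma_trans are illustrative assumptions, not values from the paper."""
    rng = np.random.default_rng(seed)
    poses = []
    for _ in range(n_poses):
        r = np.asarray(rotation_deg, float) + rng.normal(0.0, sigma_rot, 3)
        t = np.asarray(translation, float) + rng.normal(0.0, sigma_trans, 3)
        poses.append((r, t))
    return poses
```

Each perturbed pose is subsequently used as a rendering viewpoint for one ASG gradient image.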
A gradient image can be computed by convolving the image with derivative filters in the x and y directions:

\[||\nabla\mathrm{I}||=\sqrt{\left(\mathrm{h_{x}}*\mathrm{I}\right)^{2}+\left(\mathrm{h_{y}}*\mathrm{I}\right)^{2}}, \tag{1}\]

with \(\mathrm{I}\) being the image and \(\mathrm{h_{x}}\), \(\mathrm{h_{y}}\) indicating the derivative filters. Under the assumption of a Lambertian illumination model and the use of a point light source, the intensities of a rendered image \(\mathrm{I}\) can be described by:

\[\mathrm{I}=\max(0,-\mathrm{n}^{\top}\mathrm{l}). \tag{2}\]

In this formulation, the normal direction is expressed by the vector \(\mathrm{n}\) and the direction of light by the vector \(\mathrm{l}\). With a predefined light direction \(\mathrm{l}\), the combination of the above equations allows the calculation of a rendered image in gradient representation. However, the assumption of a fixed light direction has a considerable disadvantage: discontinuities in the normal map which cause shadings, and thus high values in the gradient image, under one assumed light direction may not lead to observable gradients under a different light direction. The approach of ASG counteracts this undesirable behavior. The observed gradient strengths are averaged over all feasible light directions \(\mathrm{l}\) on the unit sphere \(\mathcal{S}\):

\[\overline{||\nabla\mathrm{I}||}=\int_{\mathcal{S}}\Big[\left(\mathrm{h_{x}}*\max(0,-\mathrm{n}^{\top}\mathrm{l})\right)^{2}+\left(\mathrm{h_{y}}*\max(0,-\mathrm{n}^{\top}\mathrm{l})\right)^{2}\Big]^{\frac{1}{2}}\,\mathrm{d}\mathrm{l}. \tag{3}\]

The exact calculation of \(\overline{||\nabla\mathrm{I}||}\) is very computation-intensive due to the complex integrand. Therefore, [PERSON] and [PERSON] (2017) propose approximations that allow the estimation of \(\overline{||\nabla\mathrm{I}||}\) in closed form:

\[\begin{split}\overline{||\nabla\mathrm{I}||}&\approx\frac{1}{2}\int_{\mathcal{S}}\Big[\left(\mathrm{h_{x}}*\mathrm{n}^{\top}\mathrm{l}\right)^{2}+\left(\mathrm{h_{y}}*\mathrm{n}^{\top}\mathrm{l}\right)^{2}\Big]^{\frac{1}{2}}\,\mathrm{d}\mathrm{l}\\&\leq\frac{1}{2}\sqrt{\int_{\mathcal{S}}\left(\mathrm{h_{x}}*(\mathrm{n}^{\top}\mathrm{l})\right)^{2}+\left(\mathrm{h_{y}}*(\mathrm{n}^{\top}\mathrm{l})\right)^{2}\,\mathrm{d}\mathrm{l}}\\&=\frac{1}{2}\sqrt{\int_{\mathcal{S}}\left((\mathrm{h_{x}}*\mathrm{n})^{\top}\mathrm{l}\right)^{2}\,\mathrm{d}\mathrm{l}+\int_{\mathcal{S}}\left((\mathrm{h_{y}}*\mathrm{n})^{\top}\mathrm{l}\right)^{2}\,\mathrm{d}\mathrm{l}}\\&=\sqrt{\frac{\pi}{3}}\sqrt{\sum_{i=1}^{3}(\mathrm{h_{x}}*\mathrm{n_{i}})^{2}+(\mathrm{h_{y}}*\mathrm{n_{i}})^{2}}.\end{split} \tag{4}\]
Figure 2: Overview of the processing pipeline of the co-registration algorithm: Gradient images rendered from the model are matched to a gradient image of the photograph using SIFT flow. The correspondences determined are used to estimate the relative pose based on a DLT. The final pose is selected in the verification step.
The designation \(\mathrm{n_{i}}\) indicates the x-, y- and z-components of the normal map. For the proof of the equality of the expressions in Equation (4), we refer to ([PERSON] and [PERSON], 2017). This form enables an efficient calculation of the desired gradient image, based exclusively on the convolution of the normal map with derivative filters. Figure 3 shows result images derived by means of ASG.
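The closed-form approximation in Equation (4) amounts to convolving each component of the normal map with derivative filters and combining the results. A minimal sketch (the choice of a central-difference kernel and the use of SciPy for the convolution are assumptions; the paper does not prescribe a specific derivative filter):

```python
import numpy as np
from scipy.ndimage import convolve

def average_shading_gradients(normal_map):
    """Closed-form ASG approximation of Equation (4).
    normal_map: (H, W, 3) array holding unit surface normals.
    Returns an (H, W) gradient image."""
    hx = np.array([[-0.5, 0.0, 0.5]])  # assumed central-difference filter
    hy = hx.T                          # derivative filter in y direction
    acc = np.zeros(normal_map.shape[:2])
    for i in range(3):  # sum over the x-, y- and z-components n_i
        n_i = normal_map[..., i]
        acc += convolve(n_i, hx) ** 2 + convolve(n_i, hy) ** 2
    return np.sqrt(np.pi / 3.0) * np.sqrt(acc)
```

A flat normal map produces no gradients, while discontinuities in the normals yield strong responses, which is exactly the behavior exploited for matching.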
We use the presented approximation of ASG to render one gradient image for each generated coarse pose \(\mathrm{P}_{i}^{\mathrm{coarse}}\). Features of these images are to be matched with features of the query image that is to be registered. In order to obtain comparable features, the input image also needs to be converted into a representation that contains gradients induced by shadings. Applying discrete derivative operators to the single camera image, as proposed by [PERSON] and [PERSON] (2017), results in gradients which are related to the ASG computed on the model only to a certain degree. To obtain even more comparable gradients, we include additional camera images captured with a small spatial offset to the query image. The added images make it possible to compute depth information and normal vectors for the query image by means of SfM. Consequently, we can utilize ASG to transform the input image into a representation \(\mathrm{I}^{\mathrm{VI}_{\mathrm{in}}}\) equivalent to the gradient representations \(\mathrm{I}_{i}^{\mathrm{VASG}}\) of the renderings.
### Feature Matching
In order to match the rendered images to the query image, we use the SIFT flow algorithm ([PERSON] et al., 2011), computing dense flow fields between each \(\mathrm{I}_{i}^{\mathrm{VASG}}\) and \(\mathrm{I}^{\mathrm{VI}_{\mathrm{in}}}\). This algorithm works similarly to optical flow methods, determining a pixel-wise shift between two images. Instead of computing correspondences from individual pixel intensities, however, the matching in the SIFT flow algorithm is based on the comparison of SIFT descriptors calculated for each pixel. In order to prevent the assignment of pixels from regions of homogeneous structure in the input image to empty regions in the rendering, only pixels located in textured regions are included in the calculation of the flow.
To compute correspondences, the flow vectors are first determined from the query gradient image \(\mathrm{I}^{\mathrm{VI}_{\mathrm{in}}}\) to each rendered gradient image \(\mathrm{I}_{i}^{\mathrm{VASG}}\), and then from each rendered image \(\mathrm{I}_{i}^{\mathrm{VASG}}\) back to the query gradient image \(\mathrm{I}^{\mathrm{VI}_{\mathrm{in}}}\). Pixel pairs connected by two opposite flow vectors are recorded as a corresponding pair. The determined point correspondences are converted into 2D-3D correspondences between the input image and the model. The 3D model points are reconstructed from the appropriate points of the rendered images by considering the associated coarse camera poses \(\mathrm{P}_{i}^{\mathrm{coarse}}\).
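The forward-backward consistency check described above can be sketched as follows. This is a simplified illustration that assumes the flow fields are given as displacement arrays; the consistency tolerance is an assumption, not a value from the paper:

```python
import numpy as np

def mutual_matches(flow_fwd, flow_bwd, tol=0.5):
    """Keep pixel pairs connected by two opposite flow vectors.
    flow_fwd maps pixels of image A to image B, flow_bwd maps B back to A;
    both are (H, W, 2) arrays of (dy, dx) displacements."""
    h, w = flow_fwd.shape[:2]
    matches = []
    for y in range(h):
        for x in range(w):
            dy, dx = flow_fwd[y, x]
            yb, xb = int(round(y + dy)), int(round(x + dx))
            if not (0 <= yb < h and 0 <= xb < w):
                continue
            # The backward flow must map (yb, xb) back onto (y, x).
            back = np.array([yb, xb], float) + flow_bwd[yb, xb]
            if np.linalg.norm(back - np.array([y, x], float)) <= tol:
                matches.append(((y, x), (yb, xb)))
    return matches
```

Each surviving pair is then lifted to a 2D-3D correspondence by back-projecting the rendered-image pixel through the associated coarse pose.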
### Pose Estimation
Given the determined 2D-3D correspondences, we improve successively each coarse pose \(\mathrm{P}_{i}^{\mathrm{coarse}}\). Assuming that enough correct correspondences have been detected, a RANSAC scheme can be used to reliably determine the relative camera pose between the query image and the model. Within the inner RANSAC loop, six 2D-3D point pairs are randomly selected from the available correspondences. From these, intrinsic and extrinsic camera parameters are determined by applying the DLT algorithm ([PERSON] and [PERSON], 2003). Subsequently, we check how many of the given correspondences support the determined camera pose and form a consensus set by computing the reprojection error for each correspondence. The camera pose corresponding to the largest consensus set represents the result and thus the improved camera pose \(\mathrm{P}_{i}^{\mathrm{fine}}\). Empirically, we have found that a good termination criterion is the computation of a consensus set that contains at least 65 % of all correspondences. If this criterion is not met, the calculation is aborted after a maximum number of 500 iterations.
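The RANSAC scheme with the DLT as minimal solver can be sketched as follows. The 65 % stopping criterion and the limit of 500 iterations follow the text, while the inlier threshold of 4 pixels is an illustrative assumption:

```python
import numpy as np

def dlt(points3d, points2d):
    """Estimate a 3x4 projection matrix from >= 6 2D-3D pairs (DLT)."""
    A = []
    for (X, Y, Z), (u, v) in zip(points3d, points2d):
        A.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z, -u])
        A.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z, -v])
    _, _, vt = np.linalg.svd(np.asarray(A, float))
    return vt[-1].reshape(3, 4)  # null-space vector, up to scale

def project(P, points3d):
    Xh = np.hstack([points3d, np.ones((len(points3d), 1))])
    xh = (P @ Xh.T).T
    return xh[:, :2] / xh[:, 2:3]

def ransac_dlt(points3d, points2d, thresh=4.0, max_iter=500,
               stop_ratio=0.65, seed=0):
    rng = np.random.default_rng(seed)
    pts3 = np.asarray(points3d, float)
    pts2 = np.asarray(points2d, float)
    best_P, best_inliers = None, 0
    for _ in range(max_iter):
        idx = rng.choice(len(pts3), 6, replace=False)  # minimal sample
        P = dlt(pts3[idx], pts2[idx])
        err = np.linalg.norm(project(P, pts3) - pts2, axis=1)
        inliers = int((err < thresh).sum())  # consensus set size
        if inliers > best_inliers:
            best_P, best_inliers = P, inliers
        if inliers >= stop_ratio * len(pts3):  # 65 % termination criterion
            break
    return best_P, best_inliers
```

With known intrinsics, the DLT call would be replaced by an EPnP solver, as discussed in the experiments.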
### Plausibility Check
The final step of our automatic co-registration process selects the most appropriate camera pose from all the refined poses \(\mathrm{P}_{i}^{\mathrm{fine}}\). Since a correct registration is not possible in every case, we also use this step to decide on the success of the registration. For this purpose, the mutual reprojection error is calculated in pairs between all refined pose estimates. This is defined by the following equation:
\[\delta(\mathcal{P},\mathcal{P}^{\prime})=\frac{1}{2}\left(\frac{1}{|\mathcal{V}|}\sum_{\mathbf{x}\in\mathcal{V}}||\mathcal{P}(\mathbf{x})-\mathcal{P}^{\prime}(\mathbf{x})||_{2}+\frac{1}{|\mathcal{V}^{\prime}|}\sum_{\mathbf{x}\in\mathcal{V}^{\prime}}||\mathcal{P}(\mathbf{x})-\mathcal{P}^{\prime}(\mathbf{x})||_{2}\right). \tag{5}\]
\(\mathcal{P}\) and \(\mathcal{P}^{\prime}\) denote poses which project the model coordinates into an image plane; \(\mathcal{V}\) and \(\mathcal{V}^{\prime}\) denote the corresponding sets of points visible in the image area. The error measure thus describes the mean Euclidean distance between visible pixels. If the error for two poses falls below a threshold value, which is fixed to 5 % of the longest image dimension, the two poses are considered compatible. The compatibility relationships between all poses define an undirected graph: the nodes each represent a camera pose, and the edges represent the compatibility between camera poses. By means of a depth-first search, the largest connected component of the graph is determined. If this component contains more than three nodes, the registration process is considered successful. As the final pose \(\mathrm{P}^{\mathrm{out}}\), we select the pose from the largest connected component for which the largest consensus set was reached in the previous RANSAC scheme.
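The selection of the largest compatible pose set can be sketched as follows. This simplified illustration takes precomputed pairwise errors from Equation (5) as input and performs the graph construction and depth-first search:

```python
def largest_compatible_set(pair_errors, n_poses, max_err):
    """Build a compatibility graph from pairwise mutual reprojection
    errors and return the largest connected component via DFS.
    pair_errors: dict mapping pose index pairs (i, j) to the error delta.
    max_err: compatibility threshold (5 % of the longest image dimension)."""
    adj = {i: [] for i in range(n_poses)}
    for (i, j), err in pair_errors.items():
        if err < max_err:  # poses i and j are compatible
            adj[i].append(j)
            adj[j].append(i)
    best, seen = [], set()
    for start in range(n_poses):
        if start in seen:
            continue
        stack, comp = [start], []
        while stack:  # iterative depth-first search
            node = stack.pop()
            if node in seen:
                continue
            seen.add(node)
            comp.append(node)
            stack.extend(adj[node])
        if len(comp) > len(best):
            best = comp
    return sorted(best)
```

Registration is accepted if the returned component has more than three members; the output pose is then the member with the largest RANSAC consensus set.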
## 4 Experiments
The quantitative evaluation of the registration procedure is based on a dataset comprising 164 aerial photographs captured with a DJI Phantom 3 Professional. These depict a free-standing building from three sides at three different heights (2 m, 8 m and 15 m). Background objects, such as trees or people, are also depicted and may differ between the images. Figure 4(a) shows a selection of the aerial photographs used. Based on all given images, a 3D point cloud is created using the SfM pipeline COLMAP
Figure 3: Gradient images created using Average Shading Gradients. (a) Normal maps of the model, (b) Resulting gradient images.
([PERSON] and [PERSON], 2016; [PERSON] et al., 2016) (Figure 4(b)). From the relevant points, i.e. those representing the building, a surface model was generated by triangular meshing. The Poisson Surface Reconstruction method ([PERSON] et al., 2006) provided in COLMAP was used for this purpose. A view of the mesh model is shown in Figure 4(c). The resulting model shows some errors due to the limited accuracy of the point cloud.
Aerial photographs are usually taken using unmanned aerial vehicle (UAV) systems equipped with GPS and Inertial Measurement Unit (IMU) sensors. This allows to derive a camera pose for each image, which can then be used to initialize the registration process. Since the dataset used does not contain any sensor data, we simulate the initial camera poses \(\mathrm{P}^{\mathrm{in}}\) by applying additive white Gaussian noise to the translation and rotation parameters of the ground truth poses \(\mathrm{P}^{\mathrm{GT}}\). The ground truth poses are derived from the SfM process on which the 3D reconstruction is based.
For our experiments, we selected 50 test images of the dataset associated with initial camera poses and scaled them to a size of \(505\times 275\) pixels. When selecting the images, we took care to cover as wide a range of perspectives as possible. For each registration process in our tests, we generated and refined 15 coarse poses and automatically chose the best refined pose.
### Accuracy and Success Rate
We evaluated the presented co-registration algorithm (hereinafter referred to as ASG approach) with respect to the accuracy of the estimated camera poses and the success rate of the registration. Using the ASG approach, all eleven degrees of freedom of the desired camera pose can be determined. However, in many applications, intrinsic camera parameters are given by a preliminary sensor calibration. Under these conditions, an Efficient Perspective-n-Point (EPnP) method according to ([PERSON] et al., 2009) can be used instead of the DLT to estimate the camera pose from 2D-3D correspondences. The performance of the presented co-registration algorithm was evaluated for both initial conditions (intrinsic parameters known and unknown) using 50 selected images. During our tests, we successively increased the mean initial error, which represents the average displacement of the initial poses \(\mathrm{P}^{\mathrm{in}}\) in relation to the true poses \(\mathrm{P}^{\mathrm{GT}}\). We quantify initial errors by the mutual reprojection error as defined in Equation (5).
First we have examined how many images are rated as successfully co-registered by the automatic registration process. Figure 5 presents the corresponding results for the assumption of known and unknown intrinsic camera parameters. The results show that, under the condition of unknown intrinsic parameters, more images are accepted as successfully registered. This is due to the fact that the camera pose can be adapted more flexibly to the found correspondences due to the higher degree of freedom. However, as shown in the following section, this is at the expense of the accuracy of the estimate. It can also be seen that, as the initial error increases, the registration task becomes more demanding and some of the images are not registered. With low initial errors of about 30 pixels, 90 % and 82 % of the images are accepted within the automatic verification, whereas a medium error of about 100 pixels only results in 66 % and 54 %, respectively.
A second aspect that was evaluated is the improvement of the initial errored poses by the proposed co-registration. Again, we use the mutual reprojection error (Equation (5)), to evaluate the pose estimation. To summarize the distribution of the errors, the mean value, the standard deviation and the range of the reprojection errors are stated. The resulting error statistics for known and unknown intrinsic camera parameters are shown in Table 1 for comparison.
The following observations are made: The maximum reprojection error with respect to all registered images is 16.85 pixels. This corresponds to about 3.3 % of the largest image dimension. Even with a small initial error of about 30 pixels, this is equal to a reduction of the error by almost 50 %. From this, it can be concluded that the automatic verification only accepts estimates that are more accurate than the given incorrect pose \(\mathrm{P}^{\mathrm{in}}\). Furthermore, it is shown that the average reprojection error can be reduced to as little as 3.1 pixels when estimating all camera parameters. If the intrinsic parameters are already given, an average value of 1.3 pixels can even be achieved. Even with high initial errors of more than 100 pixels, good pose estimates with an average reprojection error of 3.7 or 2.5 pixels are achieved by our co-registration process. This corresponds to an improvement to 2.63 % or 1.07 %
\begin{table}
\begin{tabular}{l l l l} \hline \hline
**Initial error** & **Intrinsics** & **Accuracy** & **Error range** \\ \hline \hline \(30.19\) & unknown & \(3.10\pm 2.01\) & \(1.22-11.93\) \\ & known & \(1.27\pm 0.54\) & \(0.57-2.20\) \\ \hline \(56.90\) & unknown & \(4.10\pm 3.31\) & \(1.32-15.75\) \\ & known & \(1.54\pm 1.37\) & \(0.28-8.40\) \\ \hline \(79.53\) & unknown & \(4.39\pm 3.26\) & \(1.49-16.85\) \\ & known & \(1.81\pm 2.13\) & \(0.41-12.86\) \\ \hline \(103.83\) & unknown & \(3.70\pm 2.50\) & \(1.87-12.50\) \\ & known & \(2.51\pm 1.71\) & \(0.85-6.50\) \\ \hline \hline \end{tabular}
\end{table}
Table 1: This table holds the accuracy of our co-registration approach for various mean initial errors under known and unknown intrinsic camera parameters. The accuracy is given by the mean value and the standard deviation of the reprojection errors of registered images.
Figure 4: (a) Example images of our dataset. The point cloud generated from the images can be seen in (b).(c) Resulting mesh model.
Figure 5: The rate of success for known and unknown intrinsic camera parameters.
of the mean initial error. Noticeable are the large differences in the reprojection errors between the various test images, which can be seen in the large error ranges. The high standard deviations, which are of the same order of magnitude as the mean values themselves, also indicate a strong scatter of the errors. Certain viewing perspectives of the scene are therefore more difficult to estimate than others. The examination of individual results shows that if different sides of the building are depicted in a query image, a more precise registration can usually be achieved than for images in which mainly one side of the building is visible. This is due to the influence of the spatial constellation of the correspondence points on the accuracy of pose estimation methods: the equation system of the DLT becomes singular or numerically unstable for a planar point set. The EPnP method explicitly distinguishes the planar point configuration from the non-planar case, resulting in increasing inaccuracy in the intermediate near-planar case.
Figure 7 shows sample images for which our registration algorithm fails. It can be seen that these are images where all visible points of the building have a similar distance to the camera.
In summary:
* With the automatic verification, on the test data our registration approach only provides camera poses that are more accurate than the initial camera poses.
* The average reprojection error of the camera poses is improved by the registration process up to 1.07 % of the average initial error.
* The achieved accuracies increase only slightly for high initial errors.
* With known intrinsic camera parameters, a more precise estimation of camera poses is possible.
### Comparison to an ICP Implementation
To put our results into perspective, we compared the accuracy and success rate of our approach to a standard Iterative Closest Point (ICP) implementation, using the implementation provided by the Point Cloud Library (PCL) ([PERSON] and [PERSON], 2011). In order to use the ICP algorithm to estimate the relative pose between an image and a model, the input image needs to be represented as a point cloud. This is possible if depth information for the image and the intrinsic camera parameters are known. These requirements were also met for the ASG approach, which then only estimates extrinsic camera parameters. In contrast to our approach, the ICP algorithm computes an alignment of two point clouds. The relative camera pose between image and model can then be derived from the transformation required for the alignment.
For the evaluation, we used 10 images which are registered once by the ICP algorithm and once by our ASG approach. Regarding the ICP method, the registration of an image is evaluated as successful if the reprojection error under the estimated camera pose falls below a threshold value of 20 pixels. Whether a successful registration was achieved by applying the ASG approach is still decided automatically. Table 2 shows the number of successfully registered test images for the different approaches. For a small average initial error (approx. 50 pixels), the ASG approach is superior to the ICP process. If the average initial error is almost 80 pixels, the success rate is identical with 6 out of 10 images. If the initial errors are further increased, the number of registered images remains constant for the ICP method, while it drops to 3 for the ASG approach.
To compare the accuracy of the two methods, we determined the reprojection errors for all test images that can be successfully registered by both methods. Table 3 shows the corresponding error statistics (mean and standard deviation of reprojection errors) for increasing initial errors. It shows that the average error resulting from the pose estimates of the ICP method is about five times higher than that of the ASG method. Therefore, the ASG method allows to estimate camera poses closer to the true pose than the pose estimates provided by the ICP method. The poorer performance of the ICP method can partly be explained by noisy depth values of the aerial photographs. These have a negative effect on the accuracy of the point cloud, which results from the backprojection of the depth image. As mentioned before, the used point cloud of the model is also erroneous. The difficulty to adjust two noisy point clouds to each other is reflected in the high reprojection errors. ASG registration is more robust to errors in the input data. This can be explained by the fact that the geometric error minimized within the RANSAC scheme is based only on selected 2D-3D correspondences. These correspondences were previously determined using stable characteristics. In contrast, in the ICP procedure, all model points are included in the calculation of the error to be minimized. The observations of the comparison of registration procedures can be summarized as follows:
* The accuracy of the estimated camera parameters using the ASG approach exceeds that of the ICP implementation. The reprojection error is on average five times smaller.
* At given initial poses with a mean reprojection error of less than 60 pixels, more images can be registered using the ASG approach than using the ICP implementation.
* For rough pose estimates with a mean reprojection error of over 80 pixels, more images can be registered by means of the ICP implementation than by means of the ASG approach.
\begin{table}
\begin{tabular}{l l l} \hline \hline
**Initial error** & **ASG** & **ICP** \\ \hline \hline \(30.25\) & \(9\) & \(6\) \\ \(55.12\) & \(8\) & \(6\) \\ \(80.22\) & \(6\) & \(6\) \\ \(101.15\) & \(3\) & \(6\) \\ \hline \hline \end{tabular}
\end{table}
Table 2: This table shows the number of successfully registered images using ASG and ICP approaches. A total of 10 images were used for the test.
\begin{table}
\begin{tabular}{l l l} \hline \hline
**Initial error** & **Test images** & **ASG** & **ICP** \\ \hline \hline \(30.25\) & \(6\) & \(1.12\pm 1.22\) & \(11.16\pm 4.45\) \\ \(55.12\) & \(6\) & \(2.13\pm 2.82\) & \(9.70\pm 5.60\) \\ \(80.22\) & \(4\) & \(1.89\pm 1.25\) & \(10.41\pm 4.50\) \\ \(90.51\) & \(3\) & \(2.80\pm 1.48\) & \(10.60\pm 5.84\) \\ \hline \hline \end{tabular}
\end{table}
Table 3: This table indicates the accuracies achieved using ICP and ASG approaches. The mean value and the standard deviation of the achieved reprojection errors are given. For the error determination, only test images were used which were registered successfully by both procedures.
Figure 6: Qualitative results showing the input image together with a projection of the input model given the corresponding camera pose. The input model of the building structure is projected in transparent colors, which encode the normal vectors of the model surfaces. In the first row, the input model is projected given the initial camera pose. These images clearly show the error in the camera pose with which our algorithm is initialized. The images in the second and third rows depict the results of our co-registration algorithm for known and unknown camera intrinsics. The fourth row shows the results of the ICP algorithm. The results indicate that, in most cases, our algorithm achieves a good alignment between the input model and the input image.
Figure 7: Images that cannot be registered by our algorithm.
### Qualitative Results
Figure 6 shows a selection of the results of the evaluated co-registration algorithms. It can be seen that our algorithm (second and third row) can handle large initial errors in the translation as well as minor errors in rotation. It can be observed that our proposed algorithm, based on the assumption of known intrinsic camera parameters (ASG with EPnP), gives the most accurate results. The estimation of the full camera pose (ASG with DLT) commonly yields similar results, but in some cases larger errors can be detected. For example, in the projection of the center image in Figure 6, the roof of the building disappears. Furthermore, it can be seen that registration by means of the ICP algorithm also improves the given poses \(\mathrm{P}^{\mathrm{in}}\). However, the deviations from the true poses are greater than the deviations obtained using our algorithm.
## 5 Conclusion & Future Work
In conclusion, we proposed an algorithm to automatically estimate the relative camera pose between aerial imagery and untextured 2.5D or 3D model data. To refine an initial guess of the camera pose, we compute feature-based dense correspondence fields between an aerial photograph and rendered images generated from different perspectives on the model. Since textural features are not present in the model, the compared features are derived from gradients that are based solely on the object geometry. To obtain such gradients related to the photograph as well as gradients related to the model, we use ASG, a method in which observable gradients from shadings are averaged over all possible light directions. Our evaluation shows that initial error-prone camera poses are significantly improved by our registration algorithm. Especially under the condition of calibrated camera sensors, good results are achieved. The error can thus be reduced to up to 1.07 % of its initial value. With regard to the accuracy of estimated camera poses, our presented approach exceeds the results provided by the ICP algorithm. In addition, the automatic verification of the pose estimation provides a reliable statement about the success of the registration.
In our future work we want to integrate and evaluate further suitable methods for the estimation of poses from correspondences into our method instead of DLT or EPnP. In particular, approaches that include existing depth information of the input image are of interest. In addition, we want to evaluate the difference in performance between methods that use line features, such as those extracted with the help of ASG, and methods that rely on point features.
## References
* [PERSON] et al. (2014) [PERSON], [PERSON] and [PERSON], 2014. Painting-to-3D model alignment via discriminative visual elements. _ACM Transactions on Graphics_ 33(2), pp. 14:1-14:14.
* [PERSON] and [PERSON] (2018) [PERSON] and [PERSON] [PERSON], 2018. Learning less is more -6D camera localization via 3D surface regression. In _Proc. IEEE Conference on Computer Vision and Pattern Recognition_, pp. 4654-4662.
* [PERSON] et al. (2004) [PERSON], [PERSON] and [PERSON], 2004. Automated texture mapping of 3D city models with oblique aerial imagery. In _Proc. International Symposium on 3D Data Processing, Visualization and Transmission_, pp. 396-403.
* [PERSON] et al. (2003) [PERSON], [PERSON], [PERSON] and [PERSON], 2003. Complete solution classification for the perspective-three-point problem. _IEEE Transactions on Pattern Analysis and Machine Intelligence_ 25(8), pp. 930-943.
* [PERSON] and [PERSON] (2003) [PERSON] and [PERSON], 2003. _Multiple View Geometry In Computer Vision_. Cambridge University Press.
* [PERSON] et al. (2018) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON] and [PERSON], 2018. Deep pose estimation for image-based registration. _Medical Imaging with Deep Learning_.
* [PERSON] et al. (2009) [PERSON], [PERSON], [PERSON] and [PERSON], 2009. From structure-from-motion point clouds to fast location recognition. In _Proc. IEEE Conference on Computer Vision and Pattern Recognition_, pp. 2599-2606.
* [PERSON] et al. (2006) [PERSON], [PERSON] and [PERSON], 2006. Poisson surface reconstruction. In: _Proc. Eurographics Symposium on Geometry Processing_, pp. 1-10.
* [PERSON] et al. (2015) [PERSON], [PERSON] and [PERSON], 2015. PoseNet: A convolutional network for real-time 6-DOF camera relocalization. In: _Proc. IEEE Conference on Computer Vision and Pattern Recognition_, pp. 2938-2946.
* [PERSON] et al. (2009) [PERSON], [PERSON] and [PERSON], 2009. EPnP: An accurate O(n) solution to the PnP problem. _International Journal of Computer Vision_ 81(2), pp. 155-166.
* [PERSON] et al. (2011) [PERSON], [PERSON] and [PERSON], 2011. SIFT-Flow: Dense correspondence across scenes and its applications. _IEEE Transactions on Pattern Analysis and Machine Intelligence_ 33(5), pp. 978-994.
* [PERSON] et al. (2009) [PERSON], [PERSON] and [PERSON], 2009. Automatic registration of LiDAR and optical images of urban scenes. In: _Proc. IEEE Conference on Computer Vision and Pattern Recognition_, pp. 2639-2646.
* [PERSON] and [PERSON] (2001) [PERSON] and [PERSON], 2001. Modeling the shape of the scene: A holistic representation of the spatial envelope. _International Journal of Computer Vision_ 42(3), pp. 145-175.
* [PERSON] et al. (2013) [PERSON] [PERSON], [PERSON] [PERSON] and [PERSON] [PERSON], 2013. Exhaustive linearization for robust camera pose and focal length estimation. _IEEE Transactions on Pattern Analysis and Machine Intelligence_ 35(10), pp. 2387-2400.
* [PERSON] and [PERSON] (2017) [PERSON] and [PERSON], 2017. Automatic registration of images to untextured geometry using average shading gradients. _International Journal of Computer Vision_ 125(1-3), pp. 65-81.
* [PERSON] et al. (2011) [PERSON], [PERSON], [PERSON] and [PERSON], 2011. Automatic alignment of paintings and photographs depicting a 3D scene. In: _Proc. IEEE International Conference on Computer Vision Workshops_, pp. 545-552.
* [PERSON] and [PERSON] (2011) [PERSON] and [PERSON], 2011. 3D is here: Point Cloud Library (PCL). In: _Proc. IEEE International Conference on Robotics and Automation_, pp. 1-4.
* [PERSON] and [PERSON] (2016) [PERSON] and [PERSON], 2016. Structure-from-motion revisited. In: _Proc. IEEE Conference on Computer Vision and Pattern Recognition_, pp. 4104-4113.
* [PERSON] et al. (2016) [PERSON], [PERSON], [PERSON] and [PERSON], 2016. Pixelwise view selection for unstructured multi-view stereo. In: _Proc. European Conference on Computer Vision_, pp. 501-518.
* [PERSON] et al. (2013) [PERSON], [PERSON], [PERSON] and [PERSON], 2013. SIFT-realistic rendering. In: _Proc. IEEE International Conference on 3D Vision_, pp. 56-63.
* [PERSON] et al. (2006) [PERSON], [PERSON], [PERSON] and [PERSON], 2006. Automatic alignment of color imagery onto 3D laser radar data. In: _Proc. IEEE Conference on Applied Imagery and Pattern Recognition Workshops_, pp. 6-12.
* [PERSON] et al. (2016)
ISPRS | AUTOMATIC CO-REGISTRATION OF AERIAL IMAGERY AND UNTEXTURED MODEL DATA UTILIZING AVERAGE SHADING GRADIENTS | S. Schmitz, M. Weinmann, B. Ruf | 2019 | CC-BY | https://doi.org/10.5194/isprs-archives-xlii-2-w13-581-2019
# Urban Heat Island Micro-Mapping via 3D City Model
[PERSON]\({}^{1,*}\)
[PERSON]\({}^{1}\)
[PERSON]\({}^{1}\)
[PERSON]\({}^{1}\)
[PERSON]\({}^{1}\)
\({}^{1}\) Dept. of Geoinformation, Faculty of Built Environment and Survey, Universiti Teknologi Malaysia, Malaysia - (mdzmir, suhaibab, mzhir, alias, tlchoon)@utm.my
###### Abstract
The Urban Heat Island (UHI) phenomenon has been a topic of intense study over the past several years. However, visualising a UHI model is still an issue. Common visualisation of UHI using digital thematic maps makes it hard to perceive its impacts, especially in sophisticated micro-areas such as urbanized cities. Moreover, different building facade materials give different UHI values. Therefore, there is a need to compute and visualise this phenomenon in three-dimensional (3D) perspectives. Recently, the development of 3D city modelling shows the potential of closing these gaps: the characteristics of 3D city models make them suitable for representing micro-areas (complex cities) in UHI studies. Based on this issue, this research aims to produce a 3D UHI model using 3D city models as a tool for efficient and sustainable building design. The main objective is to produce a new approach to visualising UHI in 3D perspectives by utilising 3D city models, so that the UHI effect can be predicted precisely by calculating the building facade values. This research explores 3D shadow analysis, 3D solar radiation analysis and 3D orientation analysis for UHI modelling via 3D city models. The results show that 3D city models are capable of presenting the solar radiation value for each building facade. Furthermore, this approach can be used to simulate future UHI analysis and prediction, and is advantageous for pre-development planning.
Urban Heat Island, 3D City Models, 3D Solar Analysis +
Footnote †: This contribution has been peer-reviewed.
## 1 Introduction
Urban Heat Island (UHI) is an increase of temperature in an urban area that generates temperature differences between urban and rural areas. Due to the replacement of vegetation with concrete, roads, buildings and other man-made structures, temperatures in urban areas rise slightly higher than in suburbs or rural areas with more trees or shrubs, because these man-made structures absorb the sun's heat, causing surface temperatures and overall ambient temperatures to rise ([PERSON] et al., 2015; [PERSON] et al., 2018; [PERSON] et al., 2015). Vegetation provides shading and evaporation from soil and leaves, which creates a natural cooling effect in the surrounding area. However, tall buildings and narrow streets trap heated air between them and reduce the air flow. Furthermore, waste heat from vehicles, factories and air conditioners adds heat to the surroundings, further worsening the urban heat island effect ([PERSON] et al., 2016; [PERSON] et al., 2017).
### Research Motivation
The Urban Heat Island (UHI) phenomenon has been the topic of intense study over the past several years. Several main causes of UHI formation have been determined, and several mitigation strategies have been planned to reduce it. For example, Lawrence Berkeley National Laboratory (LBNL) published one of the first guidebooks for urban heat island mitigation, titled "Reducing Urban Heat Island: Compendium Strategies - Cool Roofs" ([PERSON], 2009). This book describes the causes and effects of UHI and promotes strategies for lowering temperatures in the United States, emphasizing cool roofs as one way to reduce UHI in urban areas. This shows that UHI is a serious problem that needs to be highlighted.
According to [PERSON] (2012), the heat island impact can be reduced by producing a proper urban design ([PERSON], 2012). He also states that the use of sufficient and properly spaced parks, together with building materials of high reflectivity, low heat capacity and low heat conductivity, can weaken the heat island. On the other hand, dense concentrations of materials like asphalt and buildings absorb more heat during the day and release it more slowly at night compared with natural ground cover such as soil and vegetation ([PERSON] et al., 2016). From this, it can be seen that building design is one of the main factors contributing to the UHI phenomenon.
### Background Problem
One of the factors contributing to the rise of temperature in city areas is urban architectural design. Buildings in the city that are located close to each other trap radiation within the city ([PERSON], 2012). Because of building heights and cluster patterns, solar radiation undergoes multiple reflections between buildings and urban surfaces before being reflected back to space. In addition, the size and height of buildings reduce the sky view of the surface and hence limit the emission of thermal radiation to space. Wind flow and turbulent transport are also disturbed by the overlapping buildings, which are themselves impervious surfaces. Moreover, buildings should not be painted in dark colours because of heat absorbance, especially given their long exposure to sunlight during the day ([PERSON], 2012). The architectural design of a building is therefore important and needs to be considered, as it can have an impact on urban temperature.
Urban surfaces such as pavement and buildings replace natural surfaces like vegetation, crops and soil. The thermal properties of these two surface types are significantly different: urban fabric has much higher thermal capacity, thermal conductivity and albedo. Most buildings in the city are made of concrete, glass, brick and asphalt. These materials reflect some energy back to space but are also good at storing heat. As a consequence, urban surfaces absorb a huge amount of heat during the daytime and slowly re-emit the stored heat during the late afternoon and night ([PERSON], 2012). This shows that urban fabrics can influence the temperature rise in the urban area.
On the other hand, [PERSON] (2007) stated that most of the UHI maps available today are of the 2D variety ([PERSON] and [PERSON], 2007). In this context, conventional 2D urban heat island mapping is not practical for mapping buildings, which have height and complex shapes. A 2D map only represents coordinates (x, y), but planning urban building design requires the height (z) value as well. Besides, 2D maps are too general and may be inaccurate, as they do not show the detailed design of the building and other features.
Recently, developments in 3D city modelling give new insight for future development planning. 3D city models can be defined as digital models of urban areas that represent terrain surfaces, sites, buildings, vegetation, infrastructure, landscape elements and other features belonging to urban areas. They are a reliable tool for studying the urban heat island phenomenon, as they have several advantages over conventional 2D maps. 3D modelling helps to visualize and understand elevation differences and topographic features ([PERSON] and [PERSON], 2007). In the building design context, 3D models can be viewed in terms of building height, structure configuration, materials and colour, unlike a 2D map. Therefore, 3D city modelling can be a powerful tool for studying the UHI phenomenon in relation to architectural building design and other urban heat island influence factors.
## 2 UHI and Urban Design
Figure 1 shows the connection between UHI and urban building design. It is proven that urban forms affect the urban microclimate ([PERSON], 1998), and these changes in the urban environment will in turn influence building energy consumption ([PERSON], 2013). As [PERSON] (1982) argued, urban climatology can become a more predictive science whose findings can be of direct value in urban planning and design ([PERSON], 1982).

[PERSON] (1999) proposed that, by examining the relationship between urban forms and climate, one could incorporate the results of urban climatology into urban design guidelines. Therefore, for mitigating the UHI effect through urban design, the primary research task on urban forms is to systematically collect and organise all urban form factors that may impact building energy consumption.
### 3D City Model and UHI
There is no doubt that UHI is a growing problem in built-up environments, due to energy retention by the surface materials of dense buildings, leading to increased temperatures, air pollution, and energy consumption. To understand UHI, 3D information of the urban surroundings is essential in order to analyse complex sites, including dense building clusters. 3D building geometry information can be combined with 2D urban surface information to examine the relationship between urban characteristics and temperature.
Building materials such as concrete and asphalt absorb thermal energy during the daytime and release it during the night-time, causing temperatures in dense building areas to be higher than in surrounding suburban and rural areas ([PERSON] et al., 2016). Heat energy stored by complex urban structures is one of the core reasons that high surface temperatures are generated. The UHI adversely impacts urban populations by inducing heat stress and health problems, and worsens air quality through the formation of tropospheric ozone.
To improve our understanding of how urban characteristics influence surface temperatures, we can analyse how much heat is absorbed by a building using a 3D model. To model this relationship, information such as the materials of the building is used for solar absorption analysis. Besides, geographic information, which determines the location of the building, can be used for orientation analysis.
## 3 Methodology
Three analyses were conducted; shadow analysis, solar absorption analysis and orientation analysis. These analyses are described briefly in the next subsections.
Figure 1: Connection between UHI and urban building design
### Shadow Analysis
In this research, the case study used buildings located in Precinct 3, Putrajaya, Malaysia. Putrajaya is officially the Federal Territory of Putrajaya, a planned city and the federal administrative centre of Malaysia. First of all, the shadow analysis should be done prior to the other analyses mentioned in this paper, in order to understand how the sun travels across the study area and to find which shadows are cast by surrounding objects and buildings. With this basic understanding, effective results can be gained in the solar absorption and orientation analyses.
Based on Figure 2, it can be seen that this analysis focuses on the sun's behaviour over time. It shows where the shadow is at a particular time (hour). Besides, shadow ranges can be used to see the flow pattern of the shadow; the range outlines are set at hourly intervals over a day. This produces a pattern of shadows running from west to east. The result is helpful for visualizing the full effect of a building on its neighbouring buildings.
### Solar Absorption Analysis
This analysis helps to determine in detail how much of the incident solar radiation is actually absorbed by the building. These calculations take material properties into account, as each material has its own characteristics; to get accurate results, these properties must be correct. For better viewing and understanding, this paper focuses on the Palace of Justice (POJ) building located in Precinct 3, Putrajaya (Figure 3) for this analysis.
In this study, the solar absorption analysis uses hourly recorded direct and diffuse radiation data from the climate file (as illustrated in Figure 4). The climate file used is that of Kuala Lumpur, Malaysia, and the coordinate system used is WGS 84. The time range used for this analysis is from 8:00 a.m. to 5:00 p.m., which includes the time when the sun is at its highest point in the sky at local noon, around 1:00 p.m. The time when temperatures actually peak is around 3:00 p.m.
### Orientation Analysis
Orientation analysis is useful in determining the optimum orientation angle of a building, which can be used in city planning. If a proposed building is oriented at an angle perpendicular to the prevailing wind, this will, during intense UHI events, reduce the chances of air circulating through the urban canyon and removing the heat and pollutants that accumulate between buildings.
This analysis gives information about the orientation angle based on solar radiation and the sun path. Based on this analysis, the building can be designed according to the produced result, so that it receives a minimum amount of solar radiation.
## 4 Results and Analysis
### Shadow Analysis
The height of a building often causes a problem in that it blocks other buildings from receiving sunlight ([PERSON] and [PERSON], 2016). This problem can be visualized using shadow analysis. By conducting the shadow analysis, we can see which buildings are blocked by other buildings at a certain time. The shadow range for a particular time range can be displayed. Besides, animated shadows are used to see the changes of the shadow from time to time.
Shadows are visualized based on the sun's position, which is often a good starting point. It is important to see the effect of a building on the surrounding buildings, for example whether a building is blocking a neighbouring building or is itself blocked by other buildings. Figure 5 shows the range for the shadow analysis from 8:00 a.m. to 5:00 p.m. at hourly intervals. In brief, this analysis shows the shadow areas of a particular site on a horizontal plane over time.
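The sun positions driving such a shadow animation can be approximated with standard formulas. A small sketch using Cooper's declination approximation, ignoring the equation of time and atmospheric refraction; the latitude and day-of-year values are illustrative choices for Putrajaya around the equinox:

```python
import math

def solar_elevation(lat_deg, day_of_year, solar_hour):
    """Approximate solar elevation angle in degrees, using Cooper's
    declination formula; ignores the equation of time and refraction."""
    lat = math.radians(lat_deg)
    # Solar declination (Cooper's approximation)
    dec = math.radians(23.45) * math.sin(math.radians(360.0 * (284 + day_of_year) / 365.0))
    # Hour angle: 15 degrees per hour away from local solar noon
    h = math.radians(15.0 * (solar_hour - 12.0))
    sin_alt = (math.sin(lat) * math.sin(dec)
               + math.cos(lat) * math.cos(dec) * math.cos(h))
    return math.degrees(math.asin(sin_alt))

# Near-equatorial Putrajaya (about 2.9 degrees N): the sun is close to the
# zenith at solar noon, so shadows are shortest in the middle of the day
noon = solar_elevation(2.9, 80, 12)
morning = solar_elevation(2.9, 80, 8)
```

Evaluating such positions at hourly steps and casting shadows accordingly reproduces the hourly shadow ranges described above.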
Figure 4: Solar absorption analysis method
Figure 3: Palace of Justice building
Figure 2: Shadow analysis method
### Solar Radiation Analysis
Solar absorptance is a measure of the proportion of solar radiation a body absorbs. The higher the solar absorptance, the more energy is absorbed, and a body with high solar absorptance will reach a higher temperature than one with lower solar absorptance. The absorbed energy is emitted by radiation and convection from all surfaces.
#### 4.2.1 Building materials
In the solar absorption analysis, besides the climate file and the coordinates of the building, the building materials should be considered, because each type of material has a different capacity to absorb and store heat. In this research, the data for solar absorption are converted to a GIS-ready format (GIS database).
In this study, the heat absorbed by the building is calculated for the duration of 8:00 a.m. to 5:00 p.m. The Palace of Justice building is built from various types of materials (plaster, heat-retention foil, ceramic tile, concrete, asphalt and block plaster) that have different solar absorption capacities. The solar absorption range is from 0 to 1, where 0 indicates the lowest solar absorption and 1 indicates the highest.
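The per-material comparison boils down to scaling the incident radiation by each material's absorptance coefficient. A small sketch; the coefficients and hourly irradiance values below are purely illustrative, not the values from the Ecotect material database:

```python
# Hypothetical absorptance values (fraction 0..1) for illustration only;
# real coefficients must come from the material database of the simulation
MATERIALS = {"plaster": 0.40, "ceramic tile": 0.55, "concrete": 0.65, "asphalt": 0.90}

def absorbed_energy(incident_wh_m2, absorptance):
    """Energy absorbed by a surface: the absorptance is the fraction of
    incident solar radiation that the material retains."""
    return incident_wh_m2 * absorptance

# Illustrative hourly incident radiation on a facade, 8:00 to 17:00 (Wh/m^2)
hourly = [120, 260, 410, 520, 580, 560, 480, 360, 220, 90]
total_incident = sum(hourly)
absorbed = {name: absorbed_energy(total_incident, a) for name, a in MATERIALS.items()}
```

Summing such per-hour contributions per facade, with the true material coefficients, gives the totals visualised in the following figures.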
Using the Ecotect software, the solar radiation absorbed by the Palace of Justice building is shown in Figure 6. The figure shows the simulated amount of solar radiation absorbed by each type of material; intense yellow values indicate warmer surface areas. The total amount of solar radiation absorbed seems nearly identical for each material, although there are some differences between them. Based on this observation, the similar amounts absorbed are caused by the equilibrium reached between the materials, as the time for this analysis was set from 8:00 a.m. to 5:00 p.m., during which a high amount of solar radiation is received.
The conductivity and reflectivity of the materials affect the solar absorption. Each material used has fundamental physical properties that determine its energy performance, such as conductivity, resistance and thermal mass. For the solar absorption analysis, the Palace of Justice building was used to determine the heat absorption from 8:00 a.m. to 5:00 p.m. Table 1 shows the material properties of the building.
#### 4.2.2 GIS format ready
The analysis results derived from the Ecotect software can be exported into a spreadsheet format. For further analysis, the data should be in a GIS-ready format. This relates the information in the geospatial database to the 3D model developed in ArcGIS.
The procedure for importing the results began with a manual extraction from the Ecotect result sets of solar absorption for the one month of simulation (Figure 7). Although the model generates a variety of outputs, such as total incident radiation and total transmitted radiation, only solar absorption have been used in geospatial database format (Figure 8).
Table 1: Materials properties of Palace of Justice building
Figure 8: The information in geospatial database format
Figure 7: Information on total absorbed radiation generated from the Ecotect software in spreadsheet format
Figure 6: Solar radiation absorbed by the Palace of Justice building

Figure 9 shows the simulation of the solar absorption received by the building in the ArcScene software. The roof, dome and wall absorb different amounts of heat: the roof absorbed the highest amount of energy, followed by the dome and the wall.
### Orientation Analysis
Another important aspect that affects building performance is the building orientation. An effective passive solar heating design assumes that the building is orientated to receive as much solar radiation as possible when heating is required, whilst rejecting as much as possible when it is not.
To derive the most effective orientation, the Weather Tool calculates the amount of solar radiation incident on a 1 m\({}^{2}\) vertical surface for each 5\({}^{\circ}\) of orientation angle. Three values are stored for each angle: the average daily radiation taken over the whole year, over the coldest 3 months and over the warmest 3 months. These three values can then be plotted on a polar graph in which the radius of any point from the centre represents the incident radiation value. Figure 10 shows an example for a building located in Kuala Lumpur, Malaysia.
The incident radiation on the east face in the morning and the west face in the evening is much greater than that on the north face during the middle of the day.
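The orientation scan described above can be sketched as follows; the sun-path samples are purely illustrative, and only the direct component on a vertical facade is modelled:

```python
import math

def incident_on_facade(orientation_deg, sun_azimuth_deg, dni):
    """Direct radiation (W/m^2) on a 1 m^2 vertical surface facing
    `orientation_deg`; only the component normal to the facade counts,
    and a facade turned away from the sun receives nothing."""
    cos_inc = math.cos(math.radians(sun_azimuth_deg - orientation_deg))
    return dni * max(cos_inc, 0.0)

# Illustrative sun samples over a day: (solar azimuth in degrees,
# direct normal irradiance in W/m^2)
sun_path = [(95, 300), (120, 550), (180, 700), (240, 550), (265, 300)]

# Scan orientations in 5-degree steps, as the Weather Tool does, and keep
# the daily total per orientation; the minimum marks the "coolest" facade
totals = {o: sum(incident_on_facade(o, az, dni) for az, dni in sun_path)
          for o in range(0, 360, 5)}
coolest = min(totals, key=totals.get)
```

Repeating the scan over the coldest 3 months, the warmest 3 months and the whole year yields the three curves of the polar plot in Figure 10.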
Figure 11 shows the original position of the building, while Figure 12 shows the position after alteration according to the best orientation computed by the analysis. The difference can be seen in that the heat absorbed by the altered building is less than that of the original. This is discussed in the next sub-section.
### Comparison of Solar Radiation Absorptance - After Orientation Change
It is possible to lessen the solar radiation absorbed by the building by changing its orientation; the orientation can help optimize the sunlight received. ArcScene 10.3 was used to simulate the heat absorbed by the building, with graduated colours from green to red used to view the pattern of the heat absorbed.
Figure 13 shows the heat absorbed in the original position of the building. The minimum heat absorbed by the building is 470 W/m\({}^{2}\) and the maximum is 353,210 W/m\({}^{2}\), while the mean is 49,135 W/m\({}^{2}\). The roof, which is made up of concrete and asphalt, absorbed the highest amount of sunlight, namely 353,210 W/m\({}^{2}\); these materials have the highest solar absorption capacity compared to the other types of materials.
Figure 11: The original position of POJ building
Figure 10: Optimum orientation angles based on solar radiation received in the coldest 3 months (blue), the warmest 3 months (red) and over the entire year (green)
Figure 9: The simulation of solar absorption viewed in ArcScene
Figure 14 shows the heat absorbed after the orientation was changed by 37.5°. The minimum heat absorbed by the building is 3,276 W/m\({}^{2}\) and the maximum is 352,910 W/m\({}^{2}\). The mean heat absorbed is 49,075 W/m\({}^{2}\).
The heat absorbed at the original orientation has a lower minimum value than at the new orientation, but its maximum value is higher. The means for the original and altered positions are 49,135 W/m\({}^{2}\) and 49,075 W/m\({}^{2}\) respectively, a difference of only 60 W/m\({}^{2}\).
There is only a slight change in solar absorption after the orientation of the building is changed. Measures such as redesigning the surrounding landscape with more trees or plants would contribute to larger changes in the building's solar absorption. Nevertheless, the new orientation still absorbs less solar radiation than the original one, and the outcome might differ for, and could be studied in, other types of climates.
## 5 Conclusion
From the research and analyses done, height, material properties and orientation are the most important aspects for assessing sustainable buildings, as they determine the most optimum solar radiation for a building. The overall assessment shows that the height of a building should be suitable with respect to the neighbouring buildings. Furthermore, the material properties should be considered carefully in the planning stage in order to reduce the UHI effect: the materials must have low heat capacity, low heat conductivity and high reflectivity of solar radiation. The 3D city model has proved to be a practicable tool for visualising UHI problems and is well suited to sustainable city planning.
## Acknowledgements
This work is partially supported by UTM Research University Grants, Vote Q.J130000.2527.11H78 and Vote Q.J130000.2527.15H49.
## References
* [PERSON] et al. (2015) [PERSON], [PERSON], [PERSON], [PERSON], 2015. A review on natural ventilation applications through building facade components and ventilation openings in tropical climates. Energy and Buildings 101, 153-162.
* [PERSON] (2009) [PERSON], 2009. Cooling our communities. A guidebook on tree planting and light-colored surfacing.
* [PERSON] et al. (2016) [PERSON] [PERSON], [PERSON] [PERSON], [PERSON] [PERSON], 2016. Energy savings in buildings or UHI mitigation? Comparison between green roofs and cool roofs. Energy and buildings 114, 247-255.
* [PERSON] et al. (2017) [PERSON], [PERSON], [PERSON], 2017. Monitoring and assessment of urban heat islands over the Southern region of Cairo Governato, Egypt. The Egyptian Journal of Remote Sensing and Space Science.
* [PERSON] (2012) [PERSON], 2012. Mitigation of the urban heat island of the city of Kuala Lumpur, Malaysia. Middle-East Journal of Scientific Research 11, 1602-1613.
* [PERSON] (1998) [PERSON], 1998. Climate considerations in building and urban design. John Wiley & Sons.
* [PERSON] et al. (2018) [PERSON], [PERSON], [PERSON], 2018. On the assessment of urban heat island phenomenon and its effects on building energy performance: A case study of Rome (Italy). Energy and Buildings 158, 605-615.
* [PERSON] et al. (2016) [PERSON], [PERSON], [PERSON], 2016. Energy performance of building envelopes integrated with phase change materials for cooling load reduction in tropical Singapore. Applied Energy 162, 207-217.
* [PERSON] (1999) [PERSON], 1999. Urban climatology and urban design. ICBC-ICUC 99, 15 th.
* [PERSON] (1982) [PERSON], 1982. The energetic basis of the urban heat island. Quarterly Journal of the Royal Meteorological Society 108, 1-24.
* [PERSON] (2013) [PERSON], 2013. Energy and climate in the urban built environment. Routledge.
Figure 14: The simulation of heat absorbed after altering the position
Figure 12: The position of POJ building after alteration of 37.5°
Figure 13: The simulation of heat absorbed in the original position
* [PERSON] and [PERSON] (2007) [PERSON], [PERSON], 2007. Evaluating the effectiveness of 2D vs. 3D trailhead maps. Proc. 6th ICA Mountain Cartography Workshop, p. 201.
* [PERSON] et al. (2015) [PERSON], [PERSON], [PERSON], [PERSON] [PERSON], 2015. Breakdown of the night time urban heat island energy budget. Building and environment 83, 50-64.
* [PERSON] and [PERSON] (2016) [PERSON], [PERSON], 2016. Sunlight availability and potential food and energy self-sufficiency in tropical generic residential districts. Solar Energy 139, 757-769.
* [PERSON] et al. (2016) [PERSON], [PERSON], [PERSON], 2016. Micrometeorological simulations to predict the impacts of heat mitigation strategies on pedestrian thermal comfort in a Los Angeles neighborhood. Environmental Research Letters 11, 024003.
ISPRS | URBAN HEAT ISLAND MICRO-MAPPING VIA 3D CITY MODEL | U. Ujang, S. Azri, M. Zahir, A. Abdul Rahman, T. L. Choon | 2018 | CC-BY | https://doi.org/10.5194/isprs-archives-xlii-4-w10-201-2018
# Pre-Classification of Points and Segmentation of Urban Objects
by Scan Line Analysis of Airborne LiDAR Data
[PERSON]\({}^{\text{a}}\)
[PERSON]\({}^{\text{b}}\)
###### Abstract
Currently available laser scanners are capable of performing hundreds of thousands of range measurements per second at a range resolution of a few centimeters. Despite that increasing performance, up to now airborne LiDAR mapping has established itself only in fields of application that do not require on-line data processing. A typical example in this context is urban modeling. We want to point out some other tasks like object recognition, situation analysis and on-line change detection that have come into reach of today's LiDAR technology. Primary goal of the work presented in this paper is the on-line data preparation for subsequent analysis. We present a workflow of real-time capable filter operations to detect the ground level and distinguish between clutter and man-made objects in airborne laser scanner data of urban regions. Based on interpretation of single scan lines, straight line segments are first segmented and then connected, and the resulting surfaces are delineated by a polygon. A preliminary step is done towards fast reconstruction of buildings for rapid city-modeling, co-registration or recognition of urban structures.
Laser scanning, LiDAR, airborne remote sensing, on-line processing, classification, segmentation, urban data
## 1 Introduction
### Problem description
Airborne laser scanning (ALS) of urban regions is nowadays commonly used as a basis for 3D city modeling. Typical applications lie in the fields of city planning, tourism, telecommunication, architecture, archeology and environmental protection. A good overview and a thorough description of ALS principles can be found in ([PERSON], 1999). Laser scanning has several advantages compared to classical aerial photography. It delivers direct 3D measurements independently from natural lighting conditions, and it offers high accuracy and point density.
Despite increasing performance of LiDAR systems, most remote sensing tasks that require on-line data processing are still accomplished by the use of conventional CCD or infrared cameras. Typical examples are airborne monitoring and observation devices that are used for automatic object recognition, situation analysis or real-time change detection. Looking at urban regions, these sensors can support law enforcement, firefighting, disaster management, and medical or other emergency services. At the same time it is often desirable to assist pilots with automatic aircraft guidance in case of poor visibility conditions. Three-dimensional information as provided by the LiDAR sensor technology would ease these tasks, but in many cases the complexity of irregularly distributed laser point clouds prevents an on-line data processing.
In another aspect, on-line pre-processing and reducing the ALS data to the essential information are important for efficient data storage and data transfer in a sensor network. Additionally, when combining different data sets, e.g. showing the same urban region in oblique view from different directions, the pairwise co-registration is even more accurate when dealing with structures of higher order like surfaces instead of the original point clouds ([PERSON] & [PERSON], 2005).
In the classical workflow of ALS data processing e.g. for city modelling, the first step is to register all the collected data by using navigational sensors (INS and GPS), resulting in an irregularly distributed 3D point cloud. Automatic processing of these data is quite complex since it is necessary to determine a set of nearest neighbors for each data point to handle search operations within the data set. The common technique for that is the generation of a triangulated irregular network (TIN). This approach leads to most accurate results, but it is not applicable for real-time applications.
### Contribution
Most of currently used airborne laser scanners like the RIEGL LMS-Q560 utilize opto-mechanical beam scanning to measure range values in single scan lines. The third dimension is provided by the moving airborne platform. This paper aims at fast pre-classification of LiDAR points and segmentation of buildings in urban environments based on the analysis of these scan lines. Instead of initial georeferencing of all range measurements, the analysis of geometric features in the respective local neighborhood of each data point is performed directly on the 2D regularly distributed scan line data. These operations can be executed comparatively fast and are applicable for online data processing. This paper presents a workflow of real-time capable operations to detect the ground level and distinguish between clutter and man-made objects in airborne laser scanner data of urban regions. A preliminary step is done towards the fast reconstruction of buildings including their facades for rapid city-modeling or object recognition. The detected rooftops can even be used for a fast co-registration of different views of the same urban region. If an accurate city model is already available, the proposed methods could be used for a structural comparison, e.g. for change detection.
### Related work
Processing of laser scanner point clouds for automatic filtering, segmentation, classification and modeling of structures has been thoroughly studied by many researchers in recent years. In some parts our work follows or extends the ideas presented in other articles that are especially mentioned in this section. According to ([PERSON] et al., 2004), two major classes of segmentation algorithms can be pointed out: surface growing and scan line segmentation. Pointwise growing of planar faces in combination with a three-dimensional Hough transform as described by [PERSON] and [PERSON] (2001) can be used to reconstruct accurate 3D building models, but since it needs a TIN structure this approach is suited only for off-line data processing.
Fundamental ideas on fast segmentation of range images into planar regions based on scan line analysis have been published by [PERSON] and [PERSON] (1994, 1999). Their algorithm first divides each row of a range image into straight line segments, and then it performs a region growing process with these line segments. Since this is the most obvious way of proceeding, we basically adapted that approach to work with our data. Instead of range images we have to deal with continuously recorded scan lines that are not necessarily parallel. [PERSON] and [PERSON] originally used the splitting algorithm of [PERSON] and [PERSON]. [PERSON] (1999) described a classification of points in a scan line based on the second derivatives of the elevation difference. In contrast our filtering of straight line segments is based on a robust estimation technique (RANSAC). This was also proposed by [PERSON] and [PERSON] (2003) for the segmentation of roads in regularized ALS grid data. Since our data is not regularized this way, we had to implement a different method to merge straight line segments into surfaces. [PERSON] and [PERSON] (2003, 2005) used scan line based methods for structure detection in point clouds and filtering of ALS data, but they refer to the term \"scan line\" in a different manner. Instead of processing hardware generated scan lines like we do, they define scan lines with multiple orientations by slicing the point cloud and connecting 3D points based on height difference and slope.
From an application-oriented point of view, only few articles have yet been published on on-line processing of laser scanner data. Examples are automatic uploading of piled box-like objects ([PERSON] & [PERSON], 2004) or the exploitation of LiDAR data to obtain traffic flow estimates, described by [PERSON] et al. (2004).
## 2 Experimental setup
The sensors that are briefly described here have been attached to a Bell UH1-D helicopter to acquire the data shown in this paper.
### Navigational sensors
The APPLANIX POS AV comprises a GPS receiver and a gyro-based inertial measurement unit (IMU), which is the core element of the navigational system. The GPS data are used for drift compensation and geo-referencing, whereas the IMU determines accelerations with high precision. These data are transferred to the position and orientation computing system (PCS), where they are fused by a Kalman filter, resulting in position and orientation estimates for the sensor platform.
### Laser Scanner
The RIEGL LMS-Q560 is a laser scanner that gives access to the full waveform by digitizing the echo signal. The sensor makes use of the time-of-flight distance measurement principle with nanosecond infrared pulses. Opto-mechanical beam scanning provides single scan lines, where each measured distance can be geo-referenced according to the position and orientation of the sensor. Waveform analysis can contribute intensity and pulse-width as additional features, which is typically done by fitting Gaussian functions to the waveforms. Since we are mainly interested in fast processing of the range measurements, we neglect full waveform analysis throughout this paper. Range \(d\) (expressed in meters) under scan angle \(a\) (-30° to +30°) is estimated corresponding to the first returning echo pulse as it can be found by a constant fraction discriminator (CFD). Positions with none or multiple returns are discarded, and with \(x = d\sin(a)\), \(y = d\cos(a)\) the scan line data is given in 2D Cartesian coordinates. Since the rotating polygon mirror operates with 1000 scan positions, one scan line can successively be stored in an array \(A\) of maximum size \((1000, 2)\). In practice, the navigational data assigned to each range measurement also needs to be stored in that array for later georeferencing. Figure 1 illustrates the process of data acquisition and exemplarily shows a scan line measured at an urban area. Buildings appear to be upside down in that representation, because the rooftops are nearer to the sensor than ground level.
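The range-and-angle to Cartesian conversion described above can be sketched as follows; the function name and input layout are illustrative, not from the paper:

```python
import math

def scan_line_to_cartesian(measurements):
    """Convert one scan line of (range d in meters, scan angle a in degrees)
    pairs into 2D Cartesian coordinates via x = d*sin(a), y = d*cos(a).
    Positions with no valid return are assumed to be filtered out already."""
    points = []
    for d, a_deg in measurements:
        a = math.radians(a_deg)
        points.append((d * math.sin(a), d * math.cos(a)))
    return points
```

For example, a return at nadir (scan angle 0°) maps to \((0, d)\), i.e. straight below the sensor in the scanner's plane.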
## 3 Used methods and data processing
Most parts of typical buildings will appear as local straight line segments within the scan line data, even if the airborne laser scanner is used in oblique configuration to obtain information concerning the facades of buildings. Our method is intended to filter the data points in each scan line to keep only those points that are most promising to represent parts of buildings. Consequently, points at ground level and those belonging to objects with an irregular shape like trees or other vegetation are removed. To distinguish between clutter and man-made objects, the RANSAC technique is used to fit straight line segments to the scan line data.
### Random sample consensus (RANSAC)
The random-sample-consensus paradigm (RANSAC) as described by [PERSON] and [PERSON] (1981) is a standard technique to estimate parameters of a mathematical model underlying a set of observed data. It is particularly used in case that the observed data contain data points which can be explained by a set of model parameters (inliers) and such data points that do not fit the model (outliers). To apply the RANSAC scheme, a procedural method has to be available that determines the
Figure 1: Illustration of data acquisition (top) and exemplary scan line (bottom).
parameters to fit the model to a minimal subset of the data. In this paper we use RANSAC to fit line segments to subsets of 2D points in a scan line. If we have a set of \(n\) points \(\{\mathbf{p}_{1},\ldots,\mathbf{p}_{n}\}\) and we assume that this set mostly contains points that approximately lie on a straight line (inliers) and some others that do not (outliers), simple least squares model fitting would lead to poor results because the outliers would affect the estimated parameters. RANSAC estimates a straight line only by taking the inliers into account, provided that the probability of choosing only inliers among the data points is sufficiently high. To compute a straight line, a random sample of two points (the minimal subset) \(\mathbf{p}_{i}\) and \(\mathbf{p}_{j}\) is selected. The resultant line's normal vector \(\mathbf{n}_{0}\) can easily be computed by interchanging the two coordinates of \((\mathbf{p}_{i}-\mathbf{p}_{j})\), altering the sign of one component and normalizing the vector to unit length. This yields the normal vector \(\mathbf{n}_{0}\), and with \((\mathbf{x}-\mathbf{p}_{i})^{\top}\mathbf{n}_{0}=0\) the line's Hessian normal form is given. Given this representation it is easy to check for any other point \(\mathbf{p}\) whether it is an inlier or outlier, simply by computing its distance \(d=|(\mathbf{p}-\mathbf{p}_{i})^{\top}\mathbf{n}_{0}|\) to the previously obtained line. If the distance \(d\) is below a pre-defined threshold, we assess that point as an inlier. The number of inliers and the average distance of all inliers to the line are used to evaluate the quality of the fitted straight line. This procedure is repeated several times in order to converge to the best possible straight line.
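A minimal sketch of this sampling-and-scoring loop, with assumed threshold and iteration counts (not the authors' implementation):

```python
import math
import random

def fit_line_ransac(points, dist_thresh=0.1, iterations=50, rng=random):
    """RANSAC line fit in 2D: repeatedly sample two points, build the line's
    Hessian normal form, and keep the hypothesis with the most inliers.
    Returns (inlier_count, anchor_point, unit_normal) or None."""
    best = None
    for _ in range(iterations):
        p1, p2 = rng.sample(points, 2)
        dx, dy = p2[0] - p1[0], p2[1] - p1[1]
        length = math.hypot(dx, dy)
        if length == 0.0:
            continue  # degenerate sample
        # Normal vector: swap the components of (p2 - p1), flip one sign,
        # normalize to unit length, as described in the text.
        n = (-dy / length, dx / length)
        inliers = [p for p in points
                   if abs((p[0] - p1[0]) * n[0] + (p[1] - p1[1]) * n[1]) < dist_thresh]
        if best is None or len(inliers) > best[0]:
            best = (len(inliers), p1, n)
    return best
```

With four collinear points and one outlier, the best hypothesis collects the four inliers and the outlier is rejected by the distance threshold.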
### Scan line analysis
The following steps are executed to fit straight line segments to the scan line data and to remove irregularly shaped objects:
1. Choose an unmarked position \(i\) at random among the available data in the array \(A\) holding the scan line data.
2. Check a sufficiently large interval around this position \(i\) for available data, resulting in a set \(S\) of 2D points.
3. Set the counter \(k\) to zero.
4. If \(S\) contains more than a specific number of points (e.g. at least six), continue. Otherwise mark the current position \(i\) as discarded and go to step 14.
5. Increase the counter \(k\) by one.
6. Perform a RANSAC-based straight line fitting with the 2D points in the specified set \(S\).
7. If RANSAC is not able to find an appropriate straight line or the number of inliers is low, mark the current position as discarded and go to step 14.
8. Obtain the line's Hessian normal form \((\mathbf{x}-\mathbf{p}_{i})^{\top}\mathbf{n}_{0}=0\) and push the current position \(i\) onto an empty stack.
9. Pop the first element \(j\) off the stack.
10. If the counter \(k\) has reached a predefined maximum and the number of points in \(S\) is high enough, store the 2D normal vector information \(\mathbf{n}_{0}\) at position \(j\) and mark that position as processed.
11. Check each position in an interval around \(j\) that has not already been looked at whether the respective point lies sufficiently near to the straight line. If so, push its position on the stack. Additionally, include the 2D point in a new set \(S^{\prime}\).
12. While the stack is not empty, go to step 9. Otherwise continue with step 13.
13. If the counter \(k\) has reached its maximum (e.g. two cycles), set it to zero and continue with step 14. Otherwise go to step 4 with the new set of points \(S^{\prime}\).
14. Go to step 1 until a certain number of iterations has been performed or no unmarked data is available in the scan line.
In each iteration step we randomly select a position in the array \(A\) of scan line data points and try to fit a straight line segment to the neighboring data at that position. The described RANSAC technique provides a robust estimation of the line segment's parameters, with automatic evaluation of the quality, e.g. by the number of outliers. If the fitted straight line is of poor quality, the data associated with the current position is assessed as clutter. Otherwise, we try to optimize the line fitting by looking for all data points that support the previously obtained line, which is done in steps (9), (10), (11) and (12). These steps actually represent a line growing algorithm. The local fitting of a straight line segment is repeated once with the supporting points to get a more accurate result. The end points of the resulting line segment can be found as the perpendicular feet of the two outermost inliers. These and the 2D normal direction are stored before the method is repeated until all points in the scan line are either assessed as clutter or part of a line segment. Figure 2 shows detected straight lines for one exemplary scan line, depicted with a suitable color-coding according to the normal direction. The median locations of all detected line segments are used as subset of the seed-points in the next scan line to speed up calculations, so the choice in step (1) is no longer \"random\" at this stage.
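The line-growing pass (steps 9-12) can be sketched as below; the array layout, neighborhood window and threshold are assumptions for illustration:

```python
def grow_line_segment(A, start, anchor, normal, dist_thresh=0.1, window=3):
    """Stack-based line growing: starting from a seed position in the scan
    line array A (2D points, None where no return exists), collect every
    position whose point lies within dist_thresh of the line given in
    Hessian normal form by (anchor, unit normal)."""
    visited = {start}
    stack = [start]
    supporters = []
    while stack:
        j = stack.pop()            # step 9: pop a position off the stack
        supporters.append(j)
        # step 11: examine not-yet-visited neighbors of position j
        for k in range(max(0, j - window), min(len(A), j + window + 1)):
            if k in visited or A[k] is None:
                continue
            visited.add(k)
            p = A[k]
            d = abs((p[0] - anchor[0]) * normal[0] + (p[1] - anchor[1]) * normal[1])
            if d < dist_thresh:
                stack.append(k)    # point supports the line
    return sorted(supporters)      # step 12: loop until the stack is empty
```

Note how a point far from the line blocks the growth past it when the window is small, so disjoint collinear pieces stay separate segments at this stage.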
### Detection of the ground level
The next step is to identify all points in the scan line that form the ground level. A line segment at ground level can be characterized by the number of points lying beneath it in an appropriate neighborhood with respect to its 2D normal direction (that number should be near zero). In general, this is not a sufficient condition, but it yields enough candidates for an estimation of the dividing line between objects of interest and ground level. The dividing line is formed by estimates of the sensor-to-ground distance in \(y\)-direction at each position in the scan line. Each newly found line segment, potentially lying at ground level, contributes to that estimate in a neighborhood of its position with respect to its normal direction. Thus, this approach is permissive to unevenness of the terrain (Figure 3). Finally, all line segments lying completely below the dividing line are assessed as ground level, whereas line segments crossing or lying above are classified as part of a building. For increased robustness, the exponentially smoothed moving averages of the dividing line's parameters are perpetually transferred from the previously processed scan lines.
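The perpetually updated dividing-line estimate can be sketched as an exponentially smoothed average per scan position; the mapping-based representation and the smoothing factor are assumptions for illustration, not values from the paper:

```python
def update_ground_estimate(previous, candidates, alpha=0.2):
    """Exponentially smoothed moving average of the sensor-to-ground
    distance (the dividing line), keyed by scan position. New candidate
    ground segments nudge the estimate; unseen positions are initialized."""
    estimate = dict(previous)
    for pos, y in candidates.items():
        if pos in estimate:
            estimate[pos] = (1.0 - alpha) * estimate[pos] + alpha * y
        else:
            estimate[pos] = y
    return estimate
```

Carrying the smoothed estimate from scan line to scan line makes the dividing line robust against single mis-classified ground candidates while still following gradual terrain unevenness.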
Figure 3: Automatically identified dividing line between buildings and ground level in a scan line.
Figure 2: Detected straight line segments in a typical scan line, color-coded according to 2D normal direction.
After most of the clutter and unwanted data points at ground level have been removed, the remaining line segments are likely to belong to planar parts of buildings, e.g. facades or rooftops. With our approach, only the end points of each detected line segment have to be georeferenced to result in correctly positioned straight 3D lines. That reduces the amount of arithmetic operations, since only few points need to be converted with respect to the sensor orientation.
To give a comparison, Figure 4 (a) shows a rendered visualization of a point cloud of an urban area. Each range measurement has been georeferenced and every 3D point is depicted with its associated intensity resulting from off-line waveform analysis. The whole data set contains 1,300,000 points. Both full waveform analysis and converting all points to the global 3D coordinate system are time-consuming. In contrast, Figure 4 (b) shows straight line segments after real-time capable scan line processing. Each line segment is depicted with a color-coding according to its 2D normal direction; the detected ground level is shown in yellow. Here the data set contains only 35,000 line segments classified as building and 15,000 line segments at ground level.
### Merging of line segments within a scan line
Detected straight line segments within a single scan line are often ambiguous and affected by gaps. To merge overlapping or adjacent pieces that are collinear, line-to-line distances have to be evaluated. Let \((P_{1}, P_{2})\) and \((Q_{1}, Q_{2})\) denote two different line segments detected within the same scan line (Figure 5).
\(P_{1}\), \(P_{2}\), \(Q_{1}\), and \(Q_{2}\) are the georeferenced end points in 3D space. Since every range measurement has a time-stamp, \(P_{1}\) and \(Q_{1}\) can be chosen to represent the first recorded end point in the respective line segment; \(P_{2}\) and \(Q_{2}\) are chosen accordingly. Two different distance measures are evaluated to decide whether the two line segments are to be merged or not. The first distance \(d_{1}\) indicates whether \((P_{1}, P_{2})\) and \((Q_{1}, Q_{2})\) are overlapping. In that case it would be zero; otherwise it is set to the minimum Euclidean distance between end points of the two line segments. With the abbreviations \(\boldsymbol{v}_{1}=\boldsymbol{p}_{1}-\boldsymbol{q}_{1}\), \(\boldsymbol{v}_{2}=\boldsymbol{p}_{1}-\boldsymbol{q}_{2}\), \(\boldsymbol{v}_{3}=\boldsymbol{p}_{2}-\boldsymbol{q}_{1}\), and \(\boldsymbol{v}_{4}=\boldsymbol{p}_{2}-\boldsymbol{q}_{2}\), distance \(d_{1}\) is defined as
\[d_{1}:=\begin{cases}\min\left(\left\|\boldsymbol{v}_{1}\right\|,\left\|\boldsymbol{v}_{2}\right\|,\left\|\boldsymbol{v}_{3}\right\|,\left\|\boldsymbol{v}_{4}\right\|\right)&\text{if }\boldsymbol{v}_{i}^{\top}\boldsymbol{v}_{j}\geq 0\ \forall\,i<j\\ 0&\text{otherwise}\end{cases}\tag{1}\]
With \(\boldsymbol{p}_{*}=\left(\boldsymbol{p}_{2}-\boldsymbol{p}_{1}\right)/\left\|\boldsymbol{p}_{2}-\boldsymbol{p}_{1}\right\|\) and \(\boldsymbol{q}_{*}=\left(\boldsymbol{q}_{2}-\boldsymbol{q}_{1}\right)/\left\|\boldsymbol{q}_{2}-\boldsymbol{q}_{1}\right\|\), the parameters of the perpendicular feet of each end point with respect to the other line segment are given as
\[\begin{array}{ll}s_{1}=\left(\boldsymbol{q}_{1}-\boldsymbol{p}_{1}\right)^{\top}\boldsymbol{p}_{*}&t_{1}=\left(\boldsymbol{p}_{1}-\boldsymbol{q}_{1}\right)^{\top}\boldsymbol{q}_{*}\\ s_{2}=\left(\boldsymbol{q}_{2}-\boldsymbol{p}_{1}\right)^{\top}\boldsymbol{p}_{*}&t_{2}=\left(\boldsymbol{p}_{2}-\boldsymbol{q}_{1}\right)^{\top}\boldsymbol{q}_{*}\end{array}\]
The second distance \(d_{2}\) is a measure of collinearity. It describes the sum of all minimal Euclidean distances of end points to the other line segment. Using the above parameters \(s_{1}\), \(s_{2}\), \(t_{1}\) and \(t_{2}\), distance \(d_{2}\) can be expressed as
\[\begin{array}{ll}d_{2}:=&\left\|\boldsymbol{p}_{1}+s_{1}\boldsymbol{p}_{*}-\boldsymbol{q}_{1}\right\|+\left\|\boldsymbol{p}_{1}+s_{2}\boldsymbol{p}_{*}-\boldsymbol{q}_{2}\right\|\\ &+\left\|\boldsymbol{q}_{1}+t_{1}\boldsymbol{q}_{*}-\boldsymbol{p}_{1}\right\|+\left\|\boldsymbol{q}_{1}+t_{2}\boldsymbol{q}_{*}-\boldsymbol{p}_{2}\right\|\end{array}\tag{2}\]
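The two merge criteria described above, the overlap test and the collinearity measure, can be sketched together; the helper lambdas and function name are illustrative, not from the paper:

```python
import math

def segment_merge_distances(p1, p2, q1, q2):
    """Compute the two merge criteria for segments (P1,P2) and (Q1,Q2):
    the overlap distance (zero if the segments overlap, else the minimum
    end-point distance) and the collinearity distance (sum of the four
    end-point-to-line distances). A sketch, not the authors' code."""
    sub = lambda a, b: tuple(x - y for x, y in zip(a, b))
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    norm = lambda v: math.sqrt(dot(v, v))
    v = [sub(p, q) for p in (p1, p2) for q in (q1, q2)]
    # Segments are disjoint when all connecting vectors point the same way.
    disjoint = all(dot(v[i], v[j]) >= 0 for i in range(4) for j in range(i + 1, 4))
    d_overlap = min(norm(x) for x in v) if disjoint else 0.0
    ps = tuple(c / norm(sub(p2, p1)) for c in sub(p2, p1))  # unit direction of P
    qs = tuple(c / norm(sub(q2, q1)) for c in sub(q2, q1))  # unit direction of Q
    foot = lambda o, u, s: tuple(oc + s * uc for oc, uc in zip(o, u))
    d_collinear = (norm(sub(foot(p1, ps, dot(sub(q1, p1), ps)), q1))
                 + norm(sub(foot(p1, ps, dot(sub(q2, p1), ps)), q2))
                 + norm(sub(foot(q1, qs, dot(sub(p1, q1), qs)), p1))
                 + norm(sub(foot(q1, qs, dot(sub(p2, q1), qs)), p2)))
    return d_overlap, d_collinear
```

Two disjoint collinear pieces yield a small gap distance and a near-zero collinearity sum, so both thresholds can be satisfied and the pieces merged.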
Let \(L\) denote the list of all detected line segments within the current scan line. The algorithm to find corresponding line segments works as follows:
1. Initialize the current labeling number \(m\) with 1.
2. Select the next entry \(a\) in \(L\), starting with the first one.
3. If \(a\) is unlabeled, set its label to \(m\) and increase \(m\) by 1.
4. Successively test each line segment \(b\) in \(L\) following after \(a\) whether \(d_{1}(a,b)\) and \(d_{2}(a,b)\) are smaller than predefined thresholds. If so, go to step 5; otherwise continue testing until \(b\) reaches the end of the list \(L\). In that case, go to step 6.
5. If \(b\) is unlabeled, set its label to the label of \(a\). Otherwise set the labels of \(a\) and \(b\) to the minimum of both labels. Continue testing in step 4.
6. Continue with step 2 until \(a\) reaches the end of the list \(L\).
7. Repeat the procedure until labels do not change anymore.
Roughly speaking, the above procedure first initializes each line segment detected in the scan line with a unique label. Those collinear line segments that are found to overlap or lie adjacent are linked together by labeling them with their minimum labeling number. This process is repeated until the labels reach a stable state. The emerging clusters of line segments with the same label are then represented by one single line segment, given by the two outermost end points of that cluster.
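The iterative relabeling can be sketched generically; here `close(a, b)` stands for the combined threshold test on the two distance measures and is an assumed callback:

```python
def merge_labels(segments, close):
    """Give every segment a unique label, then repeatedly relabel pairs for
    which close(a, b) holds with the minimum of their two labels, until the
    labels reach a stable state. Each label cluster is then one segment."""
    labels = list(range(1, len(segments) + 1))
    changed = True
    while changed:
        changed = False
        for i in range(len(segments)):
            for j in range(i + 1, len(segments)):
                if close(segments[i], segments[j]):
                    m = min(labels[i], labels[j])
                    if labels[i] != m or labels[j] != m:
                        labels[i] = labels[j] = m
                        changed = True
    return labels
```

The outer `while` loop implements the "repeat until labels do not change anymore" step; propagating the minimum label transitively connects chains of pairwise-close segments.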
Figure 4: (a) Laser data of an urban area scanned in 45° oblique view, (b) same urban region after scan line analysis.
Figure 5: Two line segments within the same scan line.
### Merging of line segments over several scan lines
In principle, merging of 3D line segments over multiple scan lines is performed similarly to the methods described in the previous section. In contrast to merging of line segments within the same scan line, we are now interested in coplanarity instead of collinearity; thus, other distance measures have to be evaluated. Let \(P_{i}\), \(P_{j}\) and \(P_{k}\) be three of the four end points of two line segments. The distance of the fourth end point \(P_{l}\) to the plane defined by the three others is a measure of coplanarity. We define distance \(d_{3}\) as the sum over all four possible combinations:
\[d_{3}:=\sum\frac{\left|\left(\boldsymbol{p}_{l}-\boldsymbol{p}_{i}\right)^{\top}\left(\left(\boldsymbol{p}_{j}-\boldsymbol{p}_{i}\right)\times\left(\boldsymbol{p}_{k}-\boldsymbol{p}_{i}\right)\right)\right|}{\left\|\left(\boldsymbol{p}_{j}-\boldsymbol{p}_{i}\right)\times\left(\boldsymbol{p}_{k}-\boldsymbol{p}_{i}\right)\right\|}\tag{3}\]
With the notations of section 3.4, \(d_{4}\) is simply defined as the minimum Euclidean distance between the respective first and last end points and the centers of the two line segments:
\[d_{4}:=\min\left(\left\|\boldsymbol{p}_{1}-\boldsymbol{q}_{1}\right\|,\left\|\boldsymbol{p}_{2}-\boldsymbol{q}_{2}\right\|,\left\|\tfrac{1}{2}\left(\boldsymbol{p}_{1}+\boldsymbol{p}_{2}\right)-\tfrac{1}{2}\left(\boldsymbol{q}_{1}+\boldsymbol{q}_{2}\right)\right\|\right)\tag{4}\]
Another distance measure can be expressed by the angle between the direction vectors \(\boldsymbol{p}_{*}\) and \(\boldsymbol{q}_{*}\):
\[d_{5}:=\left|\arccos\left(\boldsymbol{p}_{*}^{\top}\boldsymbol{q}_{*}\right)\right|\tag{5}\]
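The coplanarity measure \(d_{3}\) can be sketched by summing the point-to-plane distance over the four choices of the left-out end point; the helper lambdas are local conveniences, not from the paper:

```python
import math

def coplanarity_distance(p1, p2, q1, q2):
    """d3-style coplanarity measure for two 3D segments: for each choice of
    three end points, the distance of the fourth to the plane they span,
    summed over all four combinations (degenerate triples are skipped)."""
    sub = lambda a, b: tuple(x - y for x, y in zip(a, b))
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    cross = lambda a, b: (a[1] * b[2] - a[2] * b[1],
                          a[2] * b[0] - a[0] * b[2],
                          a[0] * b[1] - a[1] * b[0])
    norm = lambda v: math.sqrt(dot(v, v))
    pts = [p1, p2, q1, q2]
    total = 0.0
    for left_out in range(4):
        i, j, k = [m for m in range(4) if m != left_out]
        n = cross(sub(pts[j], pts[i]), sub(pts[k], pts[i]))
        if norm(n) == 0.0:
            continue  # the three chosen points are collinear; no plane defined
        total += abs(dot(sub(pts[left_out], pts[i]), n)) / norm(n)
    return total
```

Two segments lying in a common plane yield a value near zero, while an end point lifted out of the plane contributes its full plane distance to the sum.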
Labeling of line segments over multiple scan lines works analogously to the labeling procedure within a single scan line. Nevertheless, there are some differences since scan lines are recorded successively. Moreover, simply testing two line segments for coplanarity would allow the label to be handed over at edges of buildings. To avoid that, the 3D normal direction at each line segment has to be estimated.
* Let \(n\) be the number of scan lines to be traced back (e.g. five). \(L_{0}\) denotes the list of line segments in the current scan line, \(L_{1}\) is one scan line behind, \(L_{2}\) is two steps behind and so forth.
* First, test every line segment in \(L_{0}\) for nearness to other line segments in \(\{L_{1},\ldots,L_{n}\}\) in terms of the distance measures \(d_{3}\), \(d_{4}\) and \(d_{5}\). If line segments \(a\) and \(b\) correspond this way, store this link in a database.
* After that, line segments in \(L_{n}\) will receive no further links. The 3D normal direction at each line segment in \(L_{n}\) is estimated by RANSAC-based plane fitting to the set of associated other line segments. If that set contains too few points or the number of outliers is high, the respective line segment is of class 2. Those line segments are typically isolated or near to the edge of a building. All other line segments in \(L_{n}\) belong to class 1 and a 3D normal direction can be assigned to them.
* Initialize all line segments of \(L_{n}\) with new labeling numbers. Test every line segment in \(L_{n}\) for nearness to other line segments in \(\{L_{n+1},\ldots,L_{2n}\}\) in terms of the distance measures \(d_{3}\), \(d_{4}\) and \(d_{5}\) (these links are already established).
* If line segments \(a\) and \(b\) are linked, the following cases may occur:
  * \(a\) and \(b\) are of class 2: do nothing
  * only one line segment is of class 1: the class 2 line segment receives the label of the class 1 element
  * \(a\) and \(b\) are of class 1: if the angle between the associated normal directions (calculated analogously to \(d_{5}\)) falls below a predefined threshold, set the labels of \(a\) and \(b\) to the minimum of both labels
* Continue comparing \(L_{n}\) to \(\{L_{n+1},\ldots,L_{2n}\}\) until labels reach a stable state.
Just for clarification: the labels that are assigned to the scan lines in section 3.4 are independent from those introduced here. The first \(n\) scan lines are required for initialization before the algorithm starts working. At least \(2n\) scan lines are needed before the merging of line segments begins. Line segments that form a connected (mostly planar) surface will successively be marked with the same label until no more fitting line segments are recorded (Figure 6).
### Delineation of connected line segments
For data storage and preparation of subsequent analysis, it is convenient to delineate each completed cluster of connected line segments by a polygon (closed traverse). The 3D normal direction \(\boldsymbol{n}_{c}\) of a cluster \(C\) can be estimated as the weighted average of normal directions of all line segments (weighted by line length). It is easy to determine an affine transformation \(E\) that transforms \(\boldsymbol{n}_{c}\) to the \(z\)-axis, such that the points of _E(C)_ roughly lie in the x-y plane. The boundary is derived by determination of a 2D alpha shape ([PERSON] et al., 1983) with an alpha corresponding to scan line distance.
The basic idea behind an alpha shape is to start with the convex hull. Then a circle of radius \(\alpha\) is rolled around that convex hull. Anywhere the alpha-circle can penetrate far enough to touch an internal point (a line segment) without crossing over a point at the boundary, the hull is refined by including that interior point. Then the alpha shape is transformed back by application of \(E^{-1}\). Finally, we have a set of 3D shapes representing grouped line
Figure 6: Result of scan line analysis and line segment grouping (color corresponds to label).
Figure 7: Exemplary alpha shape of a non-convex cluster.
segments resulting from scan line analysis (Figure 7, Figure 8). Subsequent analysis depends on the problem at hand. For building reconstruction and model generation, the detected planar faces have to be intersected to find edges and corners. Examples for that procedure can be found in ([PERSON], 1999) or ([PERSON], 2005).
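The flattening transform \(E\) can be sketched as a rotation built from an orthonormal basis; function and variable names are illustrative, not from the paper:

```python
import math

def rotation_to_z(n):
    """Build a 3x3 rotation matrix whose rows form an orthonormal basis
    (u, v, n), so that applying it maps the unit cluster normal n onto
    (0, 0, 1) and the cluster points roughly into the x-y plane."""
    cross = lambda a, b: (a[1] * b[2] - a[2] * b[1],
                          a[2] * b[0] - a[0] * b[2],
                          a[0] * b[1] - a[1] * b[0])
    norm = lambda v: math.sqrt(sum(c * c for c in v))
    # Pick a helper axis that is guaranteed not to be parallel to n.
    a = (1.0, 0.0, 0.0) if abs(n[0]) < 0.9 else (0.0, 1.0, 0.0)
    u = cross(n, a)
    u = tuple(c / norm(u) for c in u)
    v = cross(n, u)
    return [u, v, n]

def apply(M, p):
    """Multiply the 3x3 matrix M (list of row tuples) with the 3-vector p."""
    return tuple(sum(M[r][c] * p[c] for c in range(3)) for r in range(3))
```

After the 2D alpha shape is computed in the flattened frame, the inverse rotation (the transpose of this matrix) maps the boundary polygon back into 3D.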
## 4 Discussion and Conclusion
The preceding sections have described a workflow of operations to detect the ground level and distinguish between clutter and man-made objects in airborne laser scanner data of urban regions. Analysis of ALS data is done by scan line analysis instead of the usual TIN based approach. With the proposed methods, surfaces can be segmented \"on-the-fly\", thus enabling ALS to be used for applications that require on-line data processing.
The results presented in this paper were obtained with programs that were developed under MATLAB(r). With that implementation and the data shown in Figure 4, scan line based filtering, line segment grouping and delineation of the detected surfaces take about 6 minutes on a standard PC, whereas the data recording took only 16 seconds. Nevertheless, we feel confident that all computations can be accomplished in real-time if a more efficient implementation were used. Computation time can be reduced considerably, since the proposed techniques have high potential for parallel processing. More crucial than raw computation time are the inherent real-time capabilities of the techniques used: it is undesirable to use an algorithm that requires the whole data set to be present before data processing can start. Our method for scan line grouping described in section 3.5 typically needs the data of only ten consecutive scan lines to be available in memory, which are recorded within 0.1 seconds. Scan line analysis is done by RANSAC iterations, combined with a line-growing algorithm. In principle, RANSAC is "any-time" capable, since its number of iterations can be adapted to the requirements. Speed is increased even more by using the locations of all detected line segments as a subset of the seed-points in the next scan line. Our future work will focus on on-line change detection in urban environments. We will also use the proposed methods to provide an accurate co-registration of different (oblique) views of the same urban region.
## References
* algorithms and applications. ISPRS Journal of Photogrammetry and Remote Sensing, 54, pp. 138-147.
* [PERSON] et al. (1983) [PERSON], [PERSON], [PERSON], 1983. On the shape of a set of points in the plane. IEEE Transactions on Information Theory 29 (4), pp. 551-559.
* [PERSON] and [PERSON] (1981) [PERSON], [PERSON], 1981. Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography. Communications of the ACM 24 (6), pp. 381-395.
* [PERSON] and [PERSON] (2005) [PERSON], [PERSON], 2005. Least squares 3D surface and curve matching. ISPRS Journal of Photogrammetry and Remote Sensing 59 (3), pp. 151-174.
* [PERSON] and [PERSON] (2003) [PERSON], [PERSON], 2003. Extraction of Road Geometry Parameters from Laser Scanning and existing Databases. International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences 34 (3/W13).
* [PERSON] and [PERSON] (1994) [PERSON], [PERSON], [PERSON], [PERSON], 1994. Fast Segmentation of Range Images into Planar Regions by Scan Line Grouping. Machine Vision and Applications 7 (2), pp. 115-122.
* [PERSON] and [PERSON] (1999) [PERSON], [PERSON], [PERSON], [PERSON], 1999. Edge detection in range images based on scan line approximation. Computer Vision and Image Understanding 73 (2), pp. 183-199.
* [PERSON] and [PERSON] (2004) [PERSON], [PERSON], 2004. Edge Detection in Range Images of Piled Box-like Objects. Proceedings of the 17 th International Conference on Pattern Recognition ICPR 2004 (2), pp. 80-84.
* [PERSON] et al. (2005) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], 2005. Automated Delineation of Roof Planes from LIDAR Data. International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences 36 (3/W19), pp. 221-226.
* [PERSON] and [PERSON] (2003) [PERSON], [PERSON] [PERSON], 2003. Automatic Structure Detection in a Point-Cloud of an Urban Landscape. Proceedings of the 2 nd Joint Workshop on Remote Sensing and Data Fusion over Urban Areas URBAN 2003, pp. 67-71.
* [PERSON] and [PERSON] (2005) [PERSON], [PERSON], [PERSON], 2005. Filtering of airborne laser scanner data based on segmented point clouds. International Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences 36 (3/W19), pp. 66-71.
* [PERSON] et al. (2004) [PERSON], [PERSON], [PERSON], 2004. Precise Vehicle Topology and Road Surface Modeling Derived from Airborne LiDAR Data, Proceedings of the ION 60 th Annual Meeting.
* [PERSON] (1999) [PERSON], 1999. Building Reconstruction using Planar Faces in Very High Density Height Data. International Archives of Photogrammetry and Remote Sensing 32 (3/2W5), pp. 87-92.
* [PERSON] and [PERSON] (2001) [PERSON], [PERSON], 2001. 3D Building Model Reconstruction from Point Clouds and Ground Plans. International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences 34 (3/W4), pp. 37-44.
* [PERSON] et al. (2004) [PERSON] [PERSON], [PERSON], [PERSON], [PERSON], 2004. Recognising structure in laser scanner point clouds. International Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences 46 (8), pp. 33-38.
* an Introduction and Overview, ISPRS Journal of Photogrammetry and Remote Sensing 54, pp. 68-82.
Figure 8: Shapes detected in an urban region.
# National Report of Finland for
Photogrammetry, Remote Sensing, GIS and Digital Mapping
1992-1995
[PERSON]
Finnish Society of Photogrammetry
and Remote Sensing
ISPRS Commission VI
###### Abstract
The national report of Finland outlines activities and developments in photogrammetry, remote sensing, GIS and digital mapping during the period 1992-1995. The sectors covered include government, education, research and the private sector. Addresses and WWW URLs of some organizations are also listed.
Finland, National Report.
## 1 Introduction
Mapping in Finland is practised by governmental organizations, municipal surveying offices and private companies. The national organizations concentrate on the small-scale mapping covering the whole country. Large-scale maps are made by municipal surveying offices and private companies.
Research in the fields of photogrammetry, remote sensing, GIS and digital mapping is mainly done in the national organizations, institutes and universities.
Education in surveying at the university level is centred at the Helsinki University of Technology (HUT) in the department of Surveying. Fundamentals of photogrammetry and remote sensing are also taught at some other universities. Education in photogrammetry, remote sensing, GIS and digital mapping below the university level is given in the surveying branches of the State Institutes of Technical Education and in the municipal Espoo-Vantaa Institute of Technology.
In the field of surveying, the engineer's degree is new in Finland. Education at that level is given at the Espoo-Vantaa, Mikkeli, Rovaniemi and Vaasa Institutes of Technology. The first surveying engineers will graduate in 1996.
## 2 Development of Photogrammetry
During the years 1992-1995, development has been aimed at digital photogrammetry, GPS-based flight navigation and GPS-based determination of the projection centres of aerial images. In the field of non-topographic photogrammetry, development has concerned real-time photogrammetric systems, which have been taken into productive use in various industrial applications.
Because private companies did not give any detailed information about their photogrammetric production, exact numbers of photos taken, triangulation points or mapped areas are not listed here. Instead, some facts are pointed out that describe the development of photogrammetry, remote sensing, GIS and digital mapping in Finland during the years 1992-1995.
Analogue aerial images are commonly used for topographic mapping, and digital images have been used for aerotriangulation in some cases. Black-and-white films are the most commonly used in topographic mapping, especially at scales smaller than 1:15000. Colour and colour-infrared films are used almost as often as black-and-white films when the scale is larger than 1:15000. Colour-infrared films are the most commonly used for forest interpretation and classification for forest taxation.
An important development is the implementation of DGPS in flight navigation and aerotriangulation. Projection centres of aerial images are nowadays determined using DGPS for almost all projects, which has significantly reduced the number of ground control points needed. Nearly all triangulation during this period has been done by bundle block adjustment; only in a few cases was analogue model triangulation used. The aim is to use digital images in triangulation, and investigations have therefore been made into how digital aerial image production should be arranged and executed.
Orthoimages have been produced for 1:5000 scale orthoimage maps and for some experimental maps. The aim is to produce orthoimages automatically from digital photographs.
Non-topographic activity has mostly been in the field of real-time photogrammetry. Real-time photogrammetric systems are used in applications such as robot guidance, road maintenance and deformation measurements.
## 3 Development of Remote Sensing
Remote sensing activities have increased during this period. In Finland, the development and applications of remote sensing are mostly concentrated on obtaining information on ice fields to support winter shipping, and on land-use classification, forestry and other environmental tasks.
One environmental task is to monitor pollution and the prevalence of algae in waters. Applications concerning forestry are e.g. forest inventory and monitoring changes in forest condition. A remarkable project in the field of remote sensing has been to check the areas of cultivated land in applications for EU support. Ice field monitoring concentrates on interpreting the ice situation in the Gulf of Bothnia, the Gulf of Finland and the Baltic Sea. The data used are mainly NOAA images; ERS-1 data are also used, and their use has been increasing. A real-time system for transmitting satellite-data products to icebreakers is in operational use.
The Satellite Image Centre, a new national unit, was established in the National Land Survey in February 1995. The centre imports and distributes satellite images, takes care of the initial processing of remote sensing data, and offers educational and technical services for users. One task of the Satellite Image Centre is to maintain contacts with remote sensing organizations of other countries.
Research activities in the field of remote sensing concentrate e.g. on developing more automatic methods for the interpretation and classification of satellite images; rule-based methods, neural networks and fuzzy sets have been under investigation. The use of multi-source image data for interpretation and the radiometric calibration of satellite images have also been intensively studied.
## 4 Development of GIS and Digital Mapping
During the past four years, the focus of mapping activities has been on revision, mapping at larger scales and the production of digital maps. Typical of this period has been the remarkably increased, and still increasing, amount of different numerical map data. Today the latest edition of the Basic Map, scale 1:20000, is available in raster form as datasets of planimetric details, contours, waters and fields. Most of the datasets are also available in vector form. Other products available in digital form are:
* Nordic Map Database, scale 1:2 million
* National Road Database
* Administrative Boundaries
* Digital Elevation Model
* Land Use and Forest Classification.
The Topographic Data System consists of the most detailed general topographic data with nation-wide coverage and the map databases. Its data cover 23 % of the area of Finland. The topographic database is used as a basis for a variety of standard products as well as products customised to users' needs.
The producer of nautical charts in Finland is the Hydrographic Department of the Finnish Maritime Administration. In spring 1995 it published a chart series for yachtsmen in raster form on CD-ROM, as well as coastal charts covering the eastern part of the Gulf of Finland. Nautical charts have been made in numerical form covering the southern part of Lake Saimaa and the waters between the cities of Savonlinna and Kuopio. During the digitisation process, the precision of the charts has been improved using e.g. a new digital coastline.
Besides the National Land Survey, which is responsible for small-scale mapping covering the whole country, private surveying companies and municipal surveying organizations also produce digital maps. They normally produce large-scale maps for purposes such as land use planning and road building. Other large-scale data produced are digital elevation models and digital terrain models. GIS has increased the need for and production of digital map data in cities and other municipalities.
The Geographic Information Centre at the National Land Survey is responsible for stimulating geographic information activities in Finland. Its main tasks are to implement geographic information services and to develop standards and tools. A real-time information service on geographical data is already available. Users can define queries by pointing at entities, attributes and areas and by typing restricting attribute values using a GIS application, for example MapInfo or ArcView2. The service centre routes the modified queries to the supplier where the database containing the needed data exists. The aim of the Geographic Information Centre is to implement data services covering all important national geographic datasets by the year 1996.
## 5 Education and Research
### Education
Education in surveying at the university level is centred at the Helsinki University of Technology (HUT) in the department of Surveying. The annual intake is 55 students, of which 22 study surveying and mapping technology and 33 property economics and law. On average, about 40 students graduate at the M.Sc. level each year. There are different entrance examinations for these two study directions; the system of two separate entrance examinations was taken into use in 1993. University-level education in the fundamentals of photogrammetry and remote sensing, and special courses in the determination of forms and deformations, are also given at the Tampere University of Technology (TUT).
Fundamentals of remote sensing are also taught in the departments of geography and biology at the University of Oulu. Education in the field of remote sensing is given as well at the University of Turku and at the University of Joensuu in the department of forestry.
One remarkable change in education during these four years is that the Espoo-Vantaa, Mikkeli, Rovaniemi and Vaasa Institutes of Technology are now educating engineers instead of technicians. The first surveying engineers in Finland will graduate in 1996.
During the period in question, thirteen M.Sc. theses, two licentiate theses and one doctoral thesis in photogrammetry or remote sensing have been accepted. The dissertation was:
[PERSON] [PERSON]: \"On system development of photogrammetric stations for on-line manufacturing control\".
The licentiate theses were:
[PERSON] [PERSON]: \"Production and Use of Digital Imagery in the GIS Environment\".
[PERSON]: \"Relative Orientation of Multiple Images Using Projective Singular Correlation\".
Post-graduate and supplementary education has been offered at the HUT, at the TUT and at the University of Turku. In 1992 post-graduate seminars were held on "Statistical pattern recognition" and "Satellite image interpretation" at the HUT and the University of Turku respectively. Seminars in 1993 were held at the HUT and at the TUT under the topics "National researcher seminar of surveying and mapping technology" and "The quality of mapping measurements". In 1994 a post-graduate seminar concerning remote sensing was held at the University of Joensuu. In 1995 there was no specific supplementary education in the field of photogrammetry and remote sensing at the university level.
### Research activities
The following is a short overview of research done in Finnish organizations.
Helsinki University of Technology (HUT)
Institute of Photogrammetry and Remote Sensing
Institute of Space Technology
Image processing and dynamic modelling for 3D machine vision systems
The use of projective transformations in reconstructing photogrammetric models
Development of signal based image matching methods II
Digital image processing in remote sensing
Solid modelling
Feature based photogrammetric reconstruction of 3D space
3D video digitising
Sensor fusion and measuring model in 3D vision
Advanced computerisation of the building information system
3D human body
Airborne profilometer
Study for the definition of an airborne microwave radiometer facility
Imaging microwave radiometer
ESA European multisensor airborne campaign
Retrieval of snow and sea ice characteristics from microwave radiometer data in the range 6-90 GHz
Remote sensing of forests, snow and sea ice
Radiometer and scatterometer land applications
Tampere University of Technology (TUT)
Department of Civil Engineering
Production system of digital orthoimages
Stillvideo-based system for facade measurement
The accuracy and quality control of photogrammetric measurements
University of Helsinki (UH)
Department of Forest Mensuration and Management
The use of digital aerial images for mapping of forest biotopes
An expert system for updating forest resource information in the frame of multi-source information
Use of neural networks for information collection for forestry
University of Joensuu (UJ)
Department of Forestry
Department of Physics
A method for monitoring changes in forest condition in Finland
Temporal, spatial and environmental classification of pine reflectance spectra
University of Oulu (UO)
Department of Geography
Land use mapping with Landsat 5 TM imagery
Finnish Geodetic Institute (FGI)
Department of Photogrammetry and Remote Sensing
Department of Cartography and Geoinformatics
Estimation of crop yields by using satellite images
Effects of the atmosphere on the quality of remote sensing images
Aerotriangulation using digital images
The use of movable test images for evaluation of the quality of aerial imaging
Snow field mapping using satellite images
Radiometric calibration of satellite images
GPS in aerotriangulation (jointly with NLS)
Digital orthoimages
Rule-based interpretation methods for satellite images
Optimisation of digitising of images for photogrammetric purposes
Study of JPEG-compression for scanned aerial photographs
Finnish Institute of Marine Research (FIMR)
Real-time ice monitoring
ERS-1 images for ice monitoring (jointly with HUT and TRC)
Geological Survey of Finland (GSF)
Integration of image processing with remote sensing and geological data for geological purposes
Data correction and classification methods for ground and bed-rock investigation
Finnish Environment Institute
GIS and Remote Sensing Unit
Software development for processing NOAA/AVHRR-images
Remote sensing data for the use of agriculture
Remote sensing for investigation of biodiversity
Investigation and modelling evaporation on fields and forests
Technical Research Centre of Finland (TRC)
Space Technology
Parallel processing techniques applied to the mosaicking of large numbers of airborne video images (GLORE-project)
Development of high quality methods for estimation of biomass from high-resolution data like TM and SPOT
Development of methods for agricultural crop growth monitoring for Finnish conditions using satellite images with other data
A forest fire alarm system using NOAA images
SAR interferometry for terrain elevation model generation and for detection of land surface movements
ERS-1 data for estimation of forest damages
An automatic control point measurement system for satellite image rectification using numerical map data
National Land Survey
Geographic Data Centre
P. O. Box 84 (Opastinsilta 12 C)
Fin-00521 Helsinki
URL: http://www.ns.fi/
National Land Survey
Satellite Image Centre
P. O. Box 84
Fin-00521 Helsinki
URL: http://www.ns.fi/
Oy Mapvision Ltd
P. O. Box 8
Fin-02941 Espoo
URL: http://home.kolumbus.fi/~leikas/
STIO Oy
Pohjantie 12 A
Fin-02100 Espoo
Soil and Water Ltd
Itälahdenkatu 2
Fin-00210 Helsinki
Topographic Service of Finnish Defence Forces
P. O. Box 60
Fin-00521 Helsinki
## 6 The Finnish Society of Photogrammetry and Remote Sensing
The Finnish Society of Photogrammetry and Remote Sensing is devoted to the research and development of photogrammetry and remote sensing in Finland, giving e.g. recommendations for aerial photogrammetry. The Society has about 230 members, including about 20 companies.
The most notable part of the Society's work is publishing The Photogrammetric Journal of Finland. The Society is a member of the International Society for Photogrammetry and Remote Sensing.
The chairperson of the Finnish Society of Photogrammetry and Remote Sensing during the period 1996-1998 is Mr. [PERSON], the co-chairperson, beginning in 1995, is Mr. [PERSON], and the secretary, until the end of 1996, is Lic. Tech. [PERSON]. The previous chairperson was Ms. [PERSON] during the years 1993-1995, and the co-chairperson in 1993-1994 was [PERSON].
The Society has published Recommendations for Aerial Photogrammetry in Finland (issue 1/1993) and Instructions for Precise Photogrammetric Mapping (issue 1/1995), both in Finnish.
## 7 Activities in International Organizations
### International Society for Photogrammetry and Remote Sensing (ISPRS)
Prof. Dr. [PERSON] has been the co-chairman of ISPRS Commission III Working Group 4, "Knowledge Based Systems". Mrs. [PERSON] is an honorary member of the ISPRS.
Contact persons of the ISPRS are:
* Commission I [PERSON]
* Commission II [PERSON]
* Commission III [PERSON]
* Commission IV [PERSON]
* Commission V [PERSON]
* Commission VI [PERSON]
* Commission VII [PERSON]
### Organisation Européenne d'Études Photogrammétriques Expérimentales (OEEPE)
The representatives of Finland on the executive committee are Mrs. [PERSON] and Prof. Dr. [PERSON]. The representatives of Finland on the Scientific Commissions are:
* A [PERSON]
* B [PERSON]
* C [PERSON]
* D [PERSON] and [PERSON]
* E [PERSON]
* F [PERSON].
The representatives on the Application Commissions are:
* I [PERSON] and [PERSON]
* II [PERSON]
* III [PERSON]
* IV [PERSON].
# Research on Demonstrate Transportation Development with Heat Map
###### Abstract
A heat map is an intuitive and accurate visualization tool for spatial data, widely used in many fields. By analysing and setting layer weights and technical grades for traffic lines, a heat map can demonstrate the distribution and development level of transportation in a given region, using a combination of inverse distance weighting, histogram equalization, density compensation and repeated parameter iteration. The heat map rule system describes the rules for traffic line weighting, line-to-polygon modification, dot density adjustment, linear fitting, etc., which allow the transportation development index to be extracted and demonstrated accurately. Based on the traffic layers of global geographic data in some Asian and African countries, heat map tests of the transportation development index have been carried out to verify the feasibility and reliability of the approach.
[PERSON] 1, [PERSON] 2 *, [PERSON] 3, [PERSON] 1, [PERSON] 1,
1 National Geomatics Center of China, No.28, Lianhuachi West Road, Haidian District, Beijing, China - [EMAIL_ADDRESS]
2 Bureau of Natural Resources and Planning of Mengyin County, No.198, Mengshan Road, Mengyin County, Shandong, China - liguiyu5186126.com
3 Geo-Compass Information Technology Co, LTD, 10 Floor C Building, Qingdong Business District, Haidian District, Beijing, China - [EMAIL_ADDRESS]
Heat Map, Kernel Density Analysis, Traffic Weighting, Transportation Development Index, Iterative Optimization.
## 1 Introduction
The term \"heat map\" was originally proposed and created by software designer [PERSON] in 1991, use a 2D picture to describe and display real-time financial market information. The initial heat map are some rectangular colour blocks with colour coding, after years of evolution, the heat map on idioms is understood by most people as a smooth and fuzzy thermal map. In the big data application environment based on geographic information, a heat map is generally a thermal map that is visualized through a density function and used to represent the density of points in the geographic map, reflects the difference of observation and measurement in a large spatial range ([PERSON] et al., 2012).
With the great increase in the number of high-resolution satellites in orbit, the capabilities of geographic data acquisition, processing and database building have continuously improved, and global geographic information data are gradually becoming more complete and richer. On this basis, carrying out statistical analysis and in-depth mining of existing global geographic data is particularly important for government decision-making and public services ([PERSON] et al., 2014). Existing global geographic information resources mainly include digital orthophoto maps, digital surface models, digital line graphics, place-name data, land cover and other data. In terms of information extraction and data mining, domestic research in China has mostly focused on traditional remote sensing image and mapping analysis methods such as image target recognition, regional statistics and cluster analysis; exploration of how to combine the essential characteristics and spatial distribution contained in geographic information with the intuitive perception of the public has been somewhat insufficient ([PERSON] et al., 2019). For digital line graphic data, the spatial features, element attributes and expression methods contain much information closely related to human activities, which can serve as an effective data source to reflect the level of economic development and to evaluate social development.
At present, the statistical index system of the transportation industry lacks a "comprehensive index" that can characterize the operational status of transportation and reflect its development trend ([PERSON] et al., 2019). Without such an index to evaluate the development level of transportation, governments and transportation authorities cannot accurately identify the weak links and prominent problems of the industry, resulting in a certain blindness in the planning process that not only wastes resources but also fails to achieve the expected effect.
Based on the traffic network in digital line graphic data, this paper attempts to establish calculation rules for a transportation thermal distribution (a "comprehensive index") and to reflect the development level of transportation in some countries of the world with heat maps in an intuitive way.
## 2 Heat Map Construction on Traffic Network
Traditional thermal calculation is based on points with spatial coordinates; from their attribute values, weights and influence ranges, the stack density and other information at each grid point of the map can be calculated, which accurately and intuitively identifies the spatial distribution of the information of concern. Thermal calculation based on discrete points has been fully applied to geographic information data at all levels. However, in the thermal calculation of the transportation development level based on a traffic network, the following three problems need to be solved:
1. Traffic lines are **linear** features and cannot directly take part in scattered-point spatial calculation.
2. The influence of traffic **grade** on the thermal calculation should be much greater than that of the spatial distribution density alone.
3. The **search radius** and the output **grid size** are closely related to the surface extent and the reliability of the results.
Some scholars have conducted in-depth research on buffering algorithms for linear features ([PERSON] et al., 2009), and ESRI provides corresponding kernel density analysis tools (ArcGIS desktop tools, 2021), which serve as good references. This paper establishes the technical process of thermal calculation of transportation development based on the global traffic network, builds the related rule system, and realizes automatic processing and visual display of transportation development level information.
The thermal calculation process for transportation development mainly includes three parts: traffic weighting, line-to-surface conversion, and optimization and coordination (Figure 1).
The grey dotted frame in the figure marks the processing rules applied at the different stages, comprising dozens of rule items such as weight assignment by route type (railway/highway/subway, etc.), weight assignment by traffic technical grade, automatic integration of segmented traffic lines, calculation of inverse distance weights from line to surface, coordination of distribution/stack density, and comprehensive adjustment of the whole map.
## 3 Key Rules of Calculation
The key point of the thermal calculation is to establish the correspondence between the transportation development level per unit area and the grid thermal value. Traffic type, grade, density, fragmentation, buffer radius, output grid size and other factors all affect the calculated transportation development level, and their influence is a comprehensive one rather than a simple linear superposition. Reasonably formulated thermal calculation rules can effectively avoid excessive influence of any single factor on the judgement of transportation development, comprehensively account for the influence of the various factors on the thermal value, and improve the overall calculation efficiency and accuracy. The following focuses on the rules for traffic weighting, line-to-surface conversion, and optimization and coordination.
### Traffic Weighting
The traffic network data in most global vector feature datasets are organized and stored in the hierarchy "data layer - feature - attribute item". To comprehensively consider the influencing factors, hierarchical weighting rules for data layer, element and attribute need to be established. By setting weights separately and training on samples, reasonable weight-setting rules can be obtained. The traffic weighting rules mainly include:
#### 3.1.1 Traffic type weighting:
For different types of traffic lines, such as railway, highway, subway and light rail, a corresponding development-degree influence weight is set. The initial seed weights are set as railway : road : subway : tram = 5:3:4:4, and the weight of each layer is _WLayer_ (most traffic types are organized by data layer).
#### 3.1.2 Traffic grade weighing:
Each type is then sub-classified by technical grade: railways are divided into single-line and double-line railways; roads are divided into motorway, trunk, primary, secondary, tertiary, minor and very small path. Different grades of subway and tram usually play the same role, so a single weight is used for them. Finally, the seed weights of double-line and single-line railways are set to 8:5, and the seed weights of motorway, trunk, primary, secondary, tertiary, minor and very-small roads are set to 18:12:6:4:2:2:1. The weight of each traffic grade is _VGrade_.
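As an illustrative sketch of how the two weighting levels combine (the tables use the seed values quoted above; the function name and the flat grade weight of 1 for subway and tram are our assumptions):

```python
# Seed layer weights (WLayer) per traffic type, from the text:
# railway : road : subway : tram = 5 : 3 : 4 : 4.
W_LAYER = {"railway": 5, "road": 3, "subway": 4, "tram": 4}

# Seed grade weights (VGrade) within each type, from the text.
V_GRADE = {
    "railway": {"double-line": 8, "single-line": 5},
    "road": {"motorway": 18, "trunk": 12, "primary": 6,
             "secondary": 4, "tertiary": 2, "minor": 2, "very-small": 1},
    # Subway and tram grades play the same role, so one flat weight (assumed 1).
    "subway": {"default": 1},
    "tram": {"default": 1},
}

def line_weight(layer, grade="default"):
    """Combined influence weight of one traffic line: WLayer * VGrade."""
    return W_LAYER[layer] * V_GRADE[layer][grade]
```

Under this scheme a motorway (3 x 18 = 54) outweighs a double-line railway (5 x 8 = 40), which is the kind of balance the sample training is meant to tune.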
### Line to Surface
According to the relationship between the development level of transportation and the position of a traffic line, the score is largest at the position of the traffic line itself and gradually decreases with increasing distance from the line; it reaches zero where the distance from the line equals the specified search radius.
Figure 1: Thermal calculation process

The transportation development index is obtained by inverse-distance-weighted interpolation of the traffic lines. Based on [PERSON]'s quartic kernel function, we defined a kernel function for line features. The line segment and the kernel surface fitted over it are shown in the figure below:
The figure above shows a road line and the kernel surface covering it. The spatial volume enclosed between the kernel surface (traffic score surface) and the base plane equals the product of the traffic line length and the traffic score.
Considering the traffic line weights defined above, suppose that an output cell meets \(n\) types of traffic lines, with \(m\) grades per type, within the search radius \(R\). With \(dist_{ij}\) the distance between the cell and traffic line \(j\) of type \(i\), the component value of the cell is:
\[V_{point}=\sum_{i=1}^{n}\left\{WLayer_{i}\,\frac{1}{R^{2}}\sum_{j=1}^{m}\left[\frac{3}{\pi}\,VGrade_{ij}\left(1-\left(\frac{dist_{ij}}{R}\right)^{2}\right)^{2}\right]\right\},\qquad dist_{ij}\leq R\quad(1)\]
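A minimal numeric sketch of the per-cell accumulation of the quartic kernel (the function name is hypothetical, and the 3/π factor is the standard quartic kernel constant):

```python
import math

def cell_value(dists_and_weights, R):
    """Thermal value of one output cell: every traffic line within the
    search radius R contributes its combined weight (WLayer * VGrade)
    through the quartic kernel (3/pi) * (1 - (d/R)^2)^2, scaled by 1/R^2."""
    total = 0.0
    for dist, weight in dists_and_weights:
        if dist <= R:
            total += (3.0 / math.pi) * weight * (1.0 - (dist / R) ** 2) ** 2
    return total / R ** 2

# One line passing through the cell (d = 0, weight 40) and one exactly at
# the search radius (d = R, weight 54), which contributes nothing.
v = cell_value([(0.0, 40.0), (1000.0, 54.0)], R=1000.0)
```
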
Figure 5: Transportation development index map in Bhutan
Figure 6: Traffic lines classification in Iran
Figure 7: Transportation development index map in Iran
Figure 8: Traffic lines classification in Morocco
Figure 9: Transportation development index map in Morocco
Figure 10: More lines lead to too high value
The left part is basically the same as the right part, but its thermal values are almost twice those of the right part. When the thermal values of too many low-grade traffic lines accumulate at one point, a significant reduction should be applied to suppress this unreasonable cumulative effect. The arctangent function is generally a very useful tool for dealing with this problem. Assuming the thermal value of the \(i\)-th low-grade traffic line is \(X_{i}\) and the combined multiple-impact value is \(Y\), the following arctangent function can be used:
\[Y=\frac{2K}{\pi}\cdot\arctan\left(\sum_{i=1}^{n}\frac{X_{i}}{\alpha}\right)\qquad(3)\]
where \(K\) is the maximum thermal value of the multiple impacts and \(\alpha\) is an adjustment coefficient that prevents excessive growth (commonly set to 3 to 8).
Commonly \(K\) is set to 0.5-0.8 times the multi-impact thermal value of the next higher grade, to minimize the incorrectly high values caused by too many low-grade traffic lines.
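The saturating behaviour of equation (3) can be sketched as follows (function and parameter names are illustrative):

```python
import math

def compress(values, K, alpha=5.0):
    """Equation (3): squash the accumulated thermal values X_i of many
    low-grade lines so the result saturates at K instead of growing
    without bound; alpha (commonly 3-8) controls how fast it saturates."""
    return (2.0 * K / math.pi) * math.atan(sum(values) / alpha)

one_line = compress([1.0], K=10.0)          # single line: far below K
many_lines = compress([1.0] * 100, K=10.0)  # 100 lines: close to K, never above
```
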
#### 4.2.2 **Output cell size influence:**
Compared with roads, railways, subways and trams are important forms of transportation for cities, especially big cities. They are characterized by short routes (compared with inter-city highways), large transportation volumes and a high construction technology level, which clearly reflect the level of regional transportation development. If the evaluation is conducted at the urban-area level, their influence is represented accurately; at the national or even transnational level, however, their influence is clearly underestimated. To avoid this, a fitting relationship between their weight and the output grid size must be established. The weight needs to increase with the grid size, but with a limit: it must not exceed twice the weight of the railway. The empirical formula is as follows:
\[W_{adjust}=A*(\frac{R_{cur}}{R_{max}-R_{min}})W\ \ (4)\]
where \(A\) is the adjustment coefficient (commonly 2), \(R_{cur}\) is the current output cell size, \(R_{max}\) is the maximum output cell size and \(R_{min}\) is the minimum output cell size, all in meters; \(W_{adjust}\) is the adjusted weight of railway, subway and tram, and \(W\) is their initial weight.
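A minimal sketch of Equation 4 follows. The `cap` argument is our own addition encoding the stated limit that the adjusted weight should not exceed twice the railway weight; all names are hypothetical:

```python
def adjusted_weight(W, R_cur, R_min, R_max, A=2.0, cap=None):
    """Scale the weight of railway/subway/tram with the output cell
    size (Equation 4), optionally capped at a maximum value (the text
    limits the result to at most twice the railway weight).

    W:             initial weight of railway, subway and tram
    R_cur:         current output cell size (m)
    R_min, R_max:  minimum and maximum output cell sizes (m)
    A:             adjustment coefficient, commonly 2
    """
    W_adj = A * (R_cur / (R_max - R_min)) * W
    if cap is not None:
        W_adj = min(W_adj, cap)
    return W_adj
```

With a 5 km cell between 1 km and 9 km bounds, for example, an initial weight of 1.0 scales to 1.25, while the optional cap prevents the linear growth from exceeding the stated limit at large cell sizes.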
Figure 11 shows the heat map values in Tehran, Iran, for different output cell sizes. The left image's cell size is 9 kilometers and the right image's is 5 kilometers; the blue lines are railways, subways and trams. With the increase of the output cell size, the impact of the railway (or subway and tram) on transportation development increases accordingly, which objectively reflects its influence at the macro scale.
## 5 Conclusion
This article described research on illustrating transportation development with a new style of heat map. From traffic network data, we weight the main type and traffic grade of traffic lines and then transfer the value of each traffic line to scattered cell values; we normalize the multiple scores contained in the rule system and convert the basic data into dimensionless standard values. After repeated iteration and parameter optimization, together with stack density adjustment and output cell size selection, we obtain a grid map that broadly reflects the degree of transportation development in a specific region.
In the research process, the biggest problem is how to accurately and objectively reflect the actual level of road transportation development on various scales. On the macro scale (such as national or international level), it should accurately reflect the comparative value of different levels of cities, transportation hubs, key routes and other type of transportation regions; at the micro scale (such as city level or county level), it should be able to correctly express the influence difference of urban roads, subway and light rail on the degree of transportation development, so as to be used as the basis of spatial geographical analysis or support urban planning.
Transportation development degree is a comprehensive index. Although the traffic line is the most important factor, stations, docks, cargo distribution centres, aircraft routes and other traffic facilities are also important. To improve the research on a transportation development index derived from traffic network data, the above factors must be fully incorporated in order to deepen and expand this work.
## References
ArcGIS Desktop Tools, 2021. How Kernel Density Works. https://desktop.arcgis.com/en/arcmap/latest/tools/spatial-analyst-toolbox/how-kernel-density-works.htm.
[PERSON], [PERSON], [PERSON], 2009. A commentary of progress in buffer area creation. _Science of Surveying and Mapping_, 34(5):67-69.
[PERSON], [PERSON], [PERSON], [PERSON], 2014. Progress in location big data analysing and processing. _Geomatics and Information Science of Wuhan University_, 39(4):379-385.
[PERSON], [PERSON], [PERSON], 2019. Visualized analysis framework on big geo-information data oriented spatio-temporal events. _Bulletin of surveying and mapping_, (12):101-104.
[PERSON], [PERSON], [PERSON], et al., 2003. Monthly Output Index for the US Transportation Sector. _Journal of Transportation and Statistics_, (2):72-76.
[PERSON], 2011. Research on the Compilation of China Transportation Services Index and Its Application. _Journal of Statistics and Information_, 27(04), 72-76.
Figure 11: Heat maps with different output cell sizes in Tehran (left: 9 km cells; right: 5 km cells); the blue lines are railways, subways and trams.
[PERSON], [PERSON], 2019. Research on construction of comprehensive transportation development index. _Transport Research_, 5(01):8-15.
[PERSON], [PERSON], 2017. Study of cross-border logistics collaboration between China and Southeast Asian countries along the Belt and Road - based on LPI. _Journal of Chang'an University (Social Science Edition)_, 19(4):56-63.
[PERSON], [PERSON], [PERSON], 2012. Spatial distribution chasing calculation method of geo-object based on heat map. _Bulletin of surveying and mapping_, (S1):391-393.
[PERSON], [PERSON], [PERSON], 2013. Construction of transportation service index. _Statistics & Decision_, (06):8-11.
[PERSON], 2019. Thought on How to Participate in the Belt and Road Initiative in Surveying, Mapping and Geo-Information Field. _Geomatics & spatial information technology_, 42(07):77-79.
isprs | RESEARCH ON DEMONSTRATE TRANSPORTATION DEVELOPMENT WITH HEAT MAP | X. Du, G. Li, G. Han, Q. Zhou, S. Lin | https://doi.org/10.5194/isprs-archives-xliii-b4-2022-99-2022 | 2022 | CC-BY | isprs/f1c56922_e84e_4f14_a77b_f598b9f31af4.md
# Accuracy Assessment and Calibration of Low-Cost Autonomous Lidar Sensors
[PERSON]\({}^{1}\)
[PERSON]\({}^{1}\)
\({}^{1}\) University of Houston, Civil & Environmental Engineering,
5000 Gulf Freeway Houston, TX USA - (clglennie or pjhartzell)@uh.edu
###### Abstract
A number of low-cost, small form factor, high resolution lidar sensors have recently been commercialized in an effort to fill the growing need for lidar sensors on autonomous vehicles. These lidar sensors often report performance as range precision and angular accuracy, which are insufficient to characterize the overall quality of the point clouds returned by these sensors. Herein, a detailed geometric accuracy analysis of two representative autonomous sensors, the Ouster OS1-64 and the Livox Mid-40, is presented. The scanners were analyzed through a rigorous least squares adjustment of data from the two sensors using planar surface constraints. The analysis attempts to elucidate the overall point cloud accuracy and presence of systematic errors for the sensors over medium (\(<\) 40 m) ranges. The Livox Mid-40 sensor performance appears to be in conformance with the product specifications, with a ranging accuracy of approximately 2 cm. No significant systematic geometric errors were found in the acquired Mid-40 point clouds. The Ouster OS1-64 did not perform to the manufacturer specifications, with a ranging accuracy of 5.6 cm, which is nearly twice that stated by the manufacturer. Several of the individual lasers within the OS1-64's bank of 64 lasers exhibited higher range noise than their counterparts, and examination of the residuals indicates a possible systematic error correlated with the horizontal encoder angle. This suggests that the Ouster laser may benefit from additional geometric calibration. Finally, both sensors suffered from an inability to accurately resolve edges and smaller features such as posts due to their large laser beam divergences.
Footnote †: This contribution has been peer-reviewed.
## 1 Introduction
There has been an explosion of small form factor, low-cost lidar units commercially available over the past several years. This growth has primarily been a result of the autonomous vehicle market and the need for small and cheap sensors suitable for providing real-time 3D situational awareness. A variety of these low-cost laser scanners have been integrated into unmanned aerial vehicles (UAV), indoor mapping platforms, and autonomous vehicle designs as the primary mapping device for providing obstacle detection and avoidance, e.g., ([PERSON] et al., 2017) and ([PERSON] et al., 2016). Beyond situational awareness, these devices are also being routinely employed as primary data acquisition sensors for high resolution surveying and mapping ([PERSON] et al., 2019; [PERSON] et al., 2017).
However, to date, for a majority of the sensors a systematic evaluation of their accuracy, repeatability and stability has not been presented. Most examination of accuracy for mapping products using these sensors have relied upon spot checks using GNSS check points, or static tests of ranging accuracy versus an external reference, e.g. ([PERSON] et al., 2019). While important for understanding overall mapping precision, it does not provide any understanding of the raw accuracy of the sensor observations, and the possibilities for improving this accuracy should systematic errors be present in the resultant point cloud measurements. A detailed analysis of the sensors in a well-controlled environment is required to determine base observational noise levels and the possible presence of systematic errors in the resultant 3D point cloud. This analysis is fundamental to understanding the capabilities of these sensors for 3D modelling and mapping as well as autonomous vehicle navigation applications. To our knowledge, currently, this type of detailed analysis has only been performed for Velodyne sensors, for example ([PERSON] et al., 2010; [PERSON] et al., 2016).
While an evaluation of all low-cost lidar units currently being employed in 3D surveying and mapping is required, such an exhaustive examination is beyond current resources. Therefore, we have chosen to demonstrate an evaluation methodology using two representative sensors, the Livox Mid-40, and the Ouster OS1-64, with the hope that this framework will provide a basis for analysis and comparison of additional low cost lidar sensors.
Herein, a detailed analysis of the OS1-64 and Livox Mid-40 laser scanners is presented. A preliminary evaluation of the Mid-40, primarily focusing on ranging accuracy, is presented in ([PERSON] et al., 2019). Previous work on similar autonomous scanners (i.e. Velodyne HDL-64E, HDL-32E and VLP-16) has shown that the factory calibration of the instruments was not optimized, that the instruments exhibited temporal instability in their calibration values, and that they required a significant warm-up period to reach steady state ([PERSON] et al., 2013; [PERSON] et al., 2010). With this prior experience in mind, each of the scanners was examined with the following goals: (1) characterization of precision with respect to range, angle of incidence and reflectivity of the target surface, and (2) detection of systematic errors in the resultant point clouds. For the analysis we collected several static datasets from varying locations and orientations with both scanners in a scene with multiple hard-target planar surfaces. The entire control scene was also scanned at high resolution with a survey-grade terrestrial laser scanning system (Riegl VZ-2000) to provide an independent reference. Attempts to identify residual systematic errors using least squares adjustment results constrained to planar surfaces, similar to that reported in ([PERSON], [PERSON], 2006), are also presented.
## 2 Methods and techniques
### Mathematical Formulation
Scanners built for operating on autonomous vehicle platforms are often difficult to analyze in a static environment because they rely on vehicle motion to build up a high-resolution 3D model of their surroundings. In static mode, their fixed laser positions and fields of view make them difficult to calibrate in a traditional sense using signalized targets (see for example ([PERSON], 2007)) because their static sampling density is too coarse to precisely determine target locations. Therefore, an approach using geometric primitives as targets is implemented. Herein, we use planar surfaces as solution constraints, similar to that detailed in ([PERSON], [PERSON], 2010, [PERSON], 2012). The model used for constraining a lidar point to a planar surface is given as:
\[\left\langle\vec{\mathbf{g}}_{k},\begin{bmatrix}\vec{\mathbf{r}}\\ 1\end{bmatrix}\right\rangle=0 \tag{1}\]
where \(\vec{\mathbf{g}}_{k}=\langle g_{1},g_{2},g_{3},g_{4}\rangle\) are the parameters of the \(k\)th planar surface and \(\vec{\mathbf{r}}\) is the 3D lidar point in a global coordinate frame.
For a static analysis and calibration, the raw laser scanner data are normally collected from a number of different locations and/or orientations in order to collect data from differing view geometry. Therefore, any point, \(i\), collected from any of the scanner setups, \(j\), must be converted to a global coordinate frame via a rigid body transformation given as:
\[\vec{\mathbf{r}}_{i}=\mathbf{R}(\omega,\phi,\kappa)_{j}\,\vec{\mathbf{r}}_{ij}+\vec{\mathbf{t}}_{j} \tag{2}\]
where \(\mathbf{R}(\omega,\phi,\kappa)_{j}\) is the rotation matrix from scanner frame \(j\) to the global coordinate frame, \(\vec{\mathbf{t}}_{j}\) is the translation vector between scanner frame \(j\) and the global coordinate frame, and \(\vec{\mathbf{r}}_{ij}\) is 3D lidar point \(i\) in scanner frame \(j\).
The functional model described by the above equations is solved using a standard Gauss-Helmert adjustment model. A detailed discussion of this adjustment model is given in ([PERSON], [PERSON], 2006), and is therefore not repeated here. The solution to the model can either be accomplished by treating the plane parameters as unknown and solving for them simultaneously with the rotation matrix and translation vector in the adjustment, or by treating them as known values from an external reference. For our purposes, the latter case is chosen, with the planar reference surfaces provided by the point cloud from a survey grade terrestrial laser scan collected simultaneously with the tested autonomous scanners.
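As a toy illustration of how Equations 1 and 2 combine (the full Gauss-Helmert adjustment is not reproduced here), the sketch below transforms scanner-frame points into the global frame and evaluates their signed point-to-plane distances. The function names and the omega-phi-kappa rotation order are assumptions for illustration, not the authors' implementation:

```python
import numpy as np

def rotation_matrix(omega, phi, kappa):
    """R(omega, phi, kappa): sequential rotations about the x, y and z
    axes (angles in radians); the composition order is assumed."""
    cw, sw = np.cos(omega), np.sin(omega)
    cp, sp = np.cos(phi), np.sin(phi)
    ck, sk = np.cos(kappa), np.sin(kappa)
    Rx = np.array([[1, 0, 0], [0, cw, -sw], [0, sw, cw]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rz = np.array([[ck, -sk, 0], [sk, ck, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def plane_residuals(points_scanner, R, t, g):
    """Signed distances of transformed points to a known plane.

    points_scanner: (N, 3) lidar points in scanner frame j
    R, t:           rotation and translation of Equation 2
    g:              (g1, g2, g3, g4) plane parameters of Equation 1,
                    with (g1, g2, g3) a unit normal
    """
    pts_global = points_scanner @ R.T + t      # Equation 2, all points at once
    normal, d = np.asarray(g[:3], float), float(g[3])
    return pts_global @ normal + d             # inner product of Equation 1
```

For an identity pose and the plane z = 0, i.e. g = (0, 0, 1, 0), the residual of each point is simply its z coordinate; the adjustment then seeks the pose parameters that minimize these residuals over all planes.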
### Instruments
#### 2.2.1 Livox
The Livox Mid-40 sensor (see Figure 1) has a 38.4\({}^{\circ}\) field of view and employs a unique non-repetitive rosette scanning pattern that increases data density in a fixed direction over time, as demonstrated in Figure 2. Detailed specifications for the Mid-40 sensor are given in Table 1. The sensor weighs less than a kilogram, has a volume of less than 10 cm\({}^{3}\), and is IP67 rated at a price point of $599 USD.
#### 2.2.2 Ouster
The Ouster OS1-64 has a 360\({}^{\circ}\) by 33.2\({}^{\circ}\) (\(\pm\) 16.6\({}^{\circ}\)) field of view with 64 individual laser beams and rotates at a rate of either 10 or 20 Hz. The OS1-64 acquires data in a similar configuration to the well-known Velodyne laser scanners. The sensor weighs less than 0.5 kilograms and has an 8.5 cm diameter and 7.5 cm height with an IP68 rating. The OS1-64 also includes an integrated 3-axis gyro and accelerometer package, the InvenSense ICM-20948. Detailed geometric specifications of the OS1-64 are given in Table 2, and an image of the scanner is given in Figure 3. The price of the OS1-64 is listed as $12,000 USD.
### Datasets
The data capture requirements for a rigorous geometric analysis of the scanners using the planar methodology described in Section 2.1 are a collection area with a number of planar surfaces at a variety of distances and orientations. An ideal location in the University of Houston student center, shown in Figures 4 and 5, was used for the analysis herein. The entire area was scanned at high resolution (\(\sim\)1.0 cm point spacing) using a Riegl VZ-2000 scanner. The VZ-2000 has a specified ranging accuracy of 5 mm and angular resolution of 0.0007\({}^{\circ}\), combined with a small beam divergence of 0.27 mrad (0.015\({}^{\circ}\)), and therefore provides an accurate external reference for characterization of the Livox and Ouster scanners. In order to acquire a highly redundant set of observations, both the Mid-40 and OS1-64 were used to acquire a number of individual scans. The scanners were mounted on a pan-and-tilt tripod and set up at three different locations surrounding the calibration site; approximate locations are shown as yellow numbers in Figure 4. At each instrument set-up location, the pan-and-tilt mount was used to acquire data from the scanner in a variety of orientations. Overall, 40 observations were made for the Mid-40 and 24 for the OS1-64. Each observation consisted of collecting approximately 5 seconds of data. More observations were acquired for the Mid-40 due to its smaller field of view.
### Data Processing
After data acquisition, the 64 laser scans (40 for the Mid-40 and 24 for the OS1-64) first needed to be converted into a format suitable for display, processing and analysis. The Livox Mid-40 data acquisition software has a module that allows the export of raw scan data into an LAS format output file. However, the OS1-64 acquisition software has no such functionality. Therefore, a custom script was written in C++ to convert the raw binary packets, saved in UDP (User Datagram Protocol) format, into an LAS file format using both the WinPcap (www.winpcap.org) library and PDAL (Point Data Abstraction Library - www.pdal.io). The script can be obtained at github.com/pjhartzell/ouster-extract.
The output LAS files were then approximately oriented to the Riegl VZ-2000 dataset using the Alignment tools provided in the software package CloudCompare. The approximate alignments (rotation and translation) were exported from CloudCompare for each scan, and then PDAL was used to apply the transformations to the raw LAS point clouds to roughly reference all scans to a common reference frame.
The roughly aligned datasets were then amalgamated and used to extract a number of planar surfaces in a variety of orientations. Overall, 127 and 133 planes were selected from the Mid-40 and OS1-64 datasets, respectively. The extracted planes were used in a least squares adjustment, using Equations 1 and 2, to determine refined scanner positions and orientations. The residuals from these adjustments were then analysed to determine the precision of each scanner and to investigate the presence of any systematic errors in the acquired datasets.
## 3 Results and Discussion
### Livox
The least squares adjustment of the Livox data contained 40 instrument set-ups in various locations with 127 observed planes. The final least squares adjustment considered 621,323 measurements on these planar surfaces. Statistics on the final residuals of the Livox points from the VZ-2000 reference planes are given in Table 3, and plots of these residuals w.r.t. various observables are given in Figure 6.
The 127 observed planes ranged in distance from 3 to 35 m from the sensor (top panel in Figure 6). The overall standard deviation of the adjusted point cloud is 1.8 cm, which is very near the Livox specification of 2 cm at 20 m (see Table 1). If 2\(\sigma\) outliers are removed (24,996 points or \(\sim\)4% of observations), the overall standard deviation is 1.3 cm. Examination of the top panel in Figure 6 seems to imply that range residuals for the Mid-40 are larger at smaller ranges (i.e. \(<\) 20 m). In fact, if the planar residuals are divided into two groups, those from ranges less than and greater than 20 meters, their standard deviations are 2.1 cm and 0.8 cm respectively; the Livox sensor appears to provide more accurate ranges at longer distances. A larger test field is required to determine whether this lower noise level is consistent over the dynamic range of the instrument. Unfortunately, given the paucity of information regarding the hardware configuration of the Livox sensor, we are unable to offer a possible explanation for this sharp change in range precision.
Figure 3: Ouster OS1-64 Sensor (Source: www.ouster.com)
Figure 4: Photo of Data Collection Area in University of Houston Student Center. Yellow numbers indicate scanner set-up locations
Figure 5: Riegl VZ-2000 Point Cloud of Student Center, False HSV Colored by Planar Surface Normal Direction
The second panel (from top) in Figure 6 plots residuals with respect to angle of incidence on the planar surface. Here, the scatter plot of the residuals has a fairly uniform distribution up until \(\sim\)65\({}^{\circ}\), where there is a significant increase in the dispersion of the residuals. This behavior is consistent with other examinations of both autonomous laser scanners ([PERSON], [PERSON], 2010) and high-accuracy tripod mounted scanners ([PERSON], 2007), and is due primarily to laser beam divergence.
The middle panel in Figure 6 plots planar residuals versus intensity, where the intensity value is the raw reported value from the Mid-40, which is given as a unitless 8 bit value. Higher residuals are found below an intensity of \(\sim\)40. Again, this decrease in accuracy due to a lower SNR is common for laser scanners, see for example ([PERSON] et al., 2017), and therefore not unexpected. There does not appear to be any systematic error correlated with intensity.
The final two panels in Figure 6 show planar residuals with respect to vertical angle and horizontal angle. The angles were calculated based on the raw cartesian coordinates in the scanner's own coordinate system reported in the raw data files. The intent of these plots was to examine whether there was any location-dependent distortion within the instrument field of view. However, an examination of residuals versus both horizontal and vertical angle does not show any obvious systematic trends. This observation was further examined by computing the average RMSE of residuals for a 0.5\({}^{\circ}\) square grid of horizontal and vertical angles (see Figure 7). Overall, no obvious systematic trends can be seen in the grid. It should be noted that ([PERSON] et al., 2019) observed a noise propagation visible in the point cloud, which they termed a "ripple effect": noise artefacts appearing to propagate outward when observing flat planes. We tested our Mid-40 instrument in a similar manner but were unable to duplicate their result.
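The 0.5° gridding of residual RMSE described above can be sketched as follows. `gridded_rmse` is an illustrative helper (assumed, not the authors' code) that bins residuals by horizontal and vertical angle and reports a per-cell RMSE:

```python
import numpy as np

def gridded_rmse(h_ang, v_ang, residuals, cell=0.5):
    """RMSE of planar residuals binned on a square grid of horizontal
    and vertical angles (degrees), as in Figure 7.

    Returns a dict mapping (h_bin, v_bin) indices to the RMSE of the
    residuals falling in that grid cell.
    """
    hi = np.floor(np.asarray(h_ang) / cell).astype(int)
    vi = np.floor(np.asarray(v_ang) / cell).astype(int)
    res = np.asarray(residuals, float)
    rmse = {}
    for key in set(zip(hi.tolist(), vi.tolist())):
        mask = (hi == key[0]) & (vi == key[1])
        rmse[key] = float(np.sqrt(np.mean(res[mask] ** 2)))
    return rmse
```

A systematic distortion in the field of view would show up as a spatial pattern across these cells, while pure noise produces a roughly uniform grid.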
Overall, the geometric performance of the Livox Mid-40 was satisfactory w.r.t. the manufacturer specifications. The expected ranging precision appears to be met overall, and there do not appear to be any significant systematic distortions in the resultant point cloud correlated with the examined observables. Of course, a better understanding of the hardware would be required to state with confidence that no systematic errors remain in the scanner. Given our lack of knowledge of its operating principles, there may be remaining systematic errors that we were simply unable to uncover given the lack of raw observations from the scanner.
### Ouster
The Ouster OS1-64 data least squares adjustment contained 24 instrument set-ups and 133 observed planes. The final least squares adjustment considered 413,765 measurements on these planar surfaces. Statistics on the final residuals of the Ouster points constrained to the VZ-2000 reference planes are given in Table 3, and plots of these residuals w.r.t. various observables are given in Figure 8.
Figure 6: Livox Mid-40 Planar Residuals Standard Deviations Plotted Versus Range, Incidence Angle, Intensity, Horizontal Angle and Vertical Angle
Figure 7: Livox Mid-40 RMSE of Residuals (Color) in meters, plotted as a function of horizontal and vertical angle
The top panel of Figure 8 shows that the observed ranges for the OS1-64 data varied between 3 and 37 m. Within this range, the overall standard deviation of the planar residuals (given in Table 3) is 5.6 cm. If we remove the 2\(\sigma\) outliers from the Ouster results (approximately 21,833 points, or 5.3% of the observations), the resultant standard deviation is 5.0 cm. The specifications for the OS1-64 give varied range precision over the dynamic range of the instrument (see Table 2), but the relevant specifications when comparing to our results are standard deviations of 1.5 cm and 3.0 cm for ranges from 2 to 20 m and 20 to 60 m, respectively. Therefore, as a direct comparison, the planar residual standard deviation was computed separately for these two range envelopes, yielding 6.7 cm (2 to 20 m) and 2.6 cm (20 to 60 m). With this breakdown, for ranges above 20 m the OS1-64 appears to meet specifications, but for shorter ranges the computed standard deviation is \(>\)4 times the specification.
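The per-envelope comparison above amounts to splitting the residuals at the 20 m range boundary and computing a standard deviation on each side. `stddev_by_range` is a hypothetical helper for this, and `np.std` here is the population standard deviation:

```python
import numpy as np

def stddev_by_range(ranges, residuals, split=20.0):
    """Standard deviation of planar residuals for two range envelopes,
    below and above a split distance (20 m here, matching the sensor's
    specification bands)."""
    ranges = np.asarray(ranges, float)
    residuals = np.asarray(residuals, float)
    near = residuals[ranges < split]
    far = residuals[ranges >= split]
    return float(np.std(near)), float(np.std(far))
```

The same split, applied after 2\(\sigma\) outlier removal, reproduces the style of per-band comparison against the manufacturer's stated precision.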
The second panel (from top) in Figure 8 plots residuals versus angle of incidence. For the OS1-64, the increase in residual standard deviations at larger (\(>\)70\({}^{\circ}\)) angles of incidence is not as apparent as for the Livox sensor. This is likely because the effect is masked by the overall larger residuals of the Ouster scanner. As a result, the increase is only readily apparent above \(\sim\)80\({}^{\circ}\).
The middle panel of Figure 8 plots raw intensity (as reported by the Ouster scanner) versus planar residuals. Note that the OS1-64 reports intensity on a 16-bit scale. As would be expected, the lower-accuracy observations occur when the reported intensity is low. However, the drop in accuracy w.r.t. intensity is significantly more pronounced for the Ouster scanner when compared to the Livox residuals in the center panel of Figure 6.
The bottom two panels in Figure 8 show planar residuals plotted versus horizontal angle (bottom) and vertical angle (second from bottom). The vertical angle figure shows a striped pattern; this is a consequence of the configuration of the OS1-64 sensor. The sensor has 64 individual lasers pointed at fixed angles between \(\pm\)16.6\({}^{\circ}\), and therefore each vertical band corresponds to an individual laser in the sensor. The figure clearly shows that the lasers pointed between 0 and -10\({}^{\circ}\) in the scanner's own coordinate system have significantly higher noise levels than the other lasers. This could be an indication of pointing errors for those individual lasers. Finally, the plot versus horizontal encoder angle shows a potential sinusoidal systematic error correlated with angle, although it is likely hard to detect in the small panel plot in Figure 8. These systematic effects suggest that there are also potential calibration pointing errors in azimuth for the individual OS1-64 laser/detector pairs. The systematic error could also be due to a misalignment between the horizontal encoder and the spin axis of the sensor (see ([PERSON] et al., 2013) for a description of this error). Overall, the presence of systematic errors correlated with encoder angle and individual laser suggests an improper calibration or other error sources. The OS1-64 provides raw measurements of range and encoder angle, together with a calibration file, which enable a detailed analysis of unit calibration similar to that done for the Velodyne sensor in ([PERSON], [PERSON], 2010). However, this detailed analysis is beyond the scope of this research and is left as a possible future research direction.
### General Remarks
When comparing the two sensors, it should first be noted that the scales of the graphs in Figures 6 and 8 are different: the y-axis limits for the Ouster plots are double those of the Livox figure to account for the significantly higher noise level of the OS1-64. It is quite clear that overall the Livox sensor significantly outperforms the Ouster sensor. While the Livox sensor has a more limited field of view, its price point is currently about 20x lower than that of the OS1-64.
While the Mid-40 clearly outperforms the OS1-64, there is one large error source common to both scanners that is a direct consequence of their rather large beam divergence (when compared to survey-grade terrestrial laser scanning systems): the inability of the sensors to accurately depict surface edges, as the large beam divergence causes an extended range envelope at edges. Examples of this effect are shown in Figure 9. The green data are from the VZ-2000, while the red points are from the Livox scanner. The figure clearly shows how both the light pole and the edge of the staircase are stretched in the final point cloud. This problem may not be a significant concern for autonomous vehicles, where it is more important to detect the presence of an object, but it would be of significant concern if the sensors were used for a primarily mapping or modeling task.
Figure 8: Ouster OS1-64 Planar Residuals Plotted Versus Range, Incidence Angle, Intensity, Horizontal Angle and Vertical Angle
\begin{table}
\begin{tabular}{|l|c|c|} \hline & **Livox** & **Ouster** \\ \hline Minimum (m) & -0.406 & -0.971 \\ \hline Maximum (m) & 0.457 & 0.752 \\ \hline Mean (m) & 0.001 & 0.002 \\ \hline Std. Dev. (m) & 0.018 & 0.056 \\ \hline \# of \(\pm\)2\(\sigma\) Outliers & 24,996 & 21,833 \\ \hline Std. Dev. (m) w/o Outliers & 0.013 & 0.050 \\ \hline \# of Measurements & 621,323 & 413,765 \\ \hline \end{tabular}
\end{table}
Table 3: Statistics of Planar Residuals after Least Squares Adjustment, Livox Mid-40 and Ouster OS1-64
## 4 Conclusions
A rigorous least squares adjustment constrained to planar surfaces and a high accuracy terrestrial laser scan were used to investigate the geometric accuracy and systematic error sources of the Ouster OS1-64 and Livox Mid-40 lidar sensors. The geometric accuracy of the Livox Mid-40 laser scanner matched the manufacturer specifications for ranging accuracy. The system behaved as expected, showing increased planar errors for decreased lidar intensity returns and increased angle of incidence on the target. No significant systematic errors were found in the resultant point cloud. However, the system does not provide access to raw measurements (i.e., mirror angles and ranges), and information on the internal operation of the system, including scanning and ranging methods, was not available. Therefore, systematic errors may still be present, but were not correlated with the point cloud derivatives against which they were compared (e.g., range, polar angle, intensity).
On the other hand, the Ouster OS1-64 significantly under-performed relative to its stated manufacturer specifications, with a ranging error almost double the stated accuracy. An analysis of the residuals identified possible systematic errors correlated with horizontal encoder angle, and several individual lasers that appeared to have poorer accuracy than the system aggregate. These errors, similar to those discovered for the Velodyne HDL-64E sensor in ([PERSON], [PERSON], 2010), point to the need for a rigorous geometric calibration of the OS1 sensor to improve overall point cloud accuracy and consistency. Fortunately, the math model for the OS1-64 is provided by the manufacturer, along with access to the raw measurements that would enable such a calibration. A detailed geometric calibration of the Ouster sensor is an area of future research.
Finally, owing to the large beam divergences from each of the scanners, they were unable to properly model sharp edges and small features such as poles. While this may not be a problem for obstacle detection and avoidance use cases, the application of the sensors to mapping and modelling scenarios may require special filtering of the final point clouds to remove beam divergence artifacts.
## Acknowledgements
This research was partially supported by grants from the National Science Foundation Instrumentation and Facilities program (#1830734) and the U.S. Army Engineer Research and Development Center Cold Regions Research and Engineering Laboratory Remote Sensing/GIS Center of Expertise. [PERSON] is thanked for his assistance with the data acquisition for this manuscript.
## References
* [PERSON] (2012) [PERSON], 2012. Calibration and kinematic analysis of the velodyne HDL-64E S2 lidar sensor. _Photogrammetric Engineering & Remote Sensing_, 78(4), 339-347.
* [PERSON] et al. (2013) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], 2013. Compact Multipurpose Mobile Laser Scanning System -- Initial Tests and Results. _Remote Sensing_, 5(2), 521-538. [[https://www.mdpi.com/2072-4292/5/2/521](https://www.mdpi.com/2072-4292/5/2/521)]([https://www.mdpi.com/2072-4292/5/2/521](https://www.mdpi.com/2072-4292/5/2/521)).
* [PERSON] et al. (2016) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], 2016. Calibration and stability analysis of the VLP-16 laser scanner. _ISPRS Annals of Photogrammetry, Remote Sensing & Spatial Information Sciences_, 9.
* [PERSON] and [PERSON] (2010) [PERSON], [PERSON], 2010. Static Calibration and Analysis of the Velodyne HDL-64E S2 for High Accuracy Mobile Scanning. _Remote Sensing_, 2(6), 1610-1624.
* [PERSON] et al. (2019) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], 2019. Evaluation of UAV LiDAR for Mapping Coastal Environments. _Remote Sensing_, 11(24). https://www.mdpi.com/2072-4292/11/24/2893.
* International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences_, XLII-2/W17, 233-240.
* [PERSON] et al. (2017) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON] [PERSON], [PERSON], [PERSON], 2017. An intensity-based stochastic model for terrestrial laser scanners. _ISPRS Journal of Photogrammetry and Remote Sensing_, 125, 146-155.
Figure 9: Examples of Beam Divergence Issues with Livox and Ouster Scanners. Green is VZ-2000 data, and red is Livox data. Oblique view of staircase edges on left, and top view of a lightpole on right
isprs | ACCURACY ASSESSMENT AND CALIBRATION OF LOW-COST AUTONOMOUS LIDAR SENSORS | C. L. Glennie, P. J. Hartzell | https://doi.org/10.5194/isprs-archives-xliii-b1-2020-371-2020 | 2020 | CC-BY | isprs/866dfd9d_d196_4607_b640_076d7e6e54e7.md
# Automatic Rail Extraction and Clearance Check with a Point Cloud Captured by MLS in a Railway
Y. [PERSON]
R. [PERSON]
Y. [PERSON]
K. [PERSON]
K. [PERSON]
T. [PERSON]
E. [PERSON]
###### Abstract
Recently, MLS (Mobile Laser Scanning) has been successfully used in road maintenance. In this paper, we present the application of MLS to the inspection of clearance along railway tracks of West Japan Railway Company. Point clouds around the track are captured by an MLS mounted on a bogie, and the rail positions are determined by matching the shape of the ideal rail head to the point cloud with the ICP algorithm. A clearance check is executed automatically with a virtual clearance model laid along the extracted rails. As a result of the evaluation, the error of the extracted rail positions is less than 3 mm. With respect to the automatic clearance check, visual confirmation showed that objects inside the clearance and objects related to the contact line are successfully detected.
MLS, Railway, Rail Extraction, Clearance Check
## 1 Introduction
Rail transportation systems guided by fixed railways cannot avoid obstacles on the railroad. It is therefore essential to detect objects such as trees, signals, buildings, and any structure (temporary and/or permanent) intruding into the space the train moves through, called the "clearance gauge".
Currently, the presence or absence of obstacles inside the clearance gauge is confirmed either with a measuring device that the operator pushes like a wheelbarrow, or by an operator checking visually from the front of a running train; it is difficult to perform this accurately over the entire track.
In this paper, we report an algorithm for automatic clearance checking that classifies Mobile Laser Scanning (MLS) point clouds of the railroad environment, together with the automatic rail extraction required for this clearance check.
## 2 Related Work
With the spread of LiDAR technology, classification methods for point clouds have also been a research focus for several years. Enormous amounts of point data are acquired by airborne (ALS: Airborne Laser Scanning), terrestrial (TLS: Terrestrial Laser Scanning), and mobile (MLS) platforms, whether manned or unmanned, and various practical applications have been proposed for facility management. In the field of railway transportation, several cases of the introduction or utilization of MLS are as follows.
[PERSON] et al (2012) extract rails from point clouds captured by ALS and TLS. In their experiment, points around the rail track are classified into various classes; however, the track geometry is not detected.
[PERSON] et al (2013) describe a method to extract rails from an MLS point cloud. They propose an extraction algorithm consisting of two steps: first detecting the rough position of the rails based on knowledge about the railway, then specifying the detailed position by fitting a general 3-D rail model. Finally, the line connecting the extracted rail positions is smoothed by curve fitting. The accuracy of the extracted rail positions is about 2 cm. The drawbacks are that different model shapes must be applied for special rails such as switches, and that the processing time is long.
[PERSON] et al (2014) perform an experiment on detecting rail track geometry using TLS. They apply the ICP algorithm ([PERSON] et al (1992)) to match CAD rail models to a point cloud and extract the rails. Their results show a difference of about 2.5 mm between the ground truth and the extracted rail positions.
[PERSON] et al (2016) present an experiment on measuring rail track and checking clearance with two types of MLS. They conclude that the clearance check can be performed with an accuracy of 2 cm to 3 cm, but do not mention the details of the rail extraction.
[PERSON] et al (2017) perform an automatic clearance inspection of a railway tunnel with an MLS point cloud. They conclude that the accuracy of rail detection is within 3 cm and that their method can meet the requirements of clearance inspection.
In this paper, we develop a high-speed, high-precision rail detection algorithm. To verify its accuracy, the rail positions extracted automatically by the proposed algorithm are compared continuously over a 5 km section with measurements from a track geometry car that acquires high-precision track data. Additionally, for the clearance check, we propose an algorithm that detects objects within the clearance gauge while excluding overhead lines and catenary equipment (hereinafter collectively called "catenary equipment") in electrified sections.
## 3 Algorithm
### Automatic rail extraction
Figure 1 shows the criteria for the clearance gauge of a conventional line in West Japan Railway Company (hereinafter "JR-West"). In order to detect objects inside this range in a 3-D point cloud, the frame of the clearance must be positioned accurately, but the frame position is defined on the basis of the track center line and the rail positions. Therefore, we developed an algorithm for extracting the rail positions from the point cloud prior to further processing.
Figure 2 shows the overall processing flow of the rail position extraction algorithm. In the proposed method, the rail position is extracted at a constant interval \(\Delta D\).
First, the point cloud is clipped with a width L at every interval \(\Delta D\) along the track, using the trajectory derived from the MLS.
Second, the clipped point cloud of width L is projected onto the plane perpendicular to the trajectory direction. Next, the position of the gauge corner (hereinafter "GC"), the inside corner of the rail head, is extracted by matching the shape of the ideal rail head to the projected point cloud with the ICP algorithm. The rail-head shape is used instead of the whole rail profile to avoid mismatching the GC height due to rail wear.
After extracting the GCs, the track center point, the gauge (distance between the two rails), and the cross level (height difference between the two rails) are calculated from the extracted right and left GCs (Figure 3). Finally, the rail geometry and the track center line are defined by sequentially connecting the GC positions and the track center points.
In this experiment, we set the parameters for clipping and projecting the point cloud to \(L\)=0.2 m and \(\Delta D\)=1.0 m, considering the influence of steep curves and gradients of the track geometry.
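The GC extraction step described above is essentially a 2-D rigid registration: the ideal rail-head profile is matched against the projected cross-section with ICP. A minimal point-to-point ICP sketch under that reading (the template shape, iteration counts, and tolerances below are illustrative assumptions, not JR-West's actual rail profile or parameters):

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Least-squares 2-D rigid transform (rotation R, translation t) mapping src onto dst."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:            # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cd - R @ cs
    return R, t

def icp_2d(template, scan, iters=50, tol=1e-9):
    """Align a rail-head template profile to a projected cross-section by point-to-point ICP."""
    src = template.copy()
    prev_err = np.inf
    err = prev_err
    for _ in range(iters):
        # brute-force nearest-neighbour correspondences (profiles are small)
        d2 = ((src[:, None, :] - scan[None, :, :]) ** 2).sum(axis=-1)
        nn = scan[d2.argmin(axis=1)]
        R, t = best_rigid_transform(src, nn)
        src = src @ R.T + t
        err = np.linalg.norm(src - nn, axis=1).mean()
        if abs(prev_err - err) < tol:
            break
        prev_err = err
    return src, err
```

The GC would then be read off a designated vertex of the aligned template; a production implementation would also need outlier rejection for points not belonging to the rail head.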
### Automatic clearance check
Detection of objects inside the clearance gauge is executed by fitting the virtual frame of the clearance gauge to the 3-D point cloud. The position of the clearance gauge is automatically derived from the track center line extracted by the method of Section 3.1, and the presence or absence of points within the frame is checked.
Figure 1: The clearance gauge of a conventional line in JR-West
Figure 3: Track center, gauge and cross level
Figure 2: The overall algorithm of the rail extraction

Figure 4 shows the clearance frame extruded along the extracted track center line at intervals of \(\Delta D\). The points within the space enclosed by the surfaces are classified as obstacle objects. This process is performed in sequence, which makes it possible to confirm the clearance in all sections.
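Per extruded slice, the obstacle test reduces to a 2-D point-in-polygon check in the plane perpendicular to the track. A minimal sketch (the square gauge outline and the coordinate convention, lateral offset and height, are illustrative assumptions rather than JR-West's actual frame):

```python
def point_in_polygon(pt, poly):
    """Ray-casting test: is the 2-D point pt inside the polygon poly (list of vertices)?"""
    x, y = pt
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        if (y1 > y) != (y2 > y):                       # edge crosses the scanline at y
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def clearance_violations(points, gauge_poly):
    """Return the cross-section points that fall inside the clearance gauge polygon."""
    return [p for p in points if point_in_polygon(p, gauge_poly)]
```

Running this on each clipped slice, with the polygon positioned on the extracted track center, flags candidate obstacle points section by section.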
In curved sections, the clearance gauge is expanded to the left and right; in JR-West, the expansion is defined as follows.
\[\text{W}=\frac{23100}{R} \tag{1}\]
Where \(W=\text{expansion width }[\text{mm}]\)
\(R=\text{curve radius }[\text{m}]\)
\(R\) could be set to the value recorded in the facility register when calculating \(W\); in this method, however, we use the radius calculated from the geometry of the automatically extracted track center line. The curve radius is computed from the track center position at the point of interest and the track center positions 50 m ahead of and behind it.
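The three-point radius estimate described above (the current track-center point plus the points 50 m ahead and behind) is the circumradius of the triangle those points form; Eq. (1) then gives the widening. A sketch under that reading:

```python
import math

def circumradius(p1, p2, p3):
    """Radius of the circle through three 2-D points (track center positions)."""
    a = math.dist(p2, p3)
    b = math.dist(p1, p3)
    c = math.dist(p1, p2)
    # twice the signed triangle area via the cross product
    cross = (p2[0] - p1[0]) * (p3[1] - p1[1]) - (p2[1] - p1[1]) * (p3[0] - p1[0])
    area = abs(cross) / 2.0
    if area == 0.0:
        return float("inf")                 # collinear points: straight section
    return a * b * c / (4.0 * area)

def expansion_width_mm(radius_m):
    """JR-West curve widening of the clearance gauge, W = 23100 / R (Eq. 1)."""
    return 23100.0 / radius_m
```

For example, a 300 m curve yields an expansion of 77 mm on each side.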
In electrified sections of a railway, catenary equipment is inevitably contained within the clearance gauge, so it must be ignored during the clearance check.
Our proposed method excludes the points related to catenary equipment from the clearance judgment by combining PCA (Principal Component Analysis), RANSAC (RANdom SAmple Consensus), and a region growing method. Figure 5 shows this flow.
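Of the three ingredients named above, RANSAC lends itself most directly to a sketch: contact wires are long linear features, so a robust 3-D line fit tends to isolate their points from the rest of the overhead cloud. A minimal version (the distance threshold and iteration count are illustrative assumptions, not the authors' parameters, and the PCA and region-growing stages are omitted):

```python
import numpy as np

def ransac_line_3d(points, n_iter=200, tol=0.05, rng=None):
    """RANSAC fit of a 3-D line; overhead contact wires show up as long linear inlier sets."""
    pts = np.asarray(points, dtype=float)
    rng = np.random.default_rng(rng)
    best_inliers = np.zeros(len(pts), dtype=bool)
    for _ in range(n_iter):
        i, j = rng.choice(len(pts), size=2, replace=False)
        p, q = pts[i], pts[j]
        d = q - p
        norm = np.linalg.norm(d)
        if norm < 1e-9:
            continue
        d = d / norm
        # point-to-line distance: || (x - p) - ((x - p)·d) d ||
        v = pts - p
        proj = v - np.outer(v @ d, d)
        dist = np.linalg.norm(proj, axis=1)
        inliers = dist < tol
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return best_inliers
```

The inlier mask would then seed a region-growing step that collects the remaining hardware attached to the detected wire.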
## 4 Data Acquisition
In this experiment, we acquired a point cloud around the rail track using an MLS mounted on a bogie pulled by a motor car. Table 1 shows the specifications of the MLS, and Figure 6 shows the setup of this measurement test. Figure 7 shows an example of the obtained point cloud. Visual observation of the point cloud showed that not only the track but also many structures and facilities, such as station platforms, tunnels, signals, bridges, indicators and contact lines, were clearly captured.
We checked the condition of the rail extraction by visual confirmation. Figure 8 and Figure 9 show an example of the rail extraction.
As a result of the confirmation, the rail position could be extracted within about 10 mm error in many sections.
In order to evaluate the accuracy of the extracted rail positions more quantitatively, we compared the gauges and cross levels calculated from the MLS point cloud with those measured by a track geometry car belonging to JR-West over a 5 km section.
For this evaluation, it is necessary to select a section where the GNSS reception of the MLS is good, so that the kiloposts of both datasets can be matched; it is also desirable to evaluate the differences over various line geometries. Figure 10 indicates the slopes and curves of the test section that satisfies these conditions.
An example of a cross section at a rail extraction position is shown in Figure 13. The rail position is detected correctly in many sections, in both straight and curved track ([a][b]). On the other hand, there are some parts where the rail is detected at an inappropriate position because the scatter of the points on the top of the rail head is large ([c]). Additionally, at railroad crossings the rail extraction fails completely ([d]); another algorithm is therefore necessary at such points.
With respect to the automatic clearance check, we also performed the processing with \(\Delta D\)=1 m. Visual confirmation showed that the points inside the clearance gauge and the points related to catenary equipment could be successfully detected. Figure 14 shows an example of the extraction.
Table 3 compares, for the four curve sections included in the test section, the average curve radius calculated every 1 m with the method using three center positions at 50 m intervals against the nominal value of the curve radius. Although the gap from the nominal value is about 4 % in curve C, the difference in the expansion amount of the clearance gauge
## 6 Conclusion
In this paper, we discussed algorithms for automatic rail extraction and clearance checking with MLS point clouds. As a result of the verification, for the gauges and cross levels, the standard deviation of the gap between the values calculated by this algorithm from the MLS point cloud and the high-accuracy measured values was less than 3 mm, which can be considered a good result. In addition, the clearance check was also successful according to visual inspection.
MLS can be an effective tool not only as a clearance-checking device but also as a monitoring system for railway facilities, if the MLS point cloud can detect secular changes in the shape of facilities that are difficult to find by physical inspection.
In the future, we plan to further develop MLS for railway operations.
## References
* [PERSON] (1992) [PERSON] and [PERSON], 1992. A method for registration of 3D shapes, IEEE Trans. On PAMI, 14(2), pp.239-256.
* [PERSON] et al. (2012) [PERSON] et al., 2012. Automatic 3D modelling of train rails in a lidar point cloud, MSc thesis, University of Twente Faculty of Geo-Information and Earth Observation (ITC).
* [PERSON] et al. (2016) [PERSON], [PERSON], [PERSON] [PERSON], [PERSON] [PERSON], [PERSON] and [PERSON], 2016. Mobile Laser Scanning Systems for Measuring the Clearance Gauge of Railways: State of Play, Testing and Outlook, Sensors, 16(5), 683.
* [PERSON] et al. (2013) [PERSON], [PERSON], [PERSON], [PERSON], 2013. Rail track detection and modelling in mobile laser scanner data, ISPRS Annals of the Photogrammetry. Remote Sensing and Spatial Information Sciences, Volume II-5/W2, pp.223-228.
* [PERSON] et al. (2014) [PERSON], [PERSON] and [PERSON], 2014. Extracting rail track geometry from static terrestrial laser scans for monitoring purposes, The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Volume XL-5, pp.553-557.
* [PERSON] et al. (2017) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON] and [PERSON], 2017. Railway Tunnel Clearance Inspection Method Based on 3D Point Cloud from Mobile Laser Scanning, Sensors, 17(9), 2055.
Figure 14: An example of extracting points inside the clearance and points related to catenary equipments
Figure 15: The relationship between distance and curve radius in curve A
isprs | AUTOMATIC RAIL EXTRACTION AND CLEARANCE CHECK WITH A POINT CLOUD CAPTURED BY MLS IN A RAILWAY | Y. Niina, R. Honma, Y. Honma, K. Kondo, K. Tsuji, T. Hiramatsu, E. Oketani | https://doi.org/10.5194/isprs-archives-xlii-2-767-2018 | 2018 | CC-BY | isprs/ca7caed1_2761_4c1d_aabb_b9a2ff93d08c.md
# 3D interpretation and fusion of multidisciplinary data for heritage science: a review
[PERSON] 1 *, [PERSON] 2
1 Department of Computer Science, Universita degli Studi di Torino, Corso Svizzera 185, Torino, Italia 2 Department of Architecture and Design, Politecnico di Torino, Viale Mattioli 39, Torino, Italia [EMAIL_ADDRESS], [EMAIL_ADDRESS]
Footnote 1: https://doi.org/10.5194/isprs-archives-XLII-2-W15-17-2019 / Authors 2019, CC BY 4.0 License.
###### Abstract
Activities related to the protection of tangible heritage require extensive multidisciplinary documentation. The various raw data produced have oftentimes been processed, visualized and evaluated separately, leading to aggregations of unassociated information of varying data types. Toward adopting complete approaches and more effective decision making, the interpretation and fusion of these data in three dimensions, incorporating topological information, is deemed necessary. The present study addresses the achieved level of three-dimensional interpretation and fusion of data originating from different fields with geometric models, by providing an extensive review of the relevant literature. Additionally, it briefly discusses perspectives on techniques that could potentially be integrated with point clouds or models.
heritage, data interpretation, modelling, multi-sensor, multi-spectral, data fusion
Footnote †: This contribution has been peer-reviewed.
## 1 Introduction
The necessity of tangible heritage documentation and analysis has been emphasized multiple times through international agreements and conventions, including the Lausanne Charter (ICOMOS, 1990) and the Krakow Charter (ICOMOS, 2000). Additionally, the collection and exploitation of information from different disciplines has been underlined as a means to effectively interpret heritage objects and to plan preservation and conservation treatments, as mentioned in [PERSON] et al. (2015). As discussed in [PERSON] et al. (2012), the recording of cultural heritage objects should be thorough, multiscale and cover a wide scope of information, so that it can assist inspection, diagnosis, intervention studies, pilot and final intervention works, as well as assessment and monitoring processes. The multidisciplinary recording and analysis of tangible heritage usually refers to archaeological, architectural, morphological and structural surveys, and investigation procedures concerning building materials, decay, past interventions and state of conservation ([PERSON] et al., 2003). That translates to the collection of metric, scanning, spectral, chemical, geophysical, constructional and climatic data ([PERSON] et al., 2017 and [PERSON] et al., 2017).
Past failures involving fragmentary approaches, and the aggregation of large, unintegrated bodies of heritage data that led to incompatible interventions, have resulted in the realization that holistic, interdisciplinary, continuous documentation enables accurate decision making concerning built heritage protection. These experiences showed that the greater the level of integration of heritage data, the more useful the results. As presented in [PERSON] et al. (2014) and [PERSON] (2015), data fusion in cultural heritage can be performed at different levels. This paper gives an overview of the different fusion approaches adopted in the heritage science field to model data from different techniques (specifically Laser Scanning, InfraRed Thermography (IRT), Spectral and Multi-Spectral Imaging, Ultrasonic Testing (UT), Ground Penetrating Radar (GPR), and X-Ray Radiography and Computed Tomography) in combination with geometric data, as well as the relevant three-dimensional approaches developed to interpret them, since data originating from non-imaging fields have very rarely been fused with 3D models produced from the point clouds offered by geometry-oriented sensing techniques.
## 2 Laser scanning
Terrestrial laser scanners (TLSs) utilize electromagnetic waves to measure distances, usually in the infrared range (wavelengths from 1 mm down to 700 nm). These sensors are not only useful for the rapid three-dimensional recording of large volumes of geometric data, but also collect reflectance intensity information that depends on the characteristics of the materials. As the attribution of raw intensity data is point based and takes place during data acquisition, no further fusion techniques are usually needed. However, as demonstrated in [PERSON] and [PERSON] (2018), the correction of TLS intensity data with data-driven models is a necessary step for close-range applications before data interpretation. TLS intensity data can then be visualized as grey scales or pseudo-colours on point cloud vertices, on mesh faces (using any common texturing algorithm), or on ortho-image pixels. Since different building materials and deterioration products present different radiometric properties, intensity data can be exploited to obtain classification results that would otherwise require time- and cost-consuming manual inspection and analyses.
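The unsupervised classifications cited below essentially cluster the (corrected) intensity channel. A minimal sketch, assuming a simple 1-D k-means with quantile initialisation (the studies reviewed typically work on ortho-image pixels with more elaborate classifiers):

```python
import numpy as np

def kmeans_1d(values, k=2, iters=50):
    """Unsupervised clustering of per-point TLS intensity values (1-D k-means)."""
    # deterministic initialisation: spread the centers over the value range
    centers = np.quantile(values, np.linspace(0.0, 1.0, k))
    for _ in range(iters):
        labels = np.abs(values[:, None] - centers[None, :]).argmin(axis=1)
        new = np.array([values[labels == c].mean() if (labels == c).any() else centers[c]
                        for c in range(k)])
        if np.allclose(new, centers):
            break
        centers = new
    return labels, centers
```

Mapping the resulting labels back onto the point cloud or ortho-image gives a rough material/deterioration segmentation, which the cited works refine with radiometric calibration and domain knowledge.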
[PERSON] et al. (2010) performed unsupervised classification on ortho-images with intensity data from a terrestrial laser scanner (TLS) to identify weathering on stone walls at the ruins of Santo Domingo, located in the heart of the city of Pontevedra, Galicia (Spain). [PERSON] et al. (2015) evaluated the unsupervised classification of intensity ortho-images of Villamayor Stone in Salamanca (Spain) and were able to identify pathologies such as humidity and biodeterioration, assess the total surface affected by rock damage, and measure the rock surface involved in scaling and fissuring. Additionally, [PERSON] et al. (2016) used intensity ortho-images produced from LiDAR data for automatic morphologic analysis and geometric segmentation of quasi-periodic masonry walls in Guimarães (Portugal). For accurate studies of materials and deterioration, raw intensity information can be converted to surface reflectivity data through radiometric calibration of the instrumentation, enabling quantitative and qualitative multitemporal assessment. Radiometric calibration depends on the scanning geometry, the material surface properties and the specifics of the instrumentation. [PERSON] et al. (2016) used corrected reflectivity data from TLS to produce textured partial models of the Cathedral of Ciudad Rodrigo, in the province of Salamanca (Spain), to detect moisture. [PERSON] and [PERSON] (2018) utilized point clouds with reflectivity data to assess damage on historic Chinese structures. Furthermore, [PERSON] et al. (2018) used models and extracted ortho-images with reflectance values (after radiometric calibration of the TLS) to map moisture, salt crusts and biological colonization on the San Francisco Master Gate of the Almeida Fortress (Portugal).
## 3 Infrared thermography
Thermography is an imaging technique of the infrared range, with thermographic cameras usually able to detect radiation of 9 \(\upmu\)m-14 \(\upmu\)m. In heritage science, IRT is usually associated with assessing the state of preservation of historic structures to identify pathologies related to moisture. A drawback of this method is that it is affected by environmental factors, and specialized calibration is thus required to acquire accurate results. Additionally, compared to consumer-level digital cameras, thermal cameras have significantly lower resolutions, commonly ranging from 160 x 120 to 640 x 480 pixels. Thus, sensing and interpreting the thermal characteristics of historic structures requires the collection of large numbers of thermal images ([PERSON] et al., 2015). Since the manual analysis of these large datasets is very difficult, thermal modelling has often been explored as a more complete means of thermographic inspection, which additionally provides invaluable geometric information for the damaged areas.
A simple method that allows geometric and thermal measurements to be performed on the same product is photogrammetric image rectification, which requires calibration of the thermal camera ([PERSON] et al., 2016). A common method of thermal modelling is the separate reconstruction of the geometry through laser scanning or photogrammetry, followed by fusion with the thermal data via manual 2D-to-3D registration. Some of the earliest examples of fusing geometric and thermal information for heritage purposes include [PERSON] et al. (2007) and [PERSON] et al. (2009), who both used corresponding point coordinates to estimate the orientation of thermograms, for Palazzo Barbieri in Verona, Italy and for a tomb in Petra, Jordan respectively. The thermal images were then used to texture photogrammetric models. [PERSON] et al. (2015) also used corresponding point coordinates to calculate the external orientation parameters of individual thermograms. The thermograms were then used to texture parts of models created with TLS for historical buildings in Cosenza, Italy. The resulting model and the extracted ortho-images were used to identify cracks, detachments and zones of moisture. [PERSON] et al. (2013) performed a combined bundle adjustment of optical and thermal image datasets, using some manually selected common tie points, to obtain better estimates of the external parameters of the thermal images. The thermal images textured the reconstructed geometry to facilitate the archaeological observations.
Another approach to thermal modelling relies on platforms with multiple sensors whose relative positions have been calibrated. [PERSON] et al. (2011) set up bi-camera systems with optical and thermal cameras. Knowing the orientations of the optical cameras (after constructing point clouds of heritage buildings) and the relative orientation of the different cameras, they were then able to texture the point clouds with thermal data. [PERSON] et al. (2013) developed a robotic system equipped with a laser scanner, a webcam and a thermal camera that simultaneously collected geometric, optical and thermal information and, through a calibration algorithm, could automatically determine the relations between the sensors to directly colour the acquired point clouds of facades. [PERSON] et al. (2018) created a system consisting of a TLS, an optical camera and a thermal camera, calibrated to obtain the geometric relationships between them. By associating the pixels of the thermal image with individual points through a projective transformation matrix, they were able to assign temperature values to the point cloud and thus produce a thermal point cloud of the Baritel de San Carlos in Almadenejos, Spain.
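The projective assignment just described reduces to a standard pinhole projection of the cloud into the thermal raster. A hedged sketch (the intrinsics, frame conventions, and nearest-pixel sampling are simplifying assumptions; lens distortion and occlusion handling, which real systems must address, are omitted):

```python
import numpy as np

def colour_cloud_with_thermal(points, R, t, K, thermal_img):
    """Project 3-D points into a thermal image and sample one temperature per point.

    points: (N, 3) cloud in the world frame; R, t: world-to-camera rotation and
    translation; K: 3x3 thermal-camera intrinsics; thermal_img: (H, W) raster.
    Returns per-point temperatures (NaN where a point falls outside the image
    or behind the camera).
    """
    cam = points @ R.T + t                     # world -> camera frame
    temps = np.full(len(points), np.nan)
    in_front = cam[:, 2] > 0
    uvw = cam[in_front] @ K.T                  # pinhole projection
    uv = uvw[:, :2] / uvw[:, 2:3]
    u = np.round(uv[:, 0]).astype(int)
    v = np.round(uv[:, 1]).astype(int)
    h, w = thermal_img.shape
    ok = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    idx = np.flatnonzero(in_front)[ok]
    temps[idx] = thermal_img[v[ok], u[ok]]
    return temps
```

The resulting per-point temperatures are what the cited systems visualize as a "thermal point cloud".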
Recently, cloud-to-cloud and model-to-model registration options have also been explored. [PERSON] et al. (2018) co-registered separate point clouds of building facades created from optical and thermal images to acquire the orientation parameters of the thermal image acquisition, calculated by back projection. They then textured the model generated from the point cloud using commercial software. [PERSON] and [PERSON] (2018) explored three methods of model- and point-cloud-based matching to improve the orientation parameters of thermal images and thereby calculate their projection onto the models. The first method matched a relative point cloud to the model of a building using GNSS data from the camera track; the distance between them was adjusted with a least-squares adjustment. The second method introduced an existing model into the camera orientations to extract a point cloud directly from the images. The third method produced a point cloud from the images, including the orientations of the aerial platform used, and co-registered it to a point cloud from the optical images. In all cases, a thermal image mosaic was generated to texture the model of the facades.
Finally, automated or semi-automated photogrammetric software implementing Structure from Motion (SfM) and Multi-View Stereo (MVS) algorithms, which is becoming extremely popular for heritage applications, has recently been explored for thermal modelling. [PERSON] et al. (2017) provide one example of the use of commercial SfM-MVS software to produce a thermal model of the Assos ancient theatre in Behramkale, western Turkey, directly from thermograms captured with a low-resolution thermal camera mounted on a UAS platform.
## 4 Spectral and multi-spectral imaging
Multispectral imaging is a form of non-invasive imagery. Based on the detection of reflected or emitted electromagnetic radiation with wavelengths between 10 nm and 1 mm, this technique has been used extensively for archaeological prospection ([PERSON], 2018) and the characterization of historic materials and pathologies ([PERSON] et al., 2016). As in the case of infrared thermo-cameras, multispectral cameras are usually of lower resolution than consumer-level digital cameras, although substantially higher than thermal ones. The approaches explored for multi-spectral modelling of heritage include simple 2D-to-3D registration using common points between the multispectral images and the geometric model produced by photogrammetric or scanning workflows, as performed in [PERSON] et al. (2017) for heritage buildings, as well as more complicated solutions. [PERSON] et al. (2006) developed a system for automatic multispectral modelling of historic architecture by integrating a range camera and an image spectrograph. The automated procedure included: automatic detection of a common region between overlapping textured 3D views, pairwise registration of 3D views, global registration, multi-spectral texture construction, and surface fusion. [PERSON] et al. (2012) and [PERSON] et al. (2013) applied photogrammetric tracking to pre-calibrated multispectral cameras and fringe projection systems for 3D digitization used on the same scenes, to co-register and project the multi-spectral data on 3D models of heritage surfaces with an accuracy better than half an image pixel. [PERSON] et al. (2009) and [PERSON] et al. (2009) extracted depth maps from 3D heritage models, whose pixels maintained correspondence with the vertices of the respective 3D models, and registered those maps with texture from multispectral images through Maximization of Mutual Information algorithms.
Commercial automated or semi-automated photogrammetric software utilizing SfM algorithms has also recently been explored for heritage multispectral modelling, as in the works of [PERSON] et al. (2017) for triplets of images at different wavelengths, [PERSON] et al. (2018) for larger datasets, and [PERSON] et al. (2018) for visible-spectrum and ultraviolet luminescence images of a vase. The availability of digital cameras modified for UV, IR and multispectral imaging has further widened the application of this type of photogrammetric software, since the higher resolutions that become available facilitate the common feature detection in the first step of shape reconstruction. Two examples are provided by [PERSON] et al. (2016), who performed large-scale mapping of an archaeological excavation site with two commercial photogrammetric software packages, using datasets of images acquired with a camera optimized for near-infrared imaging mounted on an unmanned aerial platform, and [PERSON] et al. (2018), who also used a near-infrared modified camera and commercial SfM photogrammetric software to model a mango-wood vase. Another workflow that produces models of metric quality with spectral information involves the separate orientation of the optical and non-optical images acquired for the same scene, so that the final 3D surface can be produced more precisely from the optical images and then textured using the non-optical images.
## 5 Ground-penetrating radar
GPR methods are non-destructive geophysical prospection techniques utilizing waves with frequencies between 10 MHz and 2.5 GHz. The method is based on the transmission of electromagnetic pulses into the ground or into a structure and the recording of the signal reflected back to the surface, which carries information about the position of underlying media with different dielectric properties ([PERSON] et al., 2009). In the field of cultural heritage, GPR is increasingly used for archaeological prospection ([PERSON], 2003), the investigation of the inner structure of columns, buttresses and walls ([PERSON] et al., 2013), the diagnosis of damaged zones ([PERSON] et al., 2000) and the locating of moisture ([PERSON] and [PERSON], 2018). The most common reflection acquisition methodology is single-offset, because it is the simplest and fastest way to collect data: two antennas with a constant distance between transmitter and receiver are moved to produce 2D sections. In many GPR surveys, usually in archaeological prospection, this Common Offset procedure is repeated at regular intervals along several survey lines, usually located parallel to one another. Three-dimensional Common Offset provides more realistic representations of the underground space, allowing not only the location but also the reconstruction of buried structures. These 2.5D approaches (or time-slices) provide an accurate and intuitive display of the underground distribution and adequate spatial correlation between reflectors at varying depths. They can be further processed to calculate 3D volumes.
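The Common Offset time-slice construction described above can be sketched directly: parallel B-scans stack into a volume, and a plan-view slice is the windowed amplitude average (the axis ordering and rectangular windowing here are illustrative assumptions; real processing adds gain correction, migration, and interpolation between lines):

```python
import numpy as np

def time_slices(bscans, t0, t1):
    """Average the reflection amplitude of parallel B-scans over a time window.

    bscans: (n_lines, n_traces, n_samples) array of parallel GPR profiles;
    returns an (n_lines, n_traces) plan-view amplitude map (a "time slice").
    """
    vol = np.abs(np.asarray(bscans, dtype=float))   # rectified amplitudes
    return vol[:, :, t0:t1].mean(axis=2)
```

Computing such slices over successive windows, then thresholding on an iso-amplitude value, is what yields the 3D volumes mentioned in the studies below.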
One of the main challenges of this technique is the interpretation of the collected data, which depends on the quality of the raw measurements and on knowledge of the inspected materials and their dielectric properties. To produce a satisfactory interpretation, the methods of data acquisition and processing are therefore crucial ([PERSON] and [PERSON], 2014). As can be seen from the examples set forth below, several studies published over the last decade have showcased different techniques for the acquisition and processing of GPR data to achieve better interpretations in three-dimensional space. [PERSON] et al. (2017) collected measurements along parallel survey lines with 25 cm cross-line spacing and processed them into multiple GPR depth slices to map the remains of Bronze-Age settlements. Following the same technique, [PERSON] et al. (2004) used time-slices to study a Roman Forum in the Middle Tiber valley north of Rome, [PERSON] et al. (2009) used time-slices to study the archaeological site of Naya in the valley of the Buyuk Menderes River south-east of Izmir, and [PERSON] et al. (2013) conducted the archaeological prospection of part of the Aquileia Park in North-West Italy. [PERSON] et al. (2002), [PERSON] et al. (2009) and [PERSON] et al. (2010) further interpreted the results to produce 3D volumes from iso-amplitude data derived from GPR surveys to investigate large archaeological sites in Lecce, Italy and in Ankara, Turkey. Furthermore, [PERSON] et al. (2011) and [PERSON] et al. (2014) also produced 3D volumes from iso-amplitude data to assess internal damage, inner pipes and void areas in columns. [PERSON] and [PERSON] (2015) created a pseudo-3D reconstruction of moisture inside a heritage building by extracting an iso-surface from the time-slices. The only example of 3D fusion of GPR data with heritage models before this research comes from [PERSON] et al.
(2017), who used the georeferenced sections from the GPR data of the internal structure of the Tomb of Christ within the Church of the Holy Sepulchre in Jerusalem, as contours to produce a 3D surface model, which was merged with the 3D model of the interior and the exterior of the structure, thus creating a complete model of the stratigraphy of the historical phases.
## 6 Ultrasonic testing
UT methods refer to non-destructive prospection techniques utilizing waves with frequencies higher than 20 kHz. These techniques are similar to acoustic and electromagnetic methods, but they can be used for larger scale applications and produce higher resolution results. Shorter wavelengths translate to higher resolutions, meaning that smaller targets can be detected accurately. UT is based on the transmission, reflection and recording of ultrasonic waves: the emitted energy propagates through the medium and the scattered energy is detected by a transducer. The records are interpreted as images of the inner medium that reflect changes in the elastic parameters ([PERSON] et al., 2016). UT can be used for the calculation of the thickness of crusts or other weathering layers and of the depth of cracks developed in building materials, the evaluation of internal damage and decay, the assessment of mechanical characteristics, the quality and homogeneity of components, and the location of reinforcements ([PERSON] and [PERSON], 2014). Ultrasonic testing is not newly introduced to the non-destructive testing of tangible heritage, as it has been applied in a plethora of case studies on structures ([PERSON] et al., 2012; [PERSON] et al., 2018; [PERSON] and [PERSON], 2015) and sculptures ([PERSON] et al., 2012). [PERSON] et al. (2017) carried out a three-dimensional representation of individual ultrasound velocity measurements inside an imperial marble statue of Alba-la-Romaine (France) by relying on a 3D model that included small numbered stickers on the marble surface, which materialized the positions of the emitter and receiver for each measurement. UT images consist of pixels representing finite, discrete, small areas of the heritage object and are associated with values of intensity. Calculations carried out for each pixel allow a quantitative description of physical characteristics, such as the velocity of wave propagation.
The number of pixels can be changed, so that the effect of spatial resolution on the quality of the image can be studied. Ultrasound tomography images acquired through a dense configuration of transmitters and receivers at different heights have been used by [PERSON] and [PERSON] (2018) to assess the three-dimensional internal structure of various masonry pillars. Similarly, the internal conditions of the building stone materials of the investigated architectural elements were represented in a 3D view of intersecting tomographic slices by [PERSON] et al. (2018) for the Palazzo di Città building in the historical centre of Cagliari. Additionally, tomographic images were superimposed on geolocated visual orthoimages and decay maps by [PERSON] et al. (2017) to correlate historic materials and degradation processes.
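As a rough illustration of how such per-pixel velocity values are assembled, the sketch below computes straight-ray ultrasonic pulse velocities from emitter/receiver positions and travel times, then bins them at their ray midpoints into a coarse 2D grid. A real tomographic inversion solves for velocities along many intersecting ray paths; this midpoint binning is only a stand-in, and all numeric values are illustrative assumptions.

```python
import numpy as np

def pulse_velocity(emitter, receiver, travel_time_us):
    """Ultrasonic pulse velocity (m/s) along a straight ray between 3D
    emitter and receiver positions (m), given the travel time (microseconds)."""
    path = np.linalg.norm(np.asarray(receiver, float) - np.asarray(emitter, float))
    return path / (travel_time_us * 1e-6)

def velocity_map(midpoints, velocities, shape, extent):
    """Bin straight-ray velocities at their (x, y) midpoints into a coarse
    2D grid, averaging where several rays fall in the same cell. Cells
    crossed by no ray stay NaN."""
    (xmin, xmax), (ymin, ymax) = extent
    ny, nx = shape
    acc = np.zeros(shape)
    counts = np.zeros(shape)
    for (x, y), v in zip(midpoints, velocities):
        i = min(int((y - ymin) / (ymax - ymin) * ny), ny - 1)
        j = min(int((x - xmin) / (xmax - xmin) * nx), nx - 1)
        acc[i, j] += v
        counts[i, j] += 1
    img = np.full(shape, np.nan)
    mask = counts > 0
    img[mask] = acc[mask] / counts[mask]
    return img

# A 0.30 m path crossed in 60 microseconds indicates 5000 m/s,
# consistent with sound marble
v = pulse_velocity((0, 0, 0), (0.3, 0, 0), 60.0)
img = velocity_map([(0.5, 0.5)], [v], (2, 2), ((0.0, 1.0), (0.0, 1.0)))
```

Low-velocity cells in such a map flag weathered or cracked volumes, which is the quantity the cited studies visualize in 3D.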
The geolocated or spatially correlated ultrasound tomographic images and ultrasonic velocity measurements can be converted into 3D volumes through specialized software, as in the cases of [PERSON] et al. (2018), mentioned above, [PERSON] (2017) for the depiction of the internal state of conservation of historic masonries subjected to weathering, and [PERSON] et al. (2017) for the representation of the conservation state of limestone blocks affected by fire. 3D tomographic results produced with the GeoTom CG software were superimposed over 3D topographic models in [PERSON] et al. (2013) for the [PERSON] statue and in [PERSON] et al. (2015) for two sculptures of the Egyptian Museum of Turin, to better visualize weak volumes and changes in elasticity attributes. Finally, [PERSON] et al. (2017) performed a high-level integration of textured meshes, extracted from point clouds classified according to the transmission velocities of ultrasonic waves inside an Egyptian sculpture at the archaeological museum of Bologna, aiming to enhance the final graphical representation of the tomographic results and to subject the data on the conservation state to quantitative analysis.
## 7 X-Ray Radiography and Computed Tomography
XRR is a non-destructive investigative technique originating from the medical sector. It involves beams of radiation with wavelengths ranging from 0.01 nm to 10 nm, emitted by a source of electromagnetic radiation, that are partially absorbed by an object depending on its density and compositional characteristics and captured by detectors behind the object to create two-dimensional images. It is a fast and cost-effective technique which makes it possible to capture cross-layer views of objects obscured under corrosion layers or burial accretions without intervention. XRR was the first technique that allowed information about sub-surface structures to be obtained and is thus used extensively for the study of metalwork ([PERSON] et al., 2015; [PERSON] et al., 2016), wooden sculptures ([PERSON] et al., 2013), paintings ([PERSON] et al., 2017) and textiles ([PERSON] and [PERSON], 2007). However, from a three-dimensional perspective, each pixel of a radiograph represents absorption data collected along an X-ray cone emitted from the radiation source on one side of the object and recorded by the detector on the other side; it therefore includes data for all intersected internal surfaces of the object, not only the external surfaces, and results in a digital product that cannot be treated as a central or an ortho-projection of intensity data.
This means that a radiograph cannot be projected onto a point cloud or model of an object without significant errors resulting from the 'projection' and the cross-sectional information it contains, especially when the object has great depth variations. Thus, X-ray imaging products can only be integrated with models of relatively thin and planar objects, so as not to create misconceptions about the type and geometry of the data acquired through this technique. Furthermore, the results of XRR are affected by the chemical nature, thickness and density of the object under study; objects of substantially high density, for example objects made completely of rock or metal and more than a few cm thick, cannot be investigated because of the limited depth this type of radiation can penetrate. In these cases, instrumentation of higher radiation intensity needs to be used in large facilities.
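The attenuation behaviour underlying these limitations follows the Beer-Lambert law, I = I0 exp(-mu x): each radiograph pixel integrates absorption over every layer along its ray, which is why a single image superimposes all internal surfaces and why dense, thick objects transmit essentially nothing. A minimal sketch, with illustrative (not measured) attenuation coefficients:

```python
import math

def transmitted_fraction(mu_cm, thickness_cm):
    """Beer-Lambert attenuation: fraction of incident X-ray intensity
    transmitted through a homogeneous layer with linear attenuation
    coefficient mu (1/cm) over the given thickness (cm)."""
    return math.exp(-mu_cm * thickness_cm)

def stack_transmission(layers):
    """Total transmission through a stack of (mu, thickness) layers.
    The product over layers shows why a pixel value mixes every layer
    along its ray: depth ordering cannot be recovered from one image."""
    total = 1.0
    for mu, t in layers:
        total *= transmitted_fraction(mu, t)
    return total

# Illustrative only: a thin low-mu layer transmits most of the beam,
# while a dense layer (high mu) extinguishes it almost completely
thin = stack_transmission([(0.2, 1.0)])
dense = stack_transmission([(20.0, 1.0)])
```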
CT is also a powerful non-invasive and non-destructive X-ray technique, yet it allows the full-volume visualization of an object, yielding morphological and physical information. The raw data acquired from CT consist of a list of voxels with known coordinates in a three-dimensional axis system and the respective X-ray absorption values. The dimensions of the voxels represent the resolution of the acquisition process, which can vary from a few microns to some millimeters. The data can be translated into 2D cross-sections or a 3D cloud, and software has been developed for the visualization, but not the management, of the point clouds. Acquisition and post-processing are relatively fast, so that for a small object it takes less than a day to produce results with a resolution of around 1 mm. The main drawback of CT, much like XRR, remains that the objects under study should be movable and of relatively small dimensions, although in recent years instrumentation has been developed to accommodate the tomography of large paintings, sculptures of human dimensions and archaeological findings of large volume.
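The voxel list described above maps directly to a point cloud once an absorption threshold is chosen, which is also the basis of the virtual segmentation used in several of the case studies below. A minimal sketch, with a synthetic volume and an assumed voxel size:

```python
import numpy as np

def ct_to_points(volume, voxel_size, threshold):
    """Convert a CT voxel grid of absorption values into a thresholded
    point cloud: one (x, y, z, value) row per voxel above threshold,
    with coordinates scaled by the voxel size (e.g. mm)."""
    mask = volume > threshold
    idx = np.argwhere(mask)            # (n, 3) voxel indices, C order
    values = volume[mask]              # matching absorption values
    coords = idx * voxel_size          # index -> physical coordinates
    return np.column_stack([coords, values])

# A 4x4x4 synthetic volume with one dense inclusion (e.g. metal in soil)
vol = np.zeros((4, 4, 4))
vol[1, 2, 3] = 0.9
pts = ct_to_points(vol, voxel_size=0.5, threshold=0.5)
```

Segmenting a find from its soil matrix, as in the belt example below, amounts to choosing a threshold (or a range of absorption values) that separates the two materials before exporting the cloud.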
[PERSON] et al. (2014) used such CT instrumentation on a large cabinet made by [PERSON] of rare wood, polychrome ivories, nacre and tortoiseshell to identify building techniques, the state of preservation and previous interventions. [PERSON] et al. (2015) performed CT on a soil block of archaeological interest to assess the state of conservation of a fragmented metal belt, so that it would not be damaged during micro-excavation, and to recreate its shape after cleaning. More specifically, segmentation of the produced point cloud was performed to virtually separate the finding from the containing soil. [PERSON] et al. (2016) used CT to study the cover of an ancient Egyptian coffin. [PERSON] et al. (2014) also carried out CT using modified equipment to reconstruct in 3D the volume and inner structure of a contemporary wooden sculpture. [PERSON] et al. (2007) and [PERSON] et al. (2010) used a transportable X-ray CT system to create 3D reconstructions and virtual slices for an ancient wooden globe and for two wooden Japanese sculptures (13th and 18th century) respectively. Additionally, [PERSON] et al. (2012) used synchrotron radiation CT facilities to perform a detailed 3D reconstruction of a boxwood prayer nut depicting the crucifixion of [PERSON], to virtually segment its model through open source software and to 3D print it in order to construct a physical copy that can be manipulated and examined without any risk of damage to the original. Finally, [PERSON] et al. (2008) performed a high-level fusion of data produced with photogrammetry, \(\mu\)-topography and X-ray CT to produce a complete 3D model of the cylinder seal of Ibni-Sharrum from the Agade period while minimizing noise over the whole surface.
## 8 Conclusions and Future Perspectives
This paper has provided a comprehensive review of the different methodologies used for the 3D interpretation of multisensory data for heritage science purposes. All mentioned techniques are non-invasive and non-destructive and are therefore deemed optimal for small, fragile or high-value historic materials. Additionally, the relevant instrumentation is portable for all the above-mentioned techniques and can be used for in-situ studies, except for XRR and CT, whose instrumentation can only be transported from one laboratory to another.
As discussed here, the imaging-related techniques can easily produce point clouds or textured models useful for recording or diagnostic purposes. On the other hand, techniques based on electromagnetic waves far from the optical spectrum usually provide information about the internal characteristics of materials; they cannot easily be translated into shapes and need to be fused with geometric information deriving from other techniques to acquire topological validity. Even for imaging data in the ultraviolet and infrared spectra, 3D interpretation approaches have concentrated more on 2D image-to-3D registration and, more recently, on model-to-model or cloud-to-cloud registration. These approaches aim to maintain the level of geometric detail provided by reconstructions from scanning or visible-spectrum photogrammetry and to texturize them with the different spectral imaging data. Thus, it should be highlighted that most problems regarding the interpretation of heritage information from different sensors are a matter of resolution. Nevertheless, some contemporary commercial SfM photogrammetric software packages have given adequate results for heritage applications concerning shape reconstruction from images outside the visible spectrum, especially for images collected with converted cameras (modified to be sensitive to narrow bands outside the optical range), which are becoming increasingly popular due to their low cost and high resolution compared to the spectral and multispectral sensors usually used for the inspection of materials.
Only very few mentions of the integration of more than one of the interpretational approaches described here can be found in the heritage science literature, as in the application of [PERSON] et al. (2011), where a 3D rendering of UT maps and a GPR trace envelope is presented (Figure 1). The fusion of multiple three-dimensional interpretations of sensor data from techniques coming from different fields is very important for the complete documentation, recording and assessment of the state of preservation of tangible heritage, as already stated, and is therefore a critical direction for future research.
It should be briefly mentioned that the elemental mapping of surfaces through scanning spectroscopic techniques such as Macro X-Ray Fluorescence (MA-XRF) can potentially be interpreted in a three-dimensional way, since the produced data are of imagery type and most of the instrumentation developed provides data on the distances between the mapped surfaces and the movable detecting sensor; if translated to an elevation model, these could potentially be co-registered with the model of the surface to texturize it with the elemental maps. This is a topic that the authors want to address in future research. MA-XRF elemental mapping has successfully been implemented for the study of paintings ([PERSON] et al. 2017, [PERSON] et al. 2018, [PERSON] 2017) and frescoes ([PERSON] et al. 2018).
3D metric survey does not mean just the modelling of visible surfaces, but the set-up of a real 3D model designed with the aim of the survey itself in mind. The integration of the above-described inspection techniques, which allow a 3D characterization of the investigated objects, opens new perspectives in 3D model set-up to correctly represent the volumetric elements which can host the thematic information coming from the different acquired data. Knowledge of the characteristics of the needed elementary parts of the 3D model could influence the metric data acquisition both in terms of resolution and accuracy; ad-hoc investigations therefore have to be performed to optimize the point cloud acquisition in order to achieve the needed accuracy and resolution in the different parts of the investigated object. The possible 3D integration of the thematic information will also allow the comparison of the analyses performed, easing cross-interpretation. This is the main research topic that the authors intend to pursue in the coming years by collaborating with specialists in the fields of the different inspection techniques.
## Acknowledgements
The research leading to this paper was conducted in the context of the Tech4Culture PhD programme at the University of Turin, which receives funding from the European Union's Horizon 2020 Research & Innovation programme under the [PERSON] grant agreement No. 754511 (H2020-MSCA-COFUND) and from the Compagnia di San Paolo.
## References
* [PERSON] et al. (2017) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON]. & [PERSON], [PERSON] 2017. Multi-sensor documentation of metric and qualitative information of historic stone structures. In _ISPRS Annals of Photogrammetry, Remote Sensing and Spatial Information Sciences, IV-2/W2_, 1-8.
* [PERSON] et al. (2017) [PERSON] [PERSON], [PERSON] [PERSON], [PERSON] [PERSON] and [PERSON] [PERSON], 2017. 3D Modelling the Invisible using Ground Penetrating Radar. _International Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences_, 42, 33-37.
* [PERSON] et al. (2011) [PERSON], [PERSON] [PERSON], [PERSON] [PERSON], [PERSON] [PERSON] and [PERSON] [PERSON], 2011. Mapping infrared data on terrestrial laser scanning 3D models of buildings. _Remote Sensing_, 3(9), 1847-1870.
* [PERSON] et al. (2017) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], and [PERSON] [PERSON], 2017. CORO: a fast and reconfigurable macro X-ray fluorescence scanner for in-situ investigations of polychrome surfaces. _X-Ray Spectrometry_, 46(5), 297-302.
* [PERSON] et al. (2018) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON] [PERSON], [PERSON], [PERSON], [PERSON] [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON] & [PERSON] (2018). Interpreting technical evidence from spectral imaging of paintings by [PERSON] in the Courtauld Gallery. _X-Ray Spectrometry_, 1-11.
* [PERSON] et al. (2010) [PERSON] [PERSON], [PERSON], [PERSON] and [PERSON], 2010. Terrestrial laser scanning intensity data applied to damage detection for historical buildings. _Journal of Archaeological Science_, 37(12), 3037-3047.
* [PERSON] et al. (2014) [PERSON] [PERSON], [PERSON] [PERSON], [PERSON], and [PERSON] [PERSON], 2014. Fusion of 3D models derived from TLS and image-based techniques for CH enhanced documentation. _ISPRS Ann. Photogramm. Remote Sens. Spatial Inf. Sci., II-5_, 73-80.

Figure 1: Combined 3D rendering of UT and GPR data ([PERSON] et al., 2010)
* [PERSON] et al. (2018) [PERSON] [PERSON], [PERSON], [PERSON], [PERSON] and [PERSON], 2018. Integrated Volume Visualisation of Archaeological Ground Penetrating Radar Data. In _16 th Eurographics Workshop on Graphics and Cultural Heritage (GCH 2018)_, 231-234. The Eurographics Association.
* [PERSON] et al. (2013) [PERSON] [PERSON], [PERSON] [PERSON] and [PERSON], 2013. Thermal 3D mapping of building facades. In _Intelligent Autonomous Systems, 12_, 173-182. Springer.
* [PERSON] and [PERSON] (2018) [PERSON] and [PERSON], 2018. Non-Invasive Moisture Detection for the Preservation of Cultural Heritage. _Heritage, 1(1)_, 163-170.
* [PERSON] et al. (2000) [PERSON], [PERSON] and [PERSON], 2000. Investigation procedures for the diagnosis of historic masonries. _Construction and Building Materials, 14(4)_, 199-233.
* [PERSON] et al. (2013) [PERSON], [PERSON], [PERSON] [PERSON], [PERSON], [PERSON] [PERSON], [PERSON] and [PERSON], 2013. Combined geometric and thermal analysis from UAV platforms for archaeological heritage documentation. _ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences_, 2, 49-54.
* [PERSON] et al. (2006) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON] [PERSON], [PERSON], [PERSON], [PERSON] [PERSON], [PERSON] and [PERSON], 2006. A system for 3D modeling frescoed historical buildings with multispectral texture information. _Machine Vision and Applications, 17(6)_, 373-393.
* [PERSON] et al. (2009) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON] and [PERSON], 2009, October. Integration of 3D laser scanning, photogrammetry and thermography to record architectural monuments. In _Proc. of the 22 nd Int. CIPA Symposium_, 6-11.
* [PERSON] et al. (2012) [PERSON], [PERSON] and [PERSON], 2012. The assessment of ultrasonic tests as a tool for qualification and diagnostic study of traditional highly porous and soft stone materials used in the built heritage of the past. In _EGU General Assembly Conference Abstracts, 14_, 9860.
* [PERSON] et al. (2013) [PERSON], [PERSON], [PERSON] and [PERSON], 2013. Registration of 3D and multispectral data for the study of cultural heritage surfaces. _Sensors, 13(1)_, 1004-1020.
* [PERSON] et al. (2015) [PERSON], [PERSON] and [PERSON] [PERSON], 2015. 3D as-is building energy modeling and diagnostics: A review of the state-of-the-art. _Advanced Engineering Informatics_, 29(2), 184-195.
* [PERSON] et al. (2015) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON] [PERSON], [PERSON], [PERSON] and [PERSON] [PERSON], 2015. Combined use of terrestrial laser scanning and IR thermography applied to a historical building. _Sensors, 15_(1), 194-213.
* [PERSON] et al. (2016) [PERSON] [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON] and [PERSON], 2016. Multispectral imaging: Fundamentals, principles and methods of damage assessment in constructions. _Non-Destructive Techniques for the Evaluation of Structures and Infrastructures_, _11_, 139-166. CRC Press/Balkema.
* [PERSON] et al. (2017) [PERSON], [PERSON], [PERSON] and [PERSON], 2017. An UAS-assisted multi-sensor approach for 3D modeling and reconstruction of cultural heritage site. _Journal of cultural heritage, 26_, 79-90.
* [PERSON] et al. (2018) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON] and [PERSON], 2018. An innovative methodology for the non-destructive diagnosis of architectural elements of ancient historical buildings. _Scientific reports_, 8(1), 4334-4344.
* [PERSON] et al. (2017) [PERSON], [PERSON], [PERSON], [PERSON] [PERSON], [PERSON] [PERSON] and [PERSON], 2017. Integrated ultrasonic, laser scanning and petrographical characterisation of carbonate building materials on an architectural structure of a historic building. _Bulletin of Engineering Geology and the Environment_, 76(1), 71-84.
* [PERSON] et al. (2006) [PERSON], [PERSON] and [PERSON], 2006. _Guidelines on the X-radiography of Archaeological Metalwork_. English Heritage.
* [PERSON] and [PERSON] (2015) [PERSON] and [PERSON], 2015. Detecting moisture damage in archaeology and cultural heritage sites using the GPR technique: a brief introduction. _International Journal of Archaeology_, 3(1-1), 57-61.
* [PERSON] et al. (2017) [PERSON], [PERSON], [PERSON] [PERSON], [PERSON] [PERSON], [PERSON] [PERSON], [PERSON] and [PERSON] [PERSON], 2017. Effective detection of subsurface archeological features from laser scanning point clouds and imagery data. _International Archives of the Photogrammetry, Remote Sensing & Spatial Information Sciences, XLII-2/W5_, 245-251.
* [PERSON] et al. (2017) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON] [PERSON] and [PERSON], [PERSON], 2017. Merging geometric documentation with materials characterization and analysis of the history of the Holy Aedicule in the Church of the Holy Sepulchre in Jerusalem. _International Archives of the Photogrammetry, Remote Sensing & Spatial Information Sciences, XLII-5/W1_, 487-494.
* [PERSON] et al. (2016) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON] [PERSON] and [PERSON], 2016. Evaluating Unmanned Aerial Platforms for Cultural Heritage Large Scale Mapping. _Int. Arch. Photogramm. Remote Sens. Spatial Inf. Sci., XL-B5_, 355-365.
* [PERSON] et al. (2017) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON] [PERSON] and [PERSON], 2017. Image based recording of three-dimensional profiles of paint layers at different wavelengths. _Eur. J. Sci. Theol., 13_, 127-134.
* [PERSON] et al. (2018) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON] [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON] [PERSON] and [PERSON], [PERSON], 2018. Construction and comparison of 3D multi-source multi-band models for cultural heritage applications. _Journal of Cultural Heritage, 34_, 261-267.
* [PERSON] (2013) [PERSON], 2013. Beyond the Naked Eye: Ethnography Screened through the Scientific Lens. _Critical Interventions_, 7(1), 95-104.
* [PERSON] and [PERSON] (2018) [PERSON] and [PERSON], 2018. Mobile thermal mapping for matching of infrared images with 3D building models and 3D point clouds. _Quantitative InfaRed Thermography Journal, 15(2)_, 252-270.
* ICOMOS (1990) ICOMOS, 1990. Charter for the protection and management of the Archaeological Heritage.
* ICOMOS (2000) ICOMOS, 2000. Krakow Charter 2000: Principles for conservation and restoration of built Heritage.
* [PERSON] (2010) [PERSON], 2010. Definition of Buried Archaeological Remains with a New 3D Visualization Technique of a Ground Penetrating Radar Data Set in the Temple of Augustus in Ankara, Turkey. _Near Surface Geophysics_, 8(5), 397-406.
* [PERSON] et al. (2017) [PERSON], [PERSON], [PERSON] and [PERSON] [PERSON], 2017. Integration of Point Clouds and Images Acquired from a Low-Cost NIR Camera Sensor for Cultural Heritage Purposes. _Int. Arch. Photogram. Remote Sens. Spatial Inf. Sci._, _XLII-2W5_, 407-414.
* [PERSON] et al. (2016) [PERSON] [PERSON], [PERSON] [PERSON], [PERSON] and [PERSON], 2016. Thermographic 3D Modeling of Existing Constructions. _Non-Destructive Techniques for the Evaluation of Structures and Infrastructure_, _11_, 233-252. CRC Press/Balkema.
* [PERSON] (2003) [PERSON], 2003. Ground-penetrating radar: a modern three-dimensional prospection method. _Archaeological Prospection_, _10(4)_, 213-240.
* [PERSON] et al. (2016) [PERSON], [PERSON], [PERSON], [PERSON] and [PERSON], 2016. Moisture detection in heritage buildings by 3D laser scanning. _Studies in Conservation_, _61_(sup1), 46-54.
* [PERSON] (2017) [PERSON], 2017. Seismic and Sonic Applications on Artifacts and Historical Buildings. In _Sensing the Past_, 153-173. Springer.
* [PERSON] et al. (2011) [PERSON] [PERSON], [PERSON], [PERSON] and [PERSON], 2011. GPR and sonic tomography for structural restoration: the case of the cathedral of Tricarico. _Journal of Geophysics and Engineering_, 8(3), S76-S92.
* [PERSON] et al. (2018) [PERSON], [PERSON], [PERSON] and [PERSON], 2018. Thermal texture selection and correction for building facade inspection based on thermal radiant characteristics. _International Archives of the Photogrammetry, Remote Sensing & Spatial Information Sciences_, _42_(2), 585-591.
* [PERSON] et al. (2017) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON] and [PERSON] 2017. A new digital radiography system for paintings on canvas and on wooden panels of large dimensions. _In the Proceedings of 2017 IEEE I2 MTC Conference_.
* [PERSON] et al. (2015) [PERSON], [PERSON], [PERSON], [PERSON] and [PERSON] [PERSON], 2015. Combined Neutron and X-ray imaging for non-invasive investigations of cultural heritage objects. _Physics Procedia_, _69_, 653-660.
* [PERSON] et al. (2016) [PERSON], [PERSON], [PERSON], [PERSON] and [PERSON] [PERSON], 2016. Geophysics: Fundamentals and Applications in Structures and Infrastructure. _Non-Destructive Techniques for the Evaluation of Structures and Infrastructure_, _11_, 105-134. CRC Press/Balkema.
* [PERSON] and [PERSON] (2014) [PERSON] and [PERSON], 2014. Main geophysical techniques used for non-destructive evaluation in cultural built heritage: a review. _Journal of Geophysics and Engineering_, _11_(5), 053001-053015.
* [PERSON] et al. (2017) [PERSON], [PERSON], and [PERSON], 2017. Simulation of a Portuguese limestone masonry structure submitted to fire: 3D Ultrasonic Tomography approach. _International Journal of Conservation Science_ 8(4), 565-580.
* [PERSON] et al. (2018) [PERSON], [PERSON], [PERSON] and [PERSON], 2018. Application of Multisensory Technology for Resolution of Problems in the Field of Research and Preservation of Cultural Heritage. _Advances in Digital Cultural Heritage_, 32-47. [PERSON]
* [PERSON] et al. (2018) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON] [PERSON], [PERSON] and [PERSON] [PERSON], 2018. Non-destructive characterization of ancient clay brick walls by indirect ultrasonic measurements. _Journal of Building Engineering_, _19_, 172-180.
* [PERSON] et al. (2010) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON] [PERSON], [PERSON] [PERSON] and [PERSON], [PERSON], 2010. Application of X-ray computed tomography to cultural heritage diagnostics. _Applied Physics A_, _100_(3), 653-661.
* [PERSON] and [PERSON] (2015) [PERSON] and [PERSON] [PERSON], 2015. Non-destructive testing for assessing structural damage and interventions effectiveness for built cultural heritage protection. In _Handbook of Research on Seismic Assessment and Rehabilitation of Historic Structures_, 448-449. IGI Global.
3D INTERPRETATION AND FUSION OF MULTIDISCIPLINARY DATA FOR HERITAGE SCIENCE: A REVIEW
E. Adamopoulos, F. Rinaudo
https://doi.org/10.5194/isprs-archives-xlii-2-w15-17-2019, 2019, CC-BY
|
A Methodological Proposal for Improvement of Digital Surface Models Generated by Automatic Stereo Matching of Convergent Image Networks
[PERSON]
Corresponding author.
[PERSON]
[PERSON]
Expresión Gráfica Arquitectónica, Universidad Politécnica de Cartagena, 30202 Cartagena, Spain - (josefina.leon, juanjo.martinez)@upct.es
Ingeniería Cartográfica, Geodesia y Fotogrametría, Escuela Politécnica, Universidad de Extremadura, 10071 Cáceres, Spain - [EMAIL_ADDRESS]
Session code PS, WG V/1
###### Abstract
A Digital Surface Model (DSM) generated by automatic stereo matching of convergent image networks includes a great number of 3D points that come from diverse combinations of stereo pairs. These points can have very different accuracy and reliability, in addition to a large and undesirable spatial redundancy.
We analyzed different methods to generate several synthesis DSMs whose points are the result of a statistical process. The aim is to obtain a regular mesh of points whose Z coordinates are estimated from the whole set of data through the interpolating function. We chose the best of these methods using a random mesh of checkpoints whose coordinates were measured by multiple direct intersection.
Cultural Heritage, Architecture, Statistics, Reliability, Method.
## 1 Introduction
The purpose of photogrammetric methods based on automatic correlation is to generate massive point clouds (Figure 3). [PERSON] et al. (2003) is an example of the use of a convergent geometry (Figure 2), from which a great deal of information has been drawn. That paper also demonstrates that these data can be filtered by the value of the correlation coefficient: selection eliminates bad-quality data and simplifies matters, although the number of remaining points is still huge. In this paper, a method to filter these points is introduced, in order to obtain a synthesis surface model without appreciable loss of accuracy.
Several methods have been tested; for example Kriging, broadly used in other disciplines and based on sound statistical principles, since it ensures an optimal interpolating function. The aim is to obtain a regular mesh of points whose Z coordinates are estimated from the whole set of data through the interpolating function.
## 2 Data
This work was carried out on one of the churches of the historical centre: San Mateo Church, located in Cáceres (western Spain, a city declared a World Heritage Site by UNESCO in 1986). The southern facade has a complex design due to the presence of a tower and some reliefs that frame the main gate. All these elements make this church ideal for our study (Figure 1).
Figure 1: Southern facade of San Mateo Church.
## 3 Methods
Different estimators have been computed, all of them validated afterwards, which provides the mean square error and allows us to compare the models' accuracy.
### Inverse Distance Weighing (IDW)
The z coordinate of the point to interpolate is estimated by allocating weights to the surrounding data in inverse relation to distance; the nearest points thus get more weight in the calculation. It is an exact method that estimates the value of the variable for a point not belonging to the sample, using the following expression (1):
\[z_{j}=\frac{\sum_{i=1}^{n}\frac{1}{d_{ij}^{\,p}}\cdot z(x_{i})}{\sum_{i=1}^{n}\frac{1}{d_{ij}^{\,p}}} \tag{1}\]
where \(d_{ij}\) is the Euclidean distance between each data point and the point to interpolate, and p is the weighting exponent. The optimal exponent p is determined by minimizing the root mean square prediction error (RMSPE), a summary statistic obtained from cross-validation: each measured point is removed in turn and compared with the value predicted for its location ([PERSON] et al., 2001).
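As a concrete illustration of Eq. (1), the following minimal pure-Python sketch (function name and sample layout are our own, not from the paper) estimates z at a query point by inverse-distance weighting:

```python
import math

def idw(sample_pts, xq, yq, p=2):
    """Estimate z at (xq, yq) by inverse-distance weighting, Eq. (1).

    sample_pts: list of (x, y, z) neighbouring data points.
    p: weighting exponent (chosen in the paper by minimising the RMSPE).
    """
    num, den = 0.0, 0.0
    for x, y, z in sample_pts:
        d = math.hypot(x - xq, y - yq)
        if d == 0.0:          # exact interpolator: return the sample itself
            return z
        w = 1.0 / d ** p
        num += w * z
        den += w
    return num / den
```

Because the weights are normalized, the estimate always stays within the range of the neighbouring z values, and at a sample location the method reproduces the sample exactly.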
### Radial basis function (RBF)
Radial basis functions comprise a wide group of exact, local interpolators that use an equation whose basis depends on distance. In general, the value of the variable is given by the following expression (2):
\[z_{j}=\sum_{i=1}^{n}a_{i}\cdot F(d_{ij}) \tag{2}\]
where \(F(d_{ij})\) is the radial basis function, with \(d_{ij}\) the distance between points; \(a_{i}\) are the coefficients, calculated by solving a linear system of n equations; and n is the number of neighbouring sample points involved in obtaining \(z_{j}\).
In this case, we use a multiquadric radial basis function (3) ([PERSON] et al., 2001), which includes a parameter r, the smoothing factor. This value should be tested beforehand for each dataset; a very high value generates an over-smoothed surface, far from the real one.
\[F(d_{ij})=\sqrt{d_{ij}^{\,2}+r^{\,2}} \tag{3}\]
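A minimal sketch of the multiquadric interpolator of Eqs. (2)-(3): the coefficients \(a_i\) come from solving the n x n linear system built from the basis function evaluated at the sample points (a naive Gaussian-elimination solver is included; all names are illustrative, not from the paper):

```python
import math

def _solve(A, b):
    """Naive Gaussian elimination with partial pivoting (small systems only)."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for k in range(n):
        piv = max(range(k, n), key=lambda i: abs(M[i][k]))
        M[k], M[piv] = M[piv], M[k]
        for i in range(k + 1, n):
            f = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= f * M[k][j]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def fit_multiquadric(samples, r):
    """Coefficients a_i of Eq. (2) using the multiquadric basis of Eq. (3)."""
    phi = lambda dx, dy: math.sqrt(dx * dx + dy * dy + r * r)
    A = [[phi(xi - xj, yi - yj) for xj, yj, _ in samples]
         for xi, yi, _ in samples]
    return _solve(A, [z for _, _, z in samples])

def eval_multiquadric(samples, coeffs, xq, yq, r):
    """Evaluate Eq. (2) at the query point (xq, yq)."""
    return sum(a * math.sqrt((x - xq) ** 2 + (y - yq) ** 2 + r * r)
               for a, (x, y, _) in zip(coeffs, samples))
```

Since the method is an exact interpolator, evaluating the fitted surface back at the sample points reproduces their z values (up to floating-point precision).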
Figure 4: Optimal exponent determination graph.
Figure 3: Correlated clouds of points.
Figure 2: Design of the convergent geometry
### Kriging
It is an exact and local interpolation method ([PERSON], 2003) that sets the weight of each sample point according to the distance between the point to interpolate and the sample points.
[PERSON]'s procedure estimates this dependence through the semivariance, which takes different values according to the distance between data items. The function relating semivariance to distance is called the semivariogram, and it shows how the correlation among the data varies with distance. The basic expression is (4):
\[\gamma(h)=\frac{1}{2n}\sum_{i=1}^{n}\left(z_{i}-z_{i+h}\right)^{2} \tag{4}\]
where n is the number of value pairs separated by a distance h.
Theory requires the semivariogram to be valid over the whole area of the digital model. This means that the interdependence of the data should be a function exclusively of the distance between them, not of their absolute spatial location; as a consequence, the method cannot handle discontinuities that produce abrupt changes, such as slope breaks.
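The empirical semivariance of Eq. (4) can be sketched for a regularly spaced 1D transect of values (a simplifying assumption for illustration; the paper works on a 2D facade model):

```python
def semivariance(z, h):
    """Empirical semivariance gamma(h) of Eq. (4) for a regularly
    spaced 1D transect of values z and an integer lag h.
    n is the number of value pairs separated by the lag h."""
    n = len(z) - h
    return sum((z[i] - z[i + h]) ** 2 for i in range(n)) / (2.0 * n)
```

Computing this for several lags h and plotting the result gives the experimental semivariogram, to which a model (spherical, exponential, ...) is then fitted before Kriging.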
## 4 Results
To evaluate the effectiveness of the interpolation methods, the data were validated against a random mesh of points (Figure 5), whose coordinates were measured by multiple direct intersection using classical surveying.
For the statistical evaluation of the effectiveness of each method, the obtained model was compared with the reference model using the root mean square error (RMS) over the 72 validation points, defined by the following expression (5) ([PERSON], 1994):
\[RMS=\sqrt{\frac{\sum_{i=1}^{n}\left(z_{i}^{estimated}-z_{i}^{real}\right)^{2}} {n}} \tag{5}\]
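Eq. (5) amounts to a few lines over the validation points (names are illustrative):

```python
import math

def rms(z_est, z_real):
    """Root mean square error of Eq. (5): compare estimated Z values
    against the surveyed Z values of the validation points."""
    n = len(z_real)
    return math.sqrt(sum((e - r) ** 2 for e, r in zip(z_est, z_real)) / n)
```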
Several search types can be used to select the neighbouring sample points that take part in the numerical determination of a non-sample point: by quadrants, by octants, or over the whole circular sector. To assess their influence, we performed the IDW interpolation with the three types of neighbour selection for 30 neighbours, at least 12 of them within a search circle of 10 cm radius, obtaining the following final RMS values (Table 7):
### Optimal number of neighbouring points
The number of neighbouring points taking part in the interpolation was determined by evaluating the RMS, using IDW as the interpolation method. Selection without quadrants was used, being the option with the best results in the previous section.
According to Table 8, there is a threshold above which the interpolated model's precision does not improve, no matter how many points are considered. For this reason, the options of 30 neighbours (at least 12 within the search radius) and 15 neighbours (at least 10 within) are both considered valid: using more points brings no improvement in precision and only increases the volume and time of the calculations.
\begin{table}
\begin{tabular}{|c|c|} \hline Selection & RMS (m) \\ \hline all & 0.025 \\ \hline quadrants & 0.028 \\ \hline octants & 0.030 \\ \hline \end{tabular}
\end{table}
Table 7: Model’s error according to different search types by sectors.
Figure 5: Distribution of the validation points.
\begin{table}
\begin{tabular}{|c|c|c|} \hline Number of neighbours & Minimum number of neighbours & RMS (m) \\ \hline
45 & 15 & 0.026 \\ \hline
30 & 12 & 0.025 \\ \hline
15 & 10 & 0.026 \\ \hline
8 & 8 & 0.032 \\ \hline
6 & 4 & 0.033 \\ \hline \end{tabular}
\end{table}
Table 8: RMS according to the number of neighbours selected
Figure 6: Result of points’ validation
### Determination of search radius
Tests were carried out selecting neighbours within a circular area of variable radius, keeping the number of neighbouring points fixed at 30 (at least 12 within the radius) and using selection without quadrants, over the whole sector (Table 9).
In light of the results, we can state that IDW is largely insensitive to the size of the selection circle: varying the radius leaves the model's mean square error practically constant.
This is because all the points that take part in the interpolation are already found within a 5 cm radius, so increasing the radius does not alter the interpolation at all; the RMS only changes when the radius decreases to 1 cm.
### Choice of optimum exponent
The use of the IDW method implies choosing the optimum exponent of the weighting function. In our case, having analysed all the data, the exponent with the minimum RMSPE was selected (Figure 4).
## 5 Conclusions
Results have shown that these methods can be used to accomplish an effective filtering of the massive point clouds generated by automatic photogrammetry. The accuracy values are reasonable for the model studied, although different methods yield different results. It is remarkable that the simplest method (IDW) is also the one that reaches the smallest RMSE, of just 2.5 cm. There is therefore no need for more sophisticated methods, which almost doubled that RMSE in the cases analysed. Likewise, the IDW method is more robust against changes in the selection area and in the number of points used in the interpolation.
## References
* [PERSON] et al. (2001) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], 2001. Evaluación de diferentes técnicas de interpolación espacial para la generación de modelos digitales del terreno agrícola. Mapping Interactivo.
* [PERSON] (1994) [PERSON], 1994. Parametric statistical method for error detection in digital elevation models. ISPRS Journal of Photogrammetry and Remote Sensing, 49(4), pp. 29-33.
* [PERSON] et al. (2003) [PERSON], [PERSON], [PERSON] [PERSON], 2003. First experiments with convergent multi-image photogrammetry with automatic correlation applied to differential rectification of architectural facades. International Archives of the XIX CIPA Symposium, Antalya, pp. 196-201.
* [PERSON] et al. (2001) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], 2001. Using ArcGIS Geostatistical Analyst. ESRI.
* [PERSON] (2003) [PERSON], [PERSON], 2003. La representación gráfica de las variables regionalizadas. Geoestadística lineal. Cáceres.
* [PERSON] (2004) [PERSON], [PERSON], 2004. Aplicaciones de la geoestadística en las ciencias ambientales. Ecosistemas. Revista científica y técnica de ecología y medio ambiente, Año XIII, Nº 1. (URL: http://www.aet.org/ecosystemeas/041/revision3.htm)
* [PERSON] and [PERSON] (1990) [PERSON] [PERSON], [PERSON] [PERSON], 1990. Geoestadística. Aplicaciones a la hidrología subterránea. Barcelona.
Figure 13: Kriging model
Countrywide Stereo-Image Matching for Updating Digital Surface Models in the Framework of the Swiss National Forest Inventory
Christian Ginzler, Martina Hobi
https://doi.org/10.3390/rs70404343, 2015, CC-BY
|
# Automated Mosaicking of Multiple 3D Point Clouds Generated from a Depth Camera
[PERSON]. [PERSON]. [PERSON]
Dept. of Geoinformatic Engineering, Inha University, 100 Inharo, Namgu, Incheon, Korea (khanai91, rainnydayz)@inha.edu, [EMAIL_ADDRESS]
###### Abstract
In this paper, we propose a method for automated mosaicking of multiple 3D point clouds generated from a depth camera. A depth camera generates depth data using the ToF (Time of Flight) method and intensity data from the intensity of the returned signal. The depth camera used in this paper was an SR4000 from MESA Imaging. This camera generates a depth map and an intensity map of 176 x 144 pixels. The depth map stores physical depth data with millimetre precision; the intensity map contains texture data with much noise. We used the intensity maps for extracting tiepoints and the depth maps for assigning 3D coordinates to tiepoints and for point cloud mosaicking. The proposed mosaicking method has four steps. In the first step, we acquired multiple 3D point clouds by rotating the depth camera and capturing data per rotation. In the second step, we estimated the 3D-3D transformation relationships between subsequent point clouds. For this, 2D tiepoints were extracted automatically from the two corresponding intensity maps and converted into 3D tiepoints using the depth maps; a 3D similarity transformation model was used to estimate the 3D-3D transformation relationships. In the third step, we converted the local 3D-3D transformations into a global transformation of all point clouds with respect to a reference one. In the last step, the extent of the single depth map mosaic was calculated and the depth values per mosaic pixel were determined by a ray-tracing method. For the experiments, 8 depth maps and intensity maps were used. After the four steps, an output mosaicked depth map of 454 x 144 pixels was generated. The proposed method is expected to be useful for developing an effective 3D indoor mapping method in the future.
Depth Camera, Point Cloud, Mosaicking, Tiepoint, Similarity Transformation
## 1 Introduction
In recent years, the technology of handling 3D point clouds has been gaining industrial and public attention. Handling 3D point clouds is an important task in 3D indoor mapping and robotics as well as in geomatics. There are several methods to generate 3D point clouds. In photogrammetric engineering, matching of stereo images or laser scanning has been used to generate 3D point clouds. However, for indoor environments, these methods may not be easily applied: for matching, indoor images may not have sufficient texture; for laser scanning, there are problems of cost and scanning speed.
As an alternative, we chose a depth camera for 3D point cloud generation. This approach has the advantage that no sophisticated matching process is needed for depth calculation.
A depth camera usually has small field of view and small number of pixels. To use a depth camera for indoor scanning, it is necessary to combine individual depth scenes into a complete 3D point cloud. As a first attempt for such combination, we tackle the problem of how to mosaic individual 3D point clouds. Mosaicking of individual 2D images has been studied previously ([PERSON] and [PERSON], 1998). In this paper, we propose a new method of automated mosaicking of multiple 3D point clouds generated from a depth camera.
## 2 Characteristic of Depth Camera
### Principle of Estimating Distance Using Depth Camera
Because our method uses a depth camera, we first introduce the principle of distance measurement with such a camera. A depth camera emits infrared rays, which are reflected by objects. The camera detects the reflected rays and measures the phase difference, from which the distance between the camera and the objects is calculated. The phase difference is calculated from the relationship among four electron charges whose phase-control signals have a 90\({}^{\circ}\) phase lag to each other. The phase difference \(t_{d}\) is calculated from those electron charges through Eq. (1).
\[t_{d}=\text{tan}^{-1}\frac{\Omega_{3}-\Omega_{4}}{\Omega_{1}- \Omega_{2}} \tag{1}\] \[\text{where}\quad\quad\Omega_{1},\Omega_{2},\Omega_{3},\Omega_{4 }=\text{control signal}\]
We can calculate the distance \(d\) through Eq. (2) when the speed of light \(c\) and the modulation frequency \(f\) are given; \(c/2f\) is the maximum distance that the depth camera can estimate without ambiguity.
\[d=\frac{c}{2f}\frac{t_{d}}{2\pi} \tag{2}\] \[\text{where}\quad c=\text{speed of light}\] \[f=\text{modulation frequency}\] \[t_{d}=\text{phase difference}\]
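Eqs. (1)-(2) combine into a few lines; the sketch below (our own naming; `f_mod` stands for the modulation frequency of the emitted IR signal) turns the four control-signal charges into a distance:

```python
import math

def tof_distance(q1, q2, q3, q4, f_mod, c=299_792_458.0):
    """Distance from the four phase-control charges (Eqs. 1-2).

    q1..q4: electron charges sampled at 90-degree phase offsets.
    f_mod:  modulation frequency of the IR signal (Hz).
    """
    t_d = math.atan2(q3 - q4, q1 - q2)                    # Eq. (1), radians
    return (c / (2.0 * f_mod)) * (t_d / (2.0 * math.pi))  # Eq. (2)
```

At a 30 MHz modulation frequency, the unambiguous range c/2f is roughly 5 m, which is consistent with the short-range indoor use of such cameras.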
### Depth Camera's Error
Because of the depth camera design, an image generated from a depth camera contains systematic and non-systematic errors, as shown in Figure 1. Systematic errors depend on the sensor's characteristics, such as the intensity of the emitted IR signal, the data collection time, and temperature. Non-systematic errors depend on the object's characteristics, such as its colour, surface, and quality.
### Equipment
In this paper, we chose a MESA SR4000 as the depth camera. It generates a depth map and an intensity map of 176 x 144 pixels with 16-bit floating-point precision. This depth camera can estimate distances up to 10 m; detailed specifications are given in Table 1.
## 3 Depth Map Mosaicking
There is an important difference between mosaicking a depth camera's output and mosaicking a stereo camera's output. Figure 2 shows the process for each. In the case of a stereo camera, the relative orientation between stereo images is estimated and stereo matching is applied to generate 3D point clouds. For mosaicking depth maps, these processes can be skipped, since the camera generates a depth map as well as an intensity map; the whole processing chain is therefore simplified. On the other hand, the intensity map available from a depth camera has very poor quality compared to an image from a stereo camera, so automated tiepoint extraction from a depth camera can be challenging.
For mosaicking, we need to estimate the geometric relationship between two datasets. For 2D-2D image mosaicking, affine, homography, or coplanarity models can be used. In our case, we aim to mosaic 3D point clouds, so we need to estimate the 3D geometric relation between the two datasets; we use a 3D similarity transformation. This is also challenging in that we need to convert 2D tiepoints into 3D by attaching a depth value to each tiepoint, which introduces depth errors into the 3D tiepoints. Precise estimation of a 3D-3D transformation with noisy 3D tiepoints is challenging as well.
We propose the following five steps to mosaic depth maps.
### Taking Image
We take multiple images from the depth camera by installing it on a tripod and rotating it; the tripod eliminates possible height variation among the perspective centres of the scenes. After set-up, we rotate the camera clockwise or counterclockwise.
A depth map and an intensity map are generated from the source data after removing some system noise and correcting for lens distortion.
### Extracting Tiepoints
As mentioned, we have to know the relationships between adjacent maps for mosaicking. To estimate them, we first apply tiepoint extraction; it is very important that there is enough texture information for extracting tiepoints.
We extracted tiepoints between adjacent images using FAST for detecting keypoints and SIFT as the descriptor for matching. The FAST algorithm decides features by considering a circle of sixteen pixels around each interest point ([PERSON] et al., 2006). The SIFT algorithm selects features such as corner points and extracts characteristic vectors from a local patch around each feature ([PERSON] and [PERSON], 2004).
### Generating 3D Coordinate
Extracted tiepoints are expressed as 2D image coordinates (C, R). But the transformation relationship between depth maps is
Table 1: MESA SR4000 Specification
Figure 1: Errors of depth camera image
Figure 2: Process of mosaicking stereo images and depth maps.
a 3D-3D transformation. This means that we have to convert the tiepoints' coordinates into 3D coordinates. We use the depth value of the tiepoint pixel and Eq. (3) to convert a 2D image coordinate (C, R) into 3D coordinates (X, Y, Z) in the camera frame.
\[\begin{split}\text{X}&=\frac{(C-C_{0})\times Depth}{F} \\ \text{Y}&=-\frac{(R-R_{0})\times Depth}{F}\\ \text{Z}&=-Depth\end{split} \tag{3}\]
where \(C_{0},R_{0}=\text{image coordinates of principal point}\)
F = focal length / CCD pixel size
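Eq. (3) in code; the principal point and focal-length ratio below are illustrative placeholder values, not the SR4000's actual calibration:

```python
def pixel_to_camera(c, r, depth, c0=88.0, r0=72.0, f_ratio=250.0):
    """Eq. (3): 2D pixel (C, R) plus its depth -> camera-frame (X, Y, Z).

    c0, r0:  principal point (illustrative values).
    f_ratio: focal length divided by the CCD pixel size (illustrative).
    """
    x = (c - c0) * depth / f_ratio
    y = -(r - r0) * depth / f_ratio
    z = -depth
    return x, y, z
```

Applying this to every matched tiepoint pixel turns the 2D tiepoints into the 3D tiepoints used in the next step.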
### Estimating Transformation Coefficients
The converted 3D coordinates of each depth map are in individual camera frames. We have to unify every depth map's coordinate frame for mosaicking. As mentioned above, we use a 3D similarity transformation for this step (Eq. 4).
\[\begin{bmatrix}X_{i-1}\\ Y_{i-1}\\ Z_{i-1}\end{bmatrix}=S_{i-1}R_{i-1}(\kappa,\varphi,\omega)\begin{bmatrix}X_{i} \\ Y_{i}\\ Z_{i}\end{bmatrix}+\begin{bmatrix}b_{x}\\ b_{y}\\ b_{z}\end{bmatrix} \tag{4}\]
where \(X_{i},Y_{i},Z_{i}\) = 3D tiepoint coordinates in the \(i\)-th camera frame
In Eq. (4), we estimate the scale \(S\), the translation \((b_{x},b_{y},b_{z})\), and the rotation angles \((\kappa,\varphi,\omega)\) through least-squares estimation. To handle outliers due to noisy tiepoints and noisy tiepoint depths, a robust estimation based on the RANSAC algorithm is applied. This step estimates the transformation coefficients between adjacent maps.
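A hedged, planar stand-in for the estimation of Eq. (4): since the tripod restricts the rotation to the vertical axis, the joint estimation of scale, rotation, and translation can be illustrated with a 2D least-squares similarity fit over complex points (all names are ours; the paper's actual estimator is a full 3D similarity transform wrapped in RANSAC):

```python
import cmath

def similarity_2d(src, dst):
    """Least-squares fit of dst ~= a*src + t over complex points.

    a = s*exp(i*theta) jointly encodes scale and rotation; t is the
    translation. This is a planar analogue of Eq. (4), assuming the
    tripod restricts rotation to the vertical axis.
    """
    n = len(src)
    pm = sum(src) / n                       # centroids
    qm = sum(dst) / n
    num = sum((q - qm) * (p - pm).conjugate() for p, q in zip(src, dst))
    den = sum(abs(p - pm) ** 2 for p in src)
    a = num / den
    t = qm - a * pm
    return a, t
```

A RANSAC wrapper around this estimator would repeatedly fit the model on random minimal subsets of tiepoints and keep the estimate with the largest inlier count, which is how noisy tiepoints and noisy depths are handled.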
### Making Depth map Mosaic
After the transformation coefficients of all depth maps have been estimated, we build the depth map mosaic. We set the area of the mosaic image by expanding the first captured map: we apply the 3D transformation to each map's corner points, applying it repeatedly when the image's order is greater than two. This gives each map's corner coordinates with respect to the first captured map, and from these corner coordinates we calculate the area of the whole mosaic image. Figure 3 shows this step.
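The corner-chaining step can be sketched in 2D with complex points (an illustrative simplification of the paper's 3D corner transformation; all names are ours): each map's corners are mapped back to the reference frame by composing the pairwise transforms, and the mosaic extent is their bounding box.

```python
def to_reference(p, transforms):
    """Map a complex point from frame i to frame 0 by applying the
    pairwise transforms (a, t): p_{k-1} = a_k * p_k + t_k, for k = i..1."""
    for a, t in reversed(transforms):
        p = a * p + t
    return p

def mosaic_extent(corner_sets, pairwise):
    """Bounding box of every map's corners in the reference frame.

    corner_sets[i]: complex corners of map i.
    pairwise[k]:    (a, t) mapping frame k+1 into frame k.
    """
    pts = list(corner_sets[0])
    for i in range(1, len(corner_sets)):
        pts += [to_reference(c, pairwise[:i]) for c in corner_sets[i]]
    xs = [p.real for p in pts]
    ys = [p.imag for p in pts]
    return min(xs), min(ys), max(xs), max(ys)
```

The repeated application of the transformation for maps further from the reference is exactly the composition loop in `to_reference`, which is also where tiepoint errors accumulate along the chain.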
To build the depth map mosaic, we write the depth values into the relevant pixels. For the first depth map's area, we write the original depth values; for the second and subsequent areas, an intelligent ray-tracing scheme is needed to handle the resampling of depth maps in 3D space. Figure 4 shows this step. As a result, the mosaic image is generated.
## 4 Experiments and Analysis
### Depth Map Mosaic
In this paper, we conducted experiments to validate the proposed method. Our depth camera, a MESA SR4000, was installed on a tripod above a flat indoor floor, and we tested the method by mosaicking 8 depth maps.
First, we captured depth maps by rotating the depth camera clockwise by roughly 10 degrees per shot; the angular difference between the first and the eighth image is thus about 80 degrees. Through this step, we acquired 8 depth maps and intensity maps of 176 x 144 pixels. Figure 5 shows these maps.
From the 8 depth maps, we formed 7 depth map pairs. For each pair, we conducted tiepoint extraction and 3D similarity transformation estimation. For the depth map mosaic, we calculated the area of the mosaic and resampled each map. Figure 6 shows the output; we can visually confirm that depth map mosaicking was applied successfully.
### Verifying Accuracy
For verifying transformation accuracy, we calculated the RMSE of the differences between the 3D coordinates of tiepoints in the reference depth map and the 3D coordinates of the corresponding tiepoints in the paired depth map after applying the 3D similarity transformation. Table 3 shows the results. We can observe that errors in the depth direction were larger than errors in the X and Y directions. This is because we used noisy depth values at noisy tiepoint locations: the Z coordinates calculated by our method combine errors in the depth values with errors in the 2D tiepoint coordinates.
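The accuracy check can be sketched as a per-axis RMSE of the residuals after applying the estimated transformation; a minimal illustration with assumed names:

```python
import numpy as np

def tiepoint_rmse(ref_pts, pair_pts, R, t):
    """Per-axis RMSE between reference tiepoints (N,3) and the paired
    tiepoints (N,3) after applying the estimated rotation R and translation t."""
    residuals = ref_pts - (pair_pts @ R.T + t)
    return np.sqrt((residuals ** 2).mean(axis=0))  # (rmse_x, rmse_y, rmse_z)
```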
Figure 3: Deciding the area of the mosaic image

Figure 4: Depth map ray tracing
It is also notable that errors increased as the matching pairs moved further from the reference image (image number 1). We attribute this error propagation to the accumulated effect of noisy tiepoints.
### Processing Time
We measured mosaicking time to test the feasibility of real-time processing. Table 4 shows the processing time for extracting tiepoints and estimating the 3D transformation coefficients.
Extracting 3D tiepoints from a depth map pair took roughly 1 second, and estimating the 3D transformation took 0.5 to 1.2 seconds. For generating a depth map mosaic, it took 7.9 msec to set the area of the mosaic image and 408.6 msec to make the depth map. On average, making one depth map mosaic took roughly 2 seconds. Processing time was measured on a Core2 Quad Q9550 CPU (4 GHz clock), 8 GB RAM, and a GeForce 9800 GT graphics card. For real-time processing, algorithm optimization is required.
## 5 Conclusion
This paper proposed a method for automated mosaicking of multiple 3D point clouds generated from a depth camera. The experiments showed that the method can be applied to automated mosaicking of multiple 3D point clouds. However, our method exhibited an error propagation problem and was somewhat slow for real-time processing, so we will study further how to handle error propagation and how to reduce processing time.
Figure 5: Source Images
Figure 6: Result images of mosaicking
Table 3: RMSE of matching pairs’ tiepoint coordinates (X, Y, Z)

| Matching pair | RMSE of X (mm) | RMSE of Y (mm) | RMSE of Z (mm) |
|---|---|---|---|
| 1 & 2 | 12.5183 | 5.8769 | 19.9858 |
| 2 & 3 | 12.3145 | 7.8861 | 19.9046 |
| 3 & 4 | 16.3924 | 4.4648 | 23.7226 |
| 4 & 5 | 9.1252 | 9.7904 | 23.5761 |
| 5 & 6 | 6.2667 | 9.3772 | 23.7884 |
| 6 & 7 | 11.3569 | 8.1623 | 26.2264 |
| 7 & 8 | 11.4030 | 7.6917 | 24.2967 |
| Average | 11.3396 | 7.6070 | 23.0715 |
---

ISPRS. AUTOMATED MOSAICKING OF MULTIPLE 3D POINT CLOUDS GENERATED FROM A DEPTH CAMERA. H. Kim, W. Yoon, T. Kim. https://doi.org/10.5194/isprs-archives-xli-b3-269-2016. 2016. CC-BY.

---
Experimental Comparison between Mahoney and Complementary Sensor Fusion Algorithm for Attitude Determination by Raw Sensor Data of Xsens IMU on Buoy
[PERSON], [PERSON], [PERSON]
1 University of Tehran, School of Surveying and Geospatial Engineering, Tehran, Iran - (a.jouybari, ardalan) @ut.ac.ir
2 University of Tasmania, School of Land and Food, Hobart, Tasmania, Australia - [PERSON]

### Abstract
The accurate measurement of platform orientation plays a critical role in a range of applications including marine, aerospace, robotics, navigation, human motion analysis, and machine interaction. We used the Mahoney filter, the Complementary filter, and the Xsens Kalman filter to obtain the Euler angles of a dynamic platform by integrating gyroscope, accelerometer, and magnetometer measurements. A field test was performed on Kish Island using an IMU sensor (Xsens MTi-G-700) installed onboard a buoy, providing raw gyroscope, accelerometer, and magnetometer measurements for about 25 minutes. These raw data were used to calculate Euler angles with the Mahoney and Complementary filters, while the Euler angles reported by the Xsens IMU sensor served as the reference. We then compared the Euler angles calculated by the Mahoney and Complementary filters against the reference Euler angles recorded by the Xsens IMU sensor. The standard deviations of the differences between the Mahoney filter, Complementary filter Euler angles and the Xsens Euler angles were about 0.5644, 0.3872, 0.4990 degrees and 0.6349, 0.2621, 2.3778 degrees for roll, pitch, and heading, respectively. The numerical results indicate that the Mahoney filter is more precise for roll and heading angle determination, whereas the Complementary filter is more precise only for pitch determination; heading determination by the Complementary filter has a larger error than the Mahoney filter.
Keywords: Xsens Kalman Filter, Mahoney Filter, Complementary Filter, Integration, Raw Data, IMU
## 1 Introduction
Different kinds of technologies enable the measurement of orientation; inertial sensory systems have the advantage of being completely self-contained, such that the measurement is independent of motion, environment, and location. An IMU (Inertial Measurement Unit) contains gyroscopes and accelerometers, enabling the tracking of rotational and translational movements. In order to measure in three dimensions, tri-axis sensors consisting of 3 mutually orthogonal sensitive axes are required. A MARG (Magnetic, Angular Rate, and Gravity) sensor combines an IMU with a tri-axis magnetic sensor. An IMU alone can only measure attitude relative to the direction of gravity, which is sufficient for many applications ([PERSON] et al., 2007; [PERSON] et al., 2004). MARG systems, or AHRS (Attitude and Heading Reference Systems), are able to provide a complete measurement of orientation relative to the direction of gravity and the earth's magnetic field.
A gyroscope measures angular velocity, from which sensor orientation can be computed over time if initial conditions are known ([PERSON], 1971; [PERSON], 1990). Precision gyroscopes are too expensive for most applications, so low-accuracy MEMS (Micro Electrical Mechanical System) devices are used in the majority of applications ([PERSON] et al., 1998). Integrating gyroscope measurement errors causes an accumulating error in the computed orientation; therefore, a gyroscope by itself cannot provide a complete measurement of orientation. The accelerometer measures the earth's gravitational field and the magnetometer measures the magnetic field; thus, alongside a gyroscope, they create an absolute reference of orientation. However, these sensors are likely to be subject to high levels of noise; for example, the measured direction of gravity will be corrupted by noise due to the motion of the platform. The task of an orientation filter is to compute a single estimate of orientation through the optimal fusion of gyroscope, accelerometer, and magnetometer measurements.
Today, the Kalman filter ([PERSON], 1960) plays an important role in the majority of orientation filter algorithms ([PERSON], 1996; [PERSON] et al., 1999; [PERSON], 2001) and commercial inertial orientation sensors. Various commercial inertial systems use Kalman-based algorithms, for example, Xsens (Xsens Technologies, 2009), MicroStrain (MicroStrain, 2009), VectorNav (VectorNav, 2009), InterSense (InterSense, 2008), PNI (PNI sensor corporation), and Crossbow (Crossbow, 2007). Kalman-based algorithms for orientation determination from a sensor's raw data have a number of disadvantages; nevertheless, their widespread use attests to their good accuracy and effectiveness. Implementation of a Kalman-based algorithm can be very complicated ([PERSON] et al., 2009; [PERSON] and [PERSON], 1995; [PERSON], 1996; [PERSON] et al., 1999; [PERSON] et al., 2001; [PERSON], 2006; [PERSON] and [PERSON], 2006). ([PERSON] et al., 2008) developed the complementary filter, which is shown to be an efficient and effective solution; however, its performance is only validated for an IMU.
We used the Mahoney and Complementary filters for orientation determination from raw data acquired by the accelerometer, gyroscope, and magnetometer. Their performance is benchmarked against an existing commercial filter, the Xsens Kalman Filter (XKF3i).
## 2 Main Body
### The Complementary Filter
When looking for the best way to make use of an IMU sensor, i.e. to combine the accelerometer and gyroscope data, many people are tempted to use the very powerful but complex Kalman filter. While the Kalman filter is great, there are two big problems with it that make it hard to use: it is very complex to understand and very hard to implement.
The Complementary filter, in contrast, is extremely easy to understand, and even easier to implement. Most IMUs have 6 DOF (Degrees of Freedom): there are 3 accelerometers and 3 gyroscopes inside the unit. In principle, such an IMU could measure the precise position and orientation of the object it is attached to, because an object in free space has 6 DOF; if we could measure them all, we would know everything. In practice, however, the sensor data is not good enough to be used in this way.
We will use both the accelerometer and the gyroscope data for the same purpose: obtaining the attitude of the object. The gyroscope can do this by integrating the angular velocity over time. To obtain the attitude from the accelerometer, we determine the direction of the gravity vector (g-force), which is always visible to the accelerometer; this can easily be done using an atan2 function. In both cases there is a big problem which makes the data very hard to use without a filter.
The problem with accelerometers:
As an accelerometer measures all forces acting on the object, it will see a lot more than just the gravity vector. Every small force acting on the object disturbs our measurement completely. If we are working on an actuated system, the forces that drive the system will be visible on the sensor as well. The accelerometer data is reliable only in the long term, so a "low-pass" filter has to be used.
The problem with gyroscopes:
It is possible to obtain the angular position with a gyroscope, and it is easy to obtain an accurate measurement that is not susceptible to external forces. The bad news is that, because of the integration over time, the measurement tends to drift, not returning to zero when the system goes back to its original position. The gyroscope data is reliable only in the short term, as it starts to drift in the long term.
The complementary filter gives us the best of both worlds. In the short term, we use the data from the gyroscope, because it is very precise and not susceptible to external forces. In the long term, we use the data from the accelerometer, as it does not drift. In its simplest form, the filter looks as follows:
\[angle=0.98\times(angle+gyroData\times dt)+0.02\times(accData)\]
The gyroscope data is integrated at every timestep with the current angle value; after this, it is combined with the low-pass data from the accelerometer (already processed with atan2). The constants (0.98 and 0.02) have to add up to 1, but can of course be changed to tune the filter properly. In this form it is also easy to compare the Complementary filter with the Kalman filter.
The Complementary filter algorithm is designed to be repeated in an infinite loop. At every iteration, the pitch and roll angle values are updated with the new gyroscope values by integration over time. The filter then checks whether the magnitude of the force seen by the accelerometer has a reasonable value that could be the real g-force vector: if the value is too small or too big, we know for sure that it is a disturbance we should not take into account. Otherwise, it updates the pitch and roll angles with the accelerometer data by taking 98% of the current value and adding 2% of the angle calculated by the accelerometer. This ensures that the measurement will not drift, yet remains very accurate in the short term (Jan, 2013).
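The loop described above can be sketched as follows; the axis conventions, gains, and the bounds of the plausibility gate on the accelerometer magnitude are illustrative assumptions, not a reference implementation:

```python
import math

def complementary_update(pitch, roll, gyro, accel, dt, alpha=0.98,
                         g_min=0.5, g_max=2.0):
    """One iteration of a complementary filter.

    gyro: (gx, gy) angular rates in rad/s; accel: (ax, ay, az) in units of g.
    The accelerometer correction is skipped when the measured force magnitude
    is not plausibly the gravity vector (too small or too big)."""
    gx, gy = gyro
    pitch += gx * dt          # integrate angular rate (short-term estimate)
    roll += gy * dt
    ax, ay, az = accel
    mag = math.sqrt(ax * ax + ay * ay + az * az)
    if g_min < mag < g_max:   # plausibility gate on |a|
        # blend: 98% gyro-integrated angle + 2% accelerometer angle (atan2)
        pitch = alpha * pitch + (1 - alpha) * math.atan2(ay, az)
        roll = alpha * roll + (1 - alpha) * math.atan2(-ax, az)
    return pitch, roll
```

Run in a loop, the 2% accelerometer term slowly pulls any gyroscope drift back toward the gravity reference, while the 98% gyroscope term keeps the short-term response precise.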
### Xsens Kalman Filter (XKF3i)
The orientation of the IMU sensor (Xsens MTi-G-700) is computed by the Xsens Kalman Filter. XKF3i uses the signals of the rate gyroscopes, accelerometers, and magnetometers to compute a statistically optimal 3D orientation estimate of high accuracy with no drift for both static and dynamic movements. XKF3 is a proven sensor fusion algorithm, which can be found in various products from Xsens and its partners.
The design of the XKF3i algorithm can be summarized as a sensor fusion algorithm in which the measurements of gravity (by the 3D accelerometers) and of Earth's magnetic north (by the 3D magnetometers) compensate for the slowly but unboundedly increasing (drift) errors from the integration of rate-of-turn data (angular velocity from the rate gyros). This type of drift compensation is often called attitude and heading referencing, and such a system is referred to as an Attitude and Heading Reference System (AHRS) (MTi User Manual, 2015).
### Study area
The study area selected is Kish Island in the Persian Gulf, southern Iran, at coordinates 26°32′N 53°58′E (Fig. 2).
Figure 1: Complementary filter process schematic (SegBot, 2014)
### Data sets
The field test and data acquisition were carried out in June 2016 on a Kish Island beach. As shown in Fig. 3, a lightweight buoy with an onboard Xsens inertial sensor was used (Fig. 4). The inertial sensor needs an electrical power supply during data acquisition; therefore, a boat was used to carry a battery and to restrain the buoy.
IMU data were acquired at an 8 Hz data rate over 25 minutes. In addition to the accelerometer (Fig. 5), gyroscope (Fig. 6), and magnetometer data (Fig. 7), attitude data computed with the Xsens Kalman Filter were also acquired.
### Evaluation result
From the figures above it can be deduced that, in addition to noise, the observations contain drift and bias. In the following, the Mahoney, Complementary, and Xsens Kalman filters are used for attitude determination from the raw sensor data, as shown in Fig. 8, where the three attitude plots closely approximate one another.
Figure 4: Xsens IMU Sensor
Figure 5: tri-axis accelerometer data
Figure 3: Lightweight buoy with IMU
Figure 6: tri-axis gyroscope data
Figure 7: tri-axis magnetometer data

It should be noted that we used the Xsens Kalman Filter algorithm as the reference algorithm, assumed free of drift and bias; so, to evaluate the accuracy and precision of the Mahoney and Complementary filters, we compared them with the Xsens Kalman filter algorithm, as shown in Fig. 9 and Fig. 10.
The standard deviations of this comparison are listed in Table 1. According to Table 1, the mean differences between the Mahoney filter and XKF3i for roll, pitch, and heading angles are approximately -1.36\({}^{\circ}\)10\({}^{\circ}\), 1.73\({}^{\circ}\)10\({}^{\circ}\), and 0.1855, respectively. The standard deviations of the differences between the Mahoney filter and XKF3i for roll, pitch, and heading angles are approximately 0.5644, 0.3872, and 0.4990 degrees, while the standard deviations of the differences between the Complementary filter and XKF3i are approximately 0.6349, 0.2621, and 2.3778 degrees, respectively.
Figures 11, 12, and 13 show the roll, pitch, and heading standard-deviation diagrams for the Mahoney and Complementary filters. As is clear from Fig. 11, the standard deviation of the Mahoney algorithm is lower than that of the Complementary algorithm; therefore, the Mahoney algorithm is more accurate for roll angle determination.
This does not hold for pitch angle determination (Fig. 12): because of its lower standard deviation, the Complementary algorithm is more accurate for pitch.
Finally, it can be claimed that the Complementary algorithm is not appropriate for heading angle determination, due to its greater standard deviation with respect to the Mahoney algorithm.
## 3 Conclusion
In this research, we used the Mahoney, Complementary, and XKF3i algorithms for attitude determination from the raw data of an accelerometer, gyroscope, and magnetometer. To collect data, a field test with a lightweight buoy carrying an onboard Xsens IMU was conducted at Kish Island. Each algorithm was compared with XKF3i for accuracy evaluation; the results show that the Complementary algorithm is preferable only for pitch angle determination, while the Mahoney algorithm is more accurate for roll and heading angle determination. Accordingly, the presented algorithms may be used in fields such as marine engineering, hydrography, and oceanography.
## Acknowledgements
The authors would like to thank the hydrographic office of the National Cartographic Center of Iran for helping to build the buoy and for the data acquisition at Kish Island.
## References
* [PERSON] (1995) [PERSON] and [PERSON], 1995. Inertial navigation systems for mobile robots. 11(3), pp. 328-342.
* [PERSON] (1971) [PERSON], 1971. A new mathematical formulation for strapdown inertial navigation. (1), pp. 61-66.
* Crossbow Technology Inc (2007) Crossbow Technology Inc, 2007. AHRS400 Series User's Manual. 4145 N. First Street, San Jose, CA 95134, rev. c edition.
* [PERSON] et al. (2007) [PERSON], [PERSON], [PERSON], [PERSON], and [PERSON], 2007. A complementary filter for attitude estimation of a fixed-wing UAV with a low-cost IMU. In: 6th International Conference on Field and Service Robotics.
* [PERSON] (1996) [PERSON], 1996. Inertial head-tracker sensor fusion by a complementary separate-bias kalman filter. In Proc. Virtual Reality Annual International Symposium, the IEEE 1996, pp. 185-194.
* [PERSON] (1990) [PERSON], 1990. Optimal strapdown attitude integration algorithms. In Guidance, Control, and Dynamics, volume 13, pp 363-369.
* InterSense (2008) InterSense, Inc., 2008. InertiaCube2+ Manual. 36 Crosby Drive, Suite 150, Bedford, MA 01730, USA, 1.0 edition.
* Jan (2013) Jan, P., 2013. Reading an IMU Without Kalman: The Complementary Filter. http://www.pieter-jan.com/node/11 (accessed 26 Apr. 2013).
* [PERSON], [PERSON], and [PERSON] (2009) [PERSON], [PERSON] and [PERSON], 2009. A robust gyroless attitude estimation scheme for a small fixed-wing unmanned aerial vehicle, pp. 666-671.
* [PERSON] (1960) [PERSON], 1960. A new approach to linear filtering and prediction problems. Journal of Basic Engineering, 82, pp. 35-45.
* [PERSON] et al. (1999) [PERSON], [PERSON] and [PERSON], 1999. Estimation of orientation with gyroscopes and accelerometers. In Proc. First Joint [Engineering in Medicine and Biology 21 st Annual Conf. and the 1999 Annual Fall Meeting of the Biomedical Engineering Soc.] BMES/EMBS Conference, volume 2, pp. 844.
* [PERSON] and [PERSON] (2004) [PERSON] and [PERSON], 2004. Inclination measurement of human movement using a 3-d accelerometer with autocalibration. 12(1), pp. 112-121.
* [PERSON] and [PERSON] (2006) [PERSON] and [PERSON], 2006. Measuring orientation of human body segments using miniature gyroscopes and accelerometers. Medical and Biological Engineering and Computing, 43(2), pp. 273-282.
* [PERSON] et al. (2008) [PERSON], [PERSON] [PERSON] [PERSON], 2008. Nonlinear complementary filters on the special orthogonal group. Automatic Control, IEEE Transactions on, 53(5), pp. 1203-1218.
* [PERSON] et al. (2001) [PERSON], [PERSON], [PERSON], [PERSON] and [PERSON], 2001. An extended kalman filter for quaternion-based orientation estimation using marg sensors. In Proc. IEEE/RSJ International Conference on Intelligent Robots and Systems, volume 4, pp. 2003-2011.
* MicroStrain Inc. (2009) MicroStrain Inc., 2009. 3 DM-GX3 -25 Miniature Attitude Heading Reference Sensor, 459 Hurricane Lane, Suite 102, Williston, VT 05495 USA, 1.04 edition.
* MTi User Manual (2015) MTi User Manual, 2015. MTi 10-series and MTi 100-series. Document MTD605P, Revision F, 27 February 2015.
* PNI sensor corporation (2009) PNI sensor corporation., Space point Fusion. 133 Aviation Blvd, Suite 101, Santa Rosa, CA 95403-1084 USA.
* [PERSON] (2006) [PERSON], 2006. Quaternion-based extended kalman filter for determining orientation by inertial and magnetic sensing. 53(7), pp. 1346-1356.
* SegBot (2014) SegBot, 2014. Complementary Filter. http://www.arxterra.com/segbot-complementary-filter/ (accessed 04 Dec. 2014).
* VectorNav Technologies (2009) VectorNav Technologies, 2009. LLC. VN -100 User Manual. College Station, TX 77840 USA, preliminary edition.
* Xsens Technologies (2009) Xsens Technologies B.V., 2009. MTi and MTx User Manual and Technical Documentation. Pantheon 6a, 7521 PR Enschede, The Netherlands.
Fig. 13: Mahoney and Complementary heading

* [PERSON] et al. (1998) [PERSON], [PERSON] and [PERSON], 1998. Micromachined inertial sensors. 86(8), pp. 1640-1659.
---

ISPRS. EXPERIMENTAL COMPARISON BETWEEN MAHONEY AND COMPLEMENTARY SENSOR FUSION ALGORITHM FOR ATTITUDE DETERMINATION BY RAW SENSOR DATA OF XSENS IMU ON BUOY. A. Jouybari, A. A. Ardalan, M.-H. Rezvani. https://doi.org/10.5194/isprs-archives-xlii-4-w4-497-2017. 2017. CC-BY.

---
# Co-Registration of 3D Point Clouds by Using an Errors-In-Variables Model
[PERSON]
[PERSON]
[PERSON]
[PERSON]
1 Istanbul Technical University (ITU), Faculty of Civil Engineering, Department of Geomatics Engineering, 34469 Maslak, Istanbul, Turkey - (aydaru, oaltan, akyilma2)@itu.edu.tr
1 Isik University, Faculty of Engineering, Department of Civil Engineering, 34980 Sile, Istanbul, Turkey - ([EMAIL_ADDRESS])
estimation values, alternative approaches which take the stochastic properties of the elements of the design matrix into consideration should be applied. The problem can be solved by using a model known in the literature as the Errors-in-Variables (EIV) model. ([PERSON] and [PERSON], 2007) outlined different solution methods and application areas of the EIV model in detail. ([PERSON] and [PERSON], 1997) proposed using the total least squares (TLS) approach for the registration of dimensional data. They used a mixed solution combining the least squares and total least squares methods for the registration of 2D medical images; however, they did not give any information about the precision of the transformation parameters. ([PERSON], 2007) used the total least squares method for coordinate transformation in geodetic applications; since a closed-form solution method is used in that study, there is no information about the precision of the estimated parameters either. A mathematical model was given by ([PERSON], 2010), where an iterative Gauss-Helmert type of adjustment model with linearized condition equations is adopted. However, in this method the size of the normal equations to be solved increases dramatically with the number of conjugate points, since each corresponding point pair introduces three more Lagrange multipliers into the normal equations. Thus, the larger the number of conjugate points, the larger the normal equations to be solved.
For an optimal solution of the so-called EIV problem, we propose a modified iterative Gauss-Helmert type of adjustment model. In this model, the rotation matrix \(R\) is represented in terms of unit quaternions \(q=\{q_{0}\ q_{1}\ q_{2}\ q_{3}\}\) in order (i) to satisfy the special structure of the design matrix \(A\) and (ii) to reduce the number of iterations for fast convergence. Moreover, the dimension of the normal equation matrix to be solved is reduced to the number of unknown transformation parameters, which is six for the rigid-body transformation problem. The mathematical model has been implemented in the MATLAB programming environment. This work aims at comparing the proposed TLS parameter estimation model with the conventional LS model for the point cloud co-registration problem.
## 2 Errors-in-Variables Model
The aim of co-registration process is to transform the search surface with respect to the template surface by establishing the correspondences between two overlapping data sets. Once the appropriate point correspondences are established between two point data sets, the basic computation is to estimate the transformation parameters using the point correspondences. The geometric relationship is established by the six parameters of the 3D rigid-body transformation.
In the conventional Gauss-Markov model, Eq. (1) represents the observation equation, which assumes the template surface elements are the observations and the only part contaminated by random errors. In the EIV model, by contrast, the search surface elements are also erroneous, and a true error vector should be added to these elements as well. The observation equations in the EIV model are formed as
\[y\ +e_{y}=t+R*(x+e_{x}). \tag{3}\]
If we apply this model to 3D rigid-body transformation, the mathematical model is established as;
\[l+v_{l}=(A+v_{A})\,\beta \tag{4}\]

where \(v_{l}\) is the \(n\times 1\) vector of observation errors and \(v_{A}\) is an \(n\times m\) error matrix for the corresponding elements of the design matrix. Elements of both \(v_{l}\) and \(v_{A}\) are independently and identically distributed with zero mean. Once a minimizing pair \([\hat{v}_{A};\,\hat{v}_{l}]\) is found, any \(\beta\) satisfying \((A+\hat{v}_{A})\,\beta=l+\hat{v}_{l}\) is the TLS solution of the problem.
### Proposed Modified Gauss-Helmert Model
Generalized total least squares solution of the 3D-similarity transformation by introducing the quaternions as the representation of the rotation matrix*scale factor (S=8) based on iteratively linearized Gauss-Helmert model has been successfully presented by ([PERSON], 2011). However, this model requires the solution of a normal matrix which includes the corresponding terms for transformation parameters as well as the Lagrange multipliers, thus yielding a larger size of system of equations to be solved at each iteration with the increase of the identical points of the transformation problem.
Following the idea in ([PERSON], 2011), ([PERSON] and [PERSON], 2012) developed a new computational scheme for the 3D similarity transformation, which they call the _Modified Iterative Gauss-Helmert_ model, by eliminating the Lagrange multipliers so that the size of the normal matrix is dramatically reduced. In other words, the unknowns to be solved at each iteration are equal to seven, i.e. the number of transformation parameters. This reduction provides an advantage especially in terms of computation. We refer to ([PERSON] and [PERSON], 2012) for details of the mathematical model. The Modified Gauss-Helmert model in ([PERSON] and [PERSON], 2012) is a seven-parameter similarity transformation; therefore, in our study, we modified the model by eliminating the scale factor in order to apply the 6-parameter rigid-body transformation. For this purpose we normalize the quaternion by using the equality \(q_{0}^{2}+q_{1}^{2}+q_{2}^{2}+q_{3}^{2}=1\). Then the rotation matrix defined by the quaternions is obtained as
\[S=\begin{bmatrix}-2q_{2}^{2}-2q_{3}^{2}+1&2q_{1}q_{2}-2q_{3}N&2q_{1}q_{3}+2q_{2}N\\ 2q_{1}q_{2}+2q_{3}N&-2q_{1}^{2}-2q_{3}^{2}+1&2q_{2}q_{3}-2q_{1}N\\ 2q_{1}q_{3}-2q_{2}N&2q_{2}q_{3}+2q_{1}N&-2q_{1}^{2}-2q_{2}^{2}+1\end{bmatrix} \tag{5}\]

\[N=\sqrt{-q_{1}^{2}-q_{2}^{2}-q_{3}^{2}+1}\]
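A sketch of building the rotation matrix from the three free quaternion parameters under the unit-norm constraint, recovering \(q_0=N\); the standard unit-quaternion rotation form is assumed, and the function name is illustrative:

```python
import numpy as np

def rotation_from_quat3(q1, q2, q3):
    """Rotation matrix from (q1, q2, q3), with q0 = N recovered from the
    unit-norm constraint q0^2 + q1^2 + q2^2 + q3^2 = 1."""
    N = np.sqrt(1.0 - q1 * q1 - q2 * q2 - q3 * q3)  # q0
    return np.array([
        [1 - 2 * (q2 * q2 + q3 * q3), 2 * (q1 * q2 - q3 * N),     2 * (q1 * q3 + q2 * N)],
        [2 * (q1 * q2 + q3 * N),     1 - 2 * (q1 * q1 + q3 * q3), 2 * (q2 * q3 - q1 * N)],
        [2 * (q1 * q3 - q2 * N),     2 * (q2 * q3 + q1 * N),     1 - 2 * (q1 * q1 + q2 * q2)],
    ])
```

For example, \((q_1,q_2,q_3)=(0,0,\sin(\theta/2))\) yields a rotation by \(\theta\) about the z axis, and the result is always orthogonal with determinant 1.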
In this model, let \(a_{i}\) and \(b_{i}\) be the corresponding point pairs (\(i=1,\ldots,M\)); \(Q_{xx}[a_{i}]\) and \(Q_{xx}[b_{i}]\) their normalized covariance matrices; and \(\bar{a}_{i}\), \(\bar{b}_{i}\) the true positions of \(a_{i}\) and \(b_{i}\), respectively. The optimal estimation of the similarity transformation parameters \(R\) (rotation), \(T\) (translation), and \(s\) (scale factor) in the sense of Maximum Likelihood minimizes the Mahalanobis distance given as follows.
\[J=\frac{1}{2}\sum_{i=1}^{M}(a_{i}-\bar{a}_{i})^{T}Q_{xx}[a_{i}]^{-1}(a_{i}-\bar{a}_{i})+\frac{1}{2}\sum_{i=1}^{M}(b_{i}-\bar{b}_{i})^{T}Q_{xx}[b_{i}]^{-1}(b_{i}-\bar{b}_{i}) \tag{6}\]
subject to the constraint
\[\bar{a}_{i}=S\bar{b}_{i}+T \tag{7}\]
Since the model is non-linear, it is linearized by the Taylor Series expansion. Finally, the total error vector is defined as
\[e_{i}=a_{i}-S\,b_{i}-T \tag{8}\]

with the weight matrix
\[W_{i}=\left(S\,Q_{xx}[b_{i}]\,S^{T}+Q_{xx}[a_{i}]\right)^{-1} \tag{9}\]
After modifications, Eq. 6 can be expressed in the following form:
\[J=\frac{1}{2}\sum_{i=1}^{M}e_{i}^{T}W_{i}e_{i} \tag{10}\]
Differentiating (5) with respect to \(q_{i}\), \(i=1,2,3\), gives

\[\frac{\partial S}{\partial q_{i}}=2Q_{i}\]
We define the \(3\times 3\) matrix \(U_{i}\) as follows

\[U_{i}=[Q_{1}b_{i}\ \ Q_{2}b_{i}\ \ Q_{3}b_{i}] \tag{11}\]
After these definitions, the parameters are estimated by solving the following 6-D linear equation.
\[\begin{pmatrix}\sum_{i}^{M}U_{i}^{\top}W_{i}U_{i}&\sum_{i}^{M}U_{i}^{\top}W_{ i}\\ \sum_{i}^{M}W_{i}U_{i}&\sum_{i}^{M}W_{i}\\ \end{pmatrix}\begin{pmatrix}\Delta q\\ \Delta T\end{pmatrix}=\begin{pmatrix}\sum_{i}^{M}U_{i}^{\top}W_{i}e_{i}\\ \sum_{i}^{M}W_{i}e_{i}\end{pmatrix}\]
Since the model is non-linear, the initial approximations of \(q\) and \(T\) are updated and the iteration is repeated until convergence.
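Assembling and solving one iteration's 6-D normal system can be sketched as below; computing the Jacobian blocks \(U_i\) from the quaternion derivatives is assumed done upstream, and all names are illustrative:

```python
import numpy as np

def gh_step(a, b, S, T, W, U):
    """Assemble and solve the 6-D normal system for the increments (dq, dT).

    a, b: (M, 3) arrays of corresponding points; S, T: current rotation and
    translation; W: list of 3x3 weight matrices W_i; U: list of 3x3 blocks U_i."""
    N = np.zeros((6, 6))
    rhs = np.zeros(6)
    for ai, bi, Wi, Ui in zip(a, b, W, U):
        ei = ai - S @ bi - T          # total error vector for this pair
        N[:3, :3] += Ui.T @ Wi @ Ui
        N[:3, 3:] += Ui.T @ Wi
        N[3:, :3] += Wi @ Ui
        N[3:, 3:] += Wi
        rhs[:3] += Ui.T @ Wi @ ei
        rhs[3:] += Wi @ ei
    x = np.linalg.solve(N, rhs)       # only a 6x6 system, independent of M
    return x[:3], x[3:]               # (dq, dT)
```

The key point of the modified model is visible here: the system stays 6x6 no matter how many conjugate points are used, since the Lagrange multipliers have been eliminated.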
### Correspondence Search
The correspondence search has been carried out with respect to two different error metrics. The first is the point-to-point error metric introduced by ([PERSON] and [PERSON], 1992) in the original ICP paper. With this method, each available point in the template surface is matched with the point in the search surface that has the minimum Euclidean distance. This procedure is computationally complex and takes most of the processing time; it has been accelerated by using a kd-tree searcher in our implementation.
This error metric tries to find a correspondence for each point in the template surface, even in non-overlapping areas, which usually leads to false matches. It is possible to prevent such false matches by introducing some conditions. In our implementation, the first condition is a threshold on the Euclidean distances: point pairs whose Euclidean distance exceeds this value were excluded from the matching. The second condition is a boundary condition: points on the border of the object, as well as data holes inside the model, were excluded from matching. As the result of this step, the indexes of the best matching points in the two data sets and their Euclidean distances were obtained.
The second error metric is the point-to-plane algorithm introduced by ([PERSON] and [PERSON], 1991). The reason for using these two error metrics together is to take advantage of both methods. Although each iteration of the point-to-plane ICP algorithm is generally slower than the point-to-point version, researchers have observed significantly better convergence rates for the former ([PERSON], 2001). We therefore used the correspondences coming from the point-to-point algorithm to narrow the search area and speed up the point-to-plane search: a point's search area is limited to at most the 8 triangles neighboring the point matched in the first step. After this limitation, a significant decrease in processing time was observed. The correspondence operator then seeks the minimum-Euclidean-distance location on the limited search surface. Another issue must be taken into consideration here: the point with the minimum distance must lie within the related triangle when it is projected along the unit normal vector. Therefore, the point providing the minimum distance was projected onto the surface along the unit normal vector and the touch point was calculated. It was then checked whether that touch point lies inside the triangle by applying a point-in-polygon test. Points passing the point-in-polygon test were listed as corresponding points of the related points in the template data set.
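The projection and point-in-triangle check described above can be sketched with a barycentric-coordinate test; this is an illustrative version under assumed names, not the authors' MATLAB code:

```python
import numpy as np

def project_and_test(p, tri):
    """Project p onto the plane of triangle tri along its unit normal and
    check via barycentric coordinates whether the touch point lies inside."""
    a, b, c = tri
    n = np.cross(b - a, c - a)
    n = n / np.linalg.norm(n)                 # unit normal of the triangle
    touch = p - np.dot(p - a, n) * n          # foot point on the plane
    # barycentric coordinates of touch with respect to (a, b, c)
    v0, v1, v2 = b - a, c - a, touch - a
    d00, d01, d11 = v0 @ v0, v0 @ v1, v1 @ v1
    d20, d21 = v2 @ v0, v2 @ v1
    denom = d00 * d11 - d01 * d01
    v = (d11 * d20 - d01 * d21) / denom
    w = (d00 * d21 - d01 * d20) / denom
    inside = (v >= 0) and (w >= 0) and (v + w <= 1)
    return touch, inside
```

Only candidate points whose projected touch point passes the inside test would be retained as point-to-plane correspondences.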
### Experimental Results
The EIV model algorithm was implemented in the MATLAB programming language. Additionally, another implementation based on the Gauss-Markov model was written in MATLAB in order to compare the two models. The data set is a part of the _'Weary Herakles'_ statue, which was scanned with a Breuckmann optoTOP-HE coded structured light system. The average point spacing of the data is 0.5 mm.
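The difference between the Gauss-Markov and EIV assumptions can be illustrated on a toy 2-D line fit, where ordinary least squares treats only one coordinate as erroneous while total least squares (computed from the smallest right singular vector of the centred data) treats both. The data here are synthetic, and this NumPy sketch stands in for the authors' MATLAB implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 10, 50)
y = 2.0 * x + 1.0                         # true line: slope 2, intercept 1
x_obs = x + rng.normal(0, 0.3, x.size)    # errors in BOTH variables (EIV setting)
y_obs = y + rng.normal(0, 0.3, x.size)

# Gauss-Markov / ordinary least squares: only y is treated as erroneous
A = np.column_stack([x_obs, np.ones_like(x_obs)])
slope_ls, intercept_ls = np.linalg.lstsq(A, y_obs, rcond=None)[0]

# Total least squares: centre the data, take the smallest right singular vector
X = np.column_stack([x_obs - x_obs.mean(), y_obs - y_obs.mean()])
_, _, Vt = np.linalg.svd(X, full_matrices=False)
a, b = Vt[-1]                             # normal vector of the fitted line
slope_tls = -a / b
intercept_tls = y_obs.mean() - slope_tls * x_obs.mean()
```

With errors in x, the ordinary LS slope is systematically attenuated, which is the kind of bias the EIV model is designed to avoid.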
Figure 1: 3D comparison of TLS- and LS-registered data.
Figure 2: Residuals after TLS estimation.
## 3 Conclusion
The motivation of this study is to investigate the error behaviour of the parameter estimation of the rigid-body transformation by applying an EIV model, which treats both data sets as erroneous. An implementation was made in the MATLAB computing language for the comparison of the two models. A first experimental test with the _'Weary Herakles'_ data is presented; however, more tests with different data sets have to be carried out. Our future plan is to carry out more experiments using:
* Real data sets coming from different types of sensors.
* Synthetic data with various noise levels.
* Data sets that have different a posteriori covariance matrices.
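As a point of comparison, the standard least-squares (Gauss-Markov) solution of the rigid-body transformation has a well-known closed form via the SVD of the cross-covariance matrix (the Kabsch/Horn solution). A NumPy sketch, not the authors' implementation:

```python
import numpy as np

def rigid_transform_svd(P, Q):
    """Least-squares rigid-body transform (R, t) with q_i ~ R @ p_i + t,
    via the SVD of the cross-covariance matrix (Kabsch/Horn solution)."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                 # cross-covariance of centred sets
    U, _, Vt = np.linalg.svd(H)
    # sign correction so the result is a rotation, not a reflection
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = cq - R @ cp
    return R, t
```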
## References
* [PERSON], 2010. Co-registration of surfaces by 3D least squares matching. Photogrammetric Engineering and Remote Sensing, 76(3), 307-318.
* [PERSON], 2007. Total least squares solution of coordinate transformation. Survey Review, 39(303), 68-80.
* [PERSON], 2011. Solution of the heteroscedastic datum transformation problems. Abstracts of the 1st Int. Workshop on the Quality of Geodetic Observation and Monitoring Systems, April 2011, Garching/Munich, Germany.
* [PERSON] and [PERSON], 1992. A method for registration of 3-D shapes. IEEE Transactions on Pattern Analysis and Machine Intelligence, 14, 239-256.
* [PERSON] and [PERSON], 1991. Object modeling by registration of multiple range images. Proceedings of the 1991 IEEE International Conference on Robotics and Automation, vol. 3, pp. 2724-2729.
* [PERSON], 1985. Adaptive least squares correlation: a powerful image matching technique. South African Journal of Photogrammetry, Remote Sensing and Cartography, 14(3), 175-187.
* [PERSON] and [PERSON], 2005. Least squares 3D surface and curve matching. ISPRS Journal of Photogrammetry and Remote Sensing, 59(3), 151-174.
* [PERSON] and [PERSON], 2012. Optimal computation of 3-D similarity: Gauss-Newton vs. Gauss-Helmert. Memoirs of the Faculty of Engineering, Okayama University, 46, 21-33.
* [PERSON], [PERSON], and [PERSON], 2006. Least squares matching for airborne laser scanner data. Proceedings of the 5th International Symposium Turkish-German Joint Geodetic Days, 29-31 March, Berlin, Germany (CD-ROM).
* [PERSON], 2002. Methods for measuring height and planimetry discrepancies in airborne laserscanner data. Photogrammetric Engineering & Remote Sensing, 68(9), 933-940.
\begin{table}
\begin{tabular}{l l c c c c c c c} \hline Model & No. of & & \(T_{x}\) & \(T_{y}\) & \(T_{z}\) & \(\omega\) & \(\varphi\) & \(\kappa\) \\ & Matched & \(\sigma_{0}\) & \(\sigma_{T_{x}}\) & \(\sigma_{T_{y}}\) & \(\sigma_{T_{z}}\) & \(\sigma_{\omega}\) & \(\sigma_{\varphi}\) & \(\sigma_{\kappa}\) \\ & Points & (mm) & (mm) & (mm) & (mm) & (grad) & (grad) & (grad) \\ \hline TLS & 36941 & 0.0292 & 0.463 & 0.636 & -0.669 & 399.1392 & 399.4543 & 399.5309 \\ & & & 0.00034 & 0.00040 & 0.00028 & 0.000006 & 0.00006 & 0.00004 \\ LS & 36856 & 0.0408 & 0.461 & 0.623 & -0.658 & 399.1342 & 399.4464 & 399.5278 \\ & & & 0.00033 & 0.00040 & 0.00028 & 0.000006 & 0.000006 & 0.000004 \\ \hline \end{tabular}
\end{table}
Table 1: Numerical results of the _'Weary Herakles'_ data
Figure 3: Residuals after LS estimation
* [PERSON] and [PERSON], 2007. Overview of total least squares methods. Signal Processing, 87, 2283-2302.
* [PERSON], 2007. Total Least Squares (TLS) im Kontext der Ausgleichung nach kleinsten Quadraten am Beispiel der ausgleichenden Geraden. ZFV, 133, 141-148.
* [PERSON] and [PERSON], 1997. Total least squares fitting of two point sets in m-D. Proceedings of the 36th IEEE Conference on Decision and Control, vol. 5, pp. 5048-5053.
* [PERSON] and [PERSON], 2001. Efficient variants of the ICP algorithm. Proceedings of the 3rd International Conference on 3D Digital Imaging and Modeling (3DIM), June 2001, pp. 145-152.
|
isprs
|
CO-REGISTRATION OF 3D POINT CLOUDS BY USING AN ERRORS-IN-VARIABLES MODEL
|
U. Aydar, M. O. Altan, O. Akyılmaz, D. Akca
|
https://doi.org/10.5194/isprsarchives-xxxix-b5-151-2012
| 2,012
|
CC-BY
|
isprs/7b1ce45f_51b6_4ed2_b052_d6f328b94bf2.md
|
[PERSON] - D.I.S.T.A.R.T. - Facolta di Ingegneria - Universita degli Studi di Bologna - Italy
[PERSON] - D.I.S.T.A.R.T. - Facolta di Ingegneria - Universita degli Studi di Bologna - Italy
[PERSON] - Bibliotheque Nationale de France - France
ISPRS Commission V, Working Group 4
KEY WORDS: Photogrammetry, Architectural Information Systems, 3D - Objects Representation, Architectural objects
###### Abstract
The PAROS (Photogrammetrie Architecturale, Representation par Ordinateur et Outils de Synthese) project aims to combine architectural survey techniques with the instruments of data structuring and knowledge representation, with information management systems, and with tools for the representation of still or animated synthetic images.
Under a general agreement initiated by the Mission de la recherche et de la technologie, the Atelier de photogrammetrie de la Direction du Patrimoine - Ministere de la Culture et de la Francophonie, GAMSAU (Groupe d'etudes pour l'Application des Methodes Scientifiques a l'Architecture et a l'Urbanisme) and the Istituto di Topografia dell'Universita di Bologna have joined the PAROS research program. The program will be developed over three years, starting from 1994.
This report presents some aspects addressed during the analysis of the architectural information system, in particular those regarding the elaboration process of a preservation project in a proper computer language.
## Resume
The PAROS project (Photogrammetrie Architecturale, Representation par Ordinateur et Outils de Synthese) is a research program that aims to combine architectural survey techniques, photogrammetry in particular, with the instruments of data structuring and knowledge representation, with information management systems, and finally with tools for representation as still or animated synthetic images.
Within the framework of an agreement initiated by the Mission de la recherche et de la technologie, the Atelier de Photogrammetrie architecturale de la Direction du Patrimoine - Ministere de la Culture et de la Francophonie, GAMSAU (Groupe d'etudes pour l'Application des Methodes Scientifiques a l'Architecture et a l'Urbanisme) and the Istituto di Topografia, Geodesia e Geofisica Mineraria dell'Universita di Bologna have joined forces in the PAROS research program.
The program started in 1994 and is to be developed over three years.
The objective of developing a computer system for the control of conservation interventions responds to the need to rationalise the management processes of the architectural heritage.
This paper presents some themes arising from the analysis of the computer system, in particular the aspects relating to the process of elaborating a conservation project in a specifically computational language.
Figure 1: Villa Foscari "La Malcontenta" (Venice, Italy). Analytical restitution of the main facade
## 1 Introduction
The design of a computer system for the architectural heritage cannot ignore the analysis of the architectural "object" and of the "subjects" involved in a conservation project.
The architectural object must be known and defined in all its aspects. The cognitive domain schematically consists of: the study of the building's measurements, the historical-critical analysis, its structural characteristics, its constructive characteristics, and the analysis of its state of conservation.
The survey for the conservation project, that is, the selection of significant points as well as their metric value, can contribute to the elaboration of a diagnosis and hence of the therapy.
The survey also provides the analysis and diagnosis stage of the project with information on the degradation of materials, its location and its quantification.
The representation of the geometric survey of the building, now directly available in digital form thanks to current topographic and photogrammetric survey systems, from the simplest to the most sophisticated, becomes the "locating" base that ensures the organisation and positioning of all information.
At this stage it is necessary to relate the architectural object as a whole to its various elements, each endowed with its own three-dimensional properties.
Each graphic object can therefore be analysed either through the geometric characteristics that describe its shape, or through the typologies that analyse the system of possible relations with other objects.
Three-dimensionality offers various opportunities: at the representation level, for instance, synthetic images can be produced; at the level of the conservation project, monitoring phenomena can be processed and/or simulated; and at the level of the information system, it serves as a receptacle for information of various kinds referenced to the geometry of the architectural elements.
## 2 The acquisition of three-dimensional metric data
The main problem in formulating the survey lies more in the codification of the three-dimensional data themselves than in their acquisition.
**PROGETTO DI RESTITUZIONE DELLE FACCIATE** (_Facade restitution project_)

| CODICE DI FACCIATA (suddivisione orizzontale) | LIVELLO (suddivisione verticale) | MACRO-STRUTTURA | ELEMENTO ARCHITETTONICO |
|---|---|---|---|
| _FACADE CODE (horizontal subdivision)_ | _LEVEL (vertical subdivision)_ | _MACRO-STRUCTURE_ | _ARCHITECTURAL ELEMENT_ |
| 2 digits | 1 digit | 2 digits | 2 digits |

Table 1: Codification scheme for the restitution of the facades of the Villa "Malcontenta"
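The 2 + 1 + 2 + 2 digit scheme of Table 1 concatenates into a 7-digit code per restituted element; a minimal parser sketch (the field names are hypothetical, not taken from the PAROS system):

```python
def parse_facade_code(code: str) -> dict:
    """Split a 7-digit restitution code into its four fields, following the
    2 + 1 + 2 + 2 digit scheme of Table 1 (field names are illustrative)."""
    if len(code) != 7 or not code.isdigit():
        raise ValueError("expected a 7-digit numeric code")
    return {
        "facade": code[0:2],           # horizontal subdivision, 2 digits
        "level": code[2:3],            # vertical subdivision, 1 digit
        "macro_structure": code[3:5],  # 2 digits
        "element": code[5:7],          # architectural element, 2 digits
    }
```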
## 3. The decomposition of architecture according to an object-oriented formalism
The object-oriented approach allows the creation of a model for the semantic representation of objects that goes beyond their simple geometric description.
This functional schema of the system makes it possible not only to identify the geometry, but also to link information of various kinds both to the simple entities and to the structure to which they refer.
This method makes it possible to describe the built edifice as an "aggregation of knowledge" of elementary entities described through their own attributes (dimensions, topology, etc.), benefiting from the filiation mechanisms (factorisation of properties) directly derived from a representation structure based on specialisation and aggregation.
Knowledge representation with an object-oriented system offers formalisms and instruments suited to the problems posed.
The description of the objects of the analysed domain therefore passes through a stage of abstraction, followed by the elaboration of the model.
The class hierarchy coexists and interacts in PAROS from data acquisition onwards.
The classes represent generic knowledge (decomposition of the architectural corpus, composition rules, proportion ratios) as well as specific knowledge.
The choice of an object-oriented language imposes itself in PAROS both for image synthesis and for the management of the architectural work.
In short, it makes it possible to:
- provide tools for modelling complex universes of knowledge;
- handle the problems of architecture, Euclidean geometry, matrix computation, least squares, and visualisation;
- manage multiple platforms;
- benefit from the flexibility of a language better suited to the problem, allowing greater freedom in collecting information of diverse kinds and origins.
### Some notions of object-oriented analysis
Let us recall some fundamental notions of object-oriented analysis used for a project studying architectural objects.
The object is the elementary entity of the language, consisting of the association of a certain memory, composed in particular of fields called instance variables, and a set of operations called methods.
A class describes the characteristics common to all objects that represent the same kind of thing.
Instances are the individual objects described by a class.
Fundamental in object-oriented analysis are:
- Encapsulation: the object thus defined is an atomic entity, its variables not being shared by other objects. The outside world does not know the internal structure of the object, only its behaviour.
- Messages: requests are subordinated to an object because they are carried out by one of its operations. The message specifies which operation is to be performed; how it is carried out is decided by the object. The set of messages to which an object can respond is called its interface. It is the obligatory gateway for interacting with the object: every action on the object goes through this interface.
- Polymorphism: a given message may have an abstract meaning or, on the contrary, produce different effects on different objects; this is the first sense of polymorphism. Virtual functions associated with a superclass are intended to be redefined in its derived classes. Overloaded operators keep the lexical and syntactic properties of the predefined operators, but their definition can be modified by the programmer for a new class; this new definition is then automatically used when the operators are applied to members of the class in question.
- Inheritance, which divides into:
- single inheritance: a class can be defined by refining a pre-existing class; it is then called a subclass and inherits the variables and methods of the base class, to which it can add its own properties and redefine the inherited methods. A message sent to an object triggers a search for the corresponding method up the inheritance tree; the first method found (the most specialised) is retained.
- multiple inheritance: in some languages, and in particular in C++, a class can inherit from several other classes. Since there is then not an inheritance tree but an inheritance graph, method lookup strategies must be managed and ambiguities avoided.
In object-oriented programming the outermost entities are the objects; the set of actions is defined inside the objects as attributes.
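The notions recalled above (encapsulation, single inheritance, polymorphism) can be condensed into a small illustrative class hierarchy. Python is used here for brevity, although the project language mentioned is C++, and all class and attribute names are hypothetical:

```python
class ArchitecturalElement:
    """Base class: encapsulates geometry and descriptive attributes."""
    def __init__(self, code, dimensions):
        self._code = code              # encapsulated state, not shared
        self._dimensions = dimensions  # e.g. (width, height) in metres

    def area(self):
        w, h = self._dimensions
        return w * h

    def describe(self):                # part of the object's interface
        return f"element {self._code}: {self.area():.2f} m2"

class Window(ArchitecturalElement):
    def describe(self):                # polymorphic redefinition in a subclass
        return "window " + super().describe()

class Column(ArchitecturalElement):
    def __init__(self, code, dimensions, order):
        super().__init__(code, dimensions)
        self.order = order             # e.g. "ionic": a specialised attribute

    def describe(self):
        return f"{self.order} column " + super().describe()
```

Sending the same `describe` message to a `Window` or a `Column` triggers the most specialised method found up the inheritance tree, as described in the text.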
## 4. The computer system
The computer system for the conservation of buildings must take into consideration, on the one hand, the identification of the objects according to their definition, either from the properly architectural lexicon or according to computing rules derived from the chosen approach, and the identification of the structure for representing the complexity of the analysed domain, in our case classical architecture and its composition rules; on the other hand, the identification of the issues related to the documentary inventory and to interventions on the architectural heritage, in their influence both on individual objects and on more general structures; the identification of the attributes of each class and each structure as a further means of specifying both each element considered and the information to be attached to it; and, finally, the definition of the services the system should provide to satisfy the requirements of the various types of users.
Numerous computing tools exist today which, starting from a text, an image or a model, give access to the corresponding data: hypertexts, for example, in appearance simple computerised texts, allow words to be selected and specific explanations to be accessed; CAD programs offer the possibility of selecting a part of the graphic model and retrieving the information associated with it; analogous tools allow similar operations on photographic images; electronic spreadsheets likewise have functions for assembly into a database. What is still missing is a program that manages all these resources simultaneously.
In practice, one begins by making a first distinction between computerisable and non-computerisable objects.
An object is non-computerisable either for reasons of economy or because of the objective difficulty of apprehending the phenomenon in question. Computerisable objects are retained and made available within a database, texts, drawings, photographs, data already in digital form or digitised where necessary, and are accessible and directly usable. One of the particularities of the computerised databases that PAROS intends to build lies in the ability to access the data not only in passive mode (consultation) but also in active mode (analysis, integration): when users make a query they access not a representation of the object but the object itself, with the possibility of manipulating it and integrating it according to their own requirements.
This is possible because the objects are handled within the application that created them and that holds the instruments to control them.
Non-computerisable objects will remain gathered in their real physical collection, and the database will provide the information needed to locate them.
An experimental verification is under way on the Villa Foscari, known as "La Malcontenta", near Venice.
The building by [PERSON] represents an interesting case because a photogrammetric survey is available, as well as a corpus of historical and iconographic documentation and of notable restoration interventions.
The computerisation of the survey data is a first objective for the subsequent stages of assembling organised and cross-referenced archives.
The heterogeneous material (photographic and iconographic images, drawings from analytical photogrammetric restitution, descriptive and relational archive texts) can be computerised through various procedures: digitisation or rasterisation for the existing drawings and surveys, rasterisation for photographs and prints, alphanumeric storage for texts, etc.
As described above, the photogrammetric survey serves as the single reference for all documents, with codes enabling the building to be read from its entirety down to the smallest details.
Work is continuing to verify, on the one hand, the behaviour of the system under research queries (management of restoration intervention projects, knowledge of the architectural heritage) and, on the other hand, its extension to other architectural buildings, by analysing the methodological path initiated in the case studied.
Figure 2: Villa Foscari "La Malcontenta" (Venice, Italy). Analytical restitution of the facade facing the canal
Source: "Matra espace: Un systeme intelligent autonome pour eureca", ISPRS, 1987. https://doi.org/10.1016/0045-8732(87)90037-4. License: CC-BY.
# Spatiotemporal Analysis of Indian Mega Cities
[PERSON]
[PERSON]
Julius-Maximilians-University Wurzburg, Geographic Institute, Am Hubland, 97074 Wurzburg
[PERSON]
German Remote Sensing Data Center (DFD), German Aerospace Center (DLR), D-82234 Wessling; -
[EMAIL_ADDRESS]
[PERSON]
German Remote Sensing Data Center (DFD), German Aerospace Center (DLR), D-82234 Wessling; -
[EMAIL_ADDRESS]
[PERSON]
German Remote Sensing Data Center (DFD), German Aerospace Center (DLR), D-82234 Wessling; -
[EMAIL_ADDRESS]
[PERSON]
German Remote Sensing Data Center (DFD), German Aerospace Center (DLR), D-82234 Wessling; -
[EMAIL_ADDRESS]
###### Abstract
Urbanization is arguably the most dramatic form of highly irreversible land transformation. While urbanization is a worldwide phenomenon, it is exceptionally dynamic in India, where unprecedented urban growth rates have occurred over the last 30 years. In this uncontrolled, explosive situation, city planning lacks the data and information needed to measure, monitor and understand urban sprawl processes. The analysis of such changes has become an important use of multitemporal remote sensing data. Using a time series of Landsat data to classify the urban footprints since the 1970s enables the detection of temporal and spatial urban sprawl, redensification and urban development in the explosively growing large urban agglomerations of the mega cities Mumbai, Delhi and Kolkata in India. Combining gradient analysis with landscape metrics, the spatiotemporal patterns of urbanization are quantified. Spatial parameters are the absolute areal growth, urbanization rates, built-up densities, landscape shape index, edge density, patch density, and largest patch index. The study aims to detect analogies and differences in spatial growth across Indian mega cities, that is, cities in the same cultural area at about the same development stage regarding absolute population. The results paint a characteristic picture of spatial pattern gradients and landscape metrics of the three Indian mega cities.
KEY WORDS: Urban Remote Sensing, Classification, Change Detection, Gradient Analysis, Spatial Metrics, Indian Megacities, Landsat
## 1 Introduction
Over the last 50 years, the world has faced dramatic growth of its urban population. The number of so-called mega cities increased in the period from 1975 until today from 4 to 22, mostly in less developed regions ([PERSON], 2005). Especially Indian mega cities are among the most dynamic regions on the planet. During the last 50 years the population of India (today 1.2 billion) has grown two and a half times, but the urban population has grown nearly five times. The number of Indian mega cities will double from the current three (Mumbai, Delhi and Kolkata) to six by the year 2021 (new additions will be Bangalore, Chennai and Hyderabad), when India will have the largest concentration of mega cities in the world ([PERSON], 2001).
Intra-city migration from smaller to bigger cities continues alongside migration from rural to urban areas, in addition to enormous natural population growth. This explosive urbanization, resulting in unplanned and uncontrolled growth of large cities, has had dramatic negative effects on urban dwellers and their environment. Cities face serious shortages of power, water, sewerage, developed land, housing, transportation and communication, combined with dramatic pollution, poor public health and educational standards, unemployment and poverty. Thus, understanding and monitoring past and current urbanization processes is the basis for future predictions and preparedness, and hence for sustainable urban planning. This study focuses on the spatiotemporal urban growth of the current Indian mega cities Mumbai, Delhi and Kolkata, taking the urban agglomerations at the furthest stage of urban development as a basis for analysing trends to be expected in incipient mega cities in India.
For many decades, in some cases centuries, cities have been spreading ([PERSON] et al., 1998). Research in the description, mapping, characterization, measuring, understanding and explanation of form, morphology, and evolution of urban environments has a long tradition in geographic research and planning. The classic theories of urban morphology define urban pattern as concentric rings with different land use types ([PERSON], 1925), as sectors, where the transportation network modifies the form of the concentric zone pattern ([PERSON], 1939), and the multiple nuclei theory having a patchy urban form with multiple centers of specialized land use ([PERSON] and [PERSON], 1945). Since the 1960s various theories were used to characterize urban form: for example fractals ([PERSON] and [PERSON] 1989), cellular automata ([PERSON], 1979), dissipative structure theory ([PERSON] & [PERSON], 1979), or landscape metrics ([PERSON] et al, 1988).
In general, the application, performance and outputs analysing and comparing the development of urban form of various cities depend strongly on the data available for parameterization ([PERSON] and [PERSON], 2000). Remote sensing techniques have already shown their value in mapping urban areas at various scales, and as data sources for the analysis of urban land cover change ([PERSON] et al, 2001; [PERSON] and [PERSON] 2001; [PERSON] et al, 2002). Recent research has used remotely sensed images to quantitatively describe the spatial structure of urban environments and characterize patterns of urban morphology. Critical in the description, analysis, and modelling of urban form and its changes are spatial metrics ([PERSON] et al., 2003). These indices can be used to objectively quantify the structure and pattern of an urban environment. Most of the studies on urban landscape metrics focus on a single city ([PERSON] and [PERSON], 2002; [PERSON] et al., 2002, 2003; [PERSON] et al, 2004). However, there are few studies like these in developing countries which compare cities at about the same development stage in the same cultural area ([PERSON] et al., 2005).
In this study a spatiotemporal analysis using a time series of Landsat data aims at detecting the urban footprints and their changes in the three current Indian mega cities, Mumbai, Delhi and Kolkata, since the 1970s. The land-cover classification is based on an object-oriented hierarchical classification approach ([PERSON], 2008). Using parameters like urban growth rates, built-up densities, the spatial form, the direction of growth, or landscape metrics such as the shape index or patch density enables the identification of similarities and dissimilarities in the urban characteristics of the Indian mega cities. We aimed to address several specific questions on their spatiotemporal development:
* What are spatial and temporal patterns of urban change?
* Is there analogy of patterns in shape, size, growth, gradients and metrics in Indian mega cities, thus in cities at about the same development stage in the same cultural area?
* Does the spatial configuration of Indian mega cities converge toward a standard form?
The idea behind this approach is to learn from the characteristics from current mega cities, to understand the emerging growth pattern to support planning processes and formulate policies to guide or redirect spatial growth in incipient Indian mega cities, like Hyderabad, Chennai or Bangalore.
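Two of the landscape metrics mentioned above, the patch count (the input to patch density) and the largest patch index, can be computed from a binary urban mask with a simple connected-component sweep; a minimal sketch assuming 4-connectivity, not the software actually used in the study:

```python
import numpy as np

def patch_metrics(urban):
    """For a binary urban mask (4-connectivity): number of patches and the
    largest patch index (largest patch area as a percentage of urban area)."""
    urban = np.asarray(urban, dtype=bool)
    visited = np.zeros_like(urban)
    sizes = []
    rows, cols = urban.shape
    for r in range(rows):
        for c in range(cols):
            if urban[r, c] and not visited[r, c]:
                stack, size = [(r, c)], 0      # flood-fill one patch
                visited[r, c] = True
                while stack:
                    i, j = stack.pop()
                    size += 1
                    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ni, nj = i + di, j + dj
                        if 0 <= ni < rows and 0 <= nj < cols \
                           and urban[ni, nj] and not visited[ni, nj]:
                            visited[ni, nj] = True
                            stack.append((ni, nj))
                sizes.append(size)
    total = sum(sizes)
    lpi = 100.0 * max(sizes) / total if total else 0.0
    return len(sizes), lpi
```

Dividing the patch count by the landscape area would give the patch density as used in the metrics literature.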
## 2 Study Areas and Data
Our study areas are the three current mega cities in India, Mumbai (Bombay), Delhi and Kolkata (Calcutta), which are spatially distributed across the large subcontinent. Mumbai is located on the west coast, on seven now-merged islands, in the state of Maharashtra. Delhi, located in northern India on the flood plains of the river Yamuna, has the status of National Capital Territory. Kolkata, the capital of the Indian state of West Bengal, is located in eastern India, in the Ganges Delta, in flat surroundings on the Hooghly River (Figure 1).
According to the United Nations (UN, 2005), in the year 2005 approximately 18.2 million people were living in Mumbai, 15 million in Delhi and 14.3 million in India's third mega city, Kolkata. Mumbai (3.1%) and Delhi (4.1%) show some of the highest population growth rates among mega cities worldwide, while Kolkata's pace has slowed to 1.7% (UN, 2005). Figure 2 shows the dramatic population development of Mumbai, Delhi and Kolkata since 1970 and a prognosis until 2015. The mega cities have more or less quadrupled their population and are expected to grow even faster, intensifying the urban crisis of the largest Indian urban agglomerations.
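Growth rates such as those quoted above are average annual (compound) rates; given two census figures, such a rate can be computed as follows (the numbers in the test are illustrative, not census values):

```python
def annual_growth_rate(pop_start, pop_end, years):
    """Average annual (compound) population growth rate in percent."""
    return ((pop_end / pop_start) ** (1.0 / years) - 1.0) * 100.0
```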
For a spatiotemporal analysis of the large urban areas of the Indian mega cities remote sensing data proved to be an independent and cost-effective data basis. The choice of data predominantly depends on technical aspects. These are represented by the following determinants:
* Extent of the test sites
* Number of aimed land cover classes and their spatial differentiation potential
* Length of study period
* Requirements for accuracy of thematic classification ([PERSON], 2001).
The Landsat program represents a series of earth observation satellites that have been continuously available since 1972, and therefore allows the analysis of extended time series. It started with the Multi-Spectral Scanner (MSS), featuring a geometric resolution of 79 meters and four spectral bands (green, red, two near-infrared bands). Since 1982 the Thematic Mapper (TM) has operated with 30 meter geometric resolution and seven spectral bands. Since 1999 the Enhanced Thematic Mapper (ETM) has operated with an additional panchromatic band and 15 meter geometric resolution. The continuous, constant spectral bandwidths guarantee the comparability of the different sensors. With its field of view of 185 km the satellite is able to survey the large metropolitan areas of the study sites. Measurements of areal coverage and of spatial distribution are both needed to describe the morphology of an urban area adequately ([PERSON] et al., 1998). The chosen level of description with Landsat data is not flooded with microscopic detail, but incorporates specific features of the urban system. In return, the requirements for the differentiation of classes are limited to the classification of built-up and non-built-up areas. The accuracy of the classifications is also limited by the coarse geometric resolution and the resulting many "mixed pixels" containing information on various thematic classes. This limited differentiation and accuracy potential nevertheless enables monitoring and detection of the correct dimension of spatial and temporal changes, of urban sprawl and of the spatial direction of urban development. For the analysis, Landsat data were available for Mumbai for the years 1973, 1991 and 2001, for Kolkata for the years 1977, 1990 and 2000, and for Delhi for the years 1977 and 1999. Figure 4 shows as one example false colour Landsat imagery from the coastal region of the large urbanized areas of mega city Mumbai in 2001.
Figure 1: Geographic location of Indian mega cities
Figure 2: Population growth in Indian mega cities since the 1970s
## 3 Change Detection Using Remote Sensing Data and Methods
A land cover classification extracting the classes built-up areas, bare soil, vegetation, and water was performed separately on all images. The main goal is to identify the urban built-up areas to measure the changes of the urban extension over the time interval. For that purpose the classification methodology is based on an object-oriented hierarchical approach ([PERSON] et al., 2007; [PERSON] 2008; [PERSON], 2007). The object-oriented methodology was used to combine spectral features with shape, neighbourhood and texture features.
Due to the large amount of mixed spectral information at such a coarse ground resolution, the accuracy is limited. But for the requirement of mapping the city footprint, its spatial dimension and the spatial developments over the years, the Landsat images provide enough information for an assessment of urban change. An accuracy assessment has been performed for all classified scenes (Table 1).
Post-classification comparison was found to be the most accurate procedure and presented the advantage of indicating the nature of the changes ([PERSON], 1999). A comparative analysis of independently performed land cover classifications for the available dates was therefore implemented to monitor and analyse the land cover changes in the metropolitan areas of Mumbai, Delhi and Kolkata. Pixelwise change detection was implemented, comparing the land cover classes of the available years individually. Figure 4 shows the result of the change detection for all three Indian mega cities, displaying the urban footprints and their spatiotemporal evolution since the 1970s.
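Post-classification comparison of two co-registered classified rasters reduces to a cross-tabulation of class labels per pixel. A minimal sketch (the two-class coding below is illustrative, not the paper's):

```python
import numpy as np

def change_matrix(map_t1, map_t2, n_classes):
    """Cross-tabulate two co-registered classified rasters.

    Entry [i, j] counts pixels of class i at time 1 that are
    class j at time 2; the diagonal holds unchanged pixels.
    """
    assert map_t1.shape == map_t2.shape
    idx = map_t1.ravel() * n_classes + map_t2.ravel()
    counts = np.bincount(idx, minlength=n_classes * n_classes)
    return counts.reshape(n_classes, n_classes)

# Toy example: 0 = non-built-up, 1 = built-up
t1 = np.array([[0, 0], [1, 1]])
t2 = np.array([[0, 1], [1, 1]])
m = change_matrix(t1, t2, 2)
# m[0, 1] is the number of pixels newly urbanized between the two dates
```

The off-diagonal entries directly indicate the nature of each change, which is the advantage of post-classification comparison noted in the text.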
The result of the change detection shows three very different urban footprints of the Indian mega cities. While the urban footprint of Mumbai is determined by the coastal and hilly orography, the urban footprints of Delhi and Kolkata are not subject to orographic restrictions.
The peninsula of Mumbai forces the urbanized areas on available land, with an axial growth in the outskirts caused by transportation networks and hilly barriers. The polycentric structure and development of satellite cities in the 1970s steadily increased due to land shortage in the urban center and dramatic population pressure. The result is a complex urban footprint, spatially polycentric with axial growth lines, a large urban core and a dispersed urban-rural fringe. The urban footprint of Delhi, only slightly influenced by orography, results in a classic concentric urban ring-shaped growth with axial growth sectors caused by transportation networks. The polycentric structure of the 1970s shows coalescence between the satellite cities and the urban core today. Growth is predominantly laminar and clustered, with dispersion solely in the peripheral catchment area of Delhi. Kolkata shows an oval urban footprint along the Hooghly River not influenced by orographic barriers. The monocentric spatial structure shows oval-shaped and laminar growth with little dispersion.
| | Landsat MSS | Landsat TM | Landsat ETM |
| --- | --- | --- | --- |
| **Mumbai** | 87.0 % | 90.4 % | 90.8 % |
| **Delhi** | 89.4 % | - | 91.8 % |
| **Kolkata** | 90.6 % | 90.8 % | 91.6 % |

Table 1: Accuracy assessment of the classification of Landsat data
Figure 3: False-Colour (Bands 1,2,4) Landsat imagery from the metropolitan area of Mumbai
## 4 Spatiotemporal Analysis of the Urban Patterns of Indian Mega Cities
Urban structure is very much scale-dependent. This study uses Landsat data for large-area analysis to survey urban growth and its spatiotemporal form based on built-up and non-built-up areas. For a highly detailed structural analysis of the heterogeneous inner structures of urban morphology, satellite data with higher geometric resolution (e.g. Ikonos or Quickbird) would be needed, although their small swath widths limit area-wide analysis of mega cities.
Urbanization may be linked with details related to topography, transportation, land use, social structure and economic type, but is generally related to demography and economy in a city ([PERSON] et al., 2002). In the following, urbanization is analysed by spatial urban form and its changes over time. We chose parameters like areal growth, urbanization rates, or built-up densities for a spatiotemporal gradient analysis of urbanized areas. In addition we chose landscape metrics (or spatial metrics) like the SHAPE index, patch density and largest patch index as quantitative indices to describe the structures and patterns of the mega city landscapes. In general, spatial metrics can be defined as quantitative and aggregate measurements derived from digital analysis of thematic-categorical maps showing spatial heterogeneity at a specific scale and resolution ([PERSON] et al., 2002; [PERSON] et al., 2003). The main idea is to learn the mechanisms of the complex process of spatial urban growth by finding analogies and differences between the cities' past development.
### Areal growth and urbanization rates
The physical process of urban land-cover change is most commonly described as either a change in absolute area of urban space (a measure of extent) or the pace at which non-urban land is converted to urban uses (a measure of rate) ([PERSON] et al. 2005).
The absolute growth of urbanized areas shows Mumbai and Kolkata at about the same gain over time. Significantly differing is Delhi, which was at about the same level as Mumbai and Kolkata in the 1970s. The capital city of India shows explosive spatial growth, with today an almost double-sized urbanized area in comparison with the two other mega cities. Figure 5 displays the growth gradient, resulting in just under 400 km² of urbanized area in Mumbai and Kolkata and approximately 750 km² in Delhi. The former two more or less tripled their urbanized areas since the beginning of the 1970s; in the same period of time, Delhi's urbanized area grew to 4-5 times its size.
Figure 6 shows as one example the urbanization rates and their spatial distribution in Mumbai. In the time period from 1973 until 1991, redensification processes are detected at the city center, while immense urban sprawl with rates of up to 100 % is detected along the axial transport lines as well as in the subcenters around the urban core. From 1991 until 2001, redensification processes almost stop in the urban center, while urban sprawl takes place at the subcenters and satellite cities as well as along the axial transportation lines. Thus, an increasing urbanization gradient is detected with distance to the urban core, showing a relocation of the main urban growth to the edges of the city. A very similar trend is detected in both other mega cities, but due to the absence of orographic barriers a monocentric ring-shaped growth evolved.
Figure 4: Change detection of urbanized areas in Mumbai, Delhi and Kolkata since the 1970s
Figure 5: Areal growth of urbanized areas of the Indian mega cities Mumbai, Delhi and Kolkata

Using artificial concentric rings, urbanization rates with respect to their location are calculated for various spatial zones. The zoning aims at a standardized analysis of spatial gradients for the various urban patterns of the study sites. The center is defined by a 5 km circle (zone 1), while zone 2 entails a ring at 5-10 km distance, zone 3 at 10-20 km, zone 4 at 20-30 km, zone 5 at 30-40 km and eventually zone 6 at 40-50 km distance from the particular city center. Figure 7 shows the spatial gradients of urbanization rates for all three mega cities.
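The concentric-ring zoning and per-zone rates can be sketched as follows; treating the urbanization rate as the percentage change of built-up pixels per zone between two dates is an assumption, since the paper does not spell out the formula:

```python
import numpy as np

# Ring boundaries in km from the city centre, as defined in the text
BOUNDS_KM = [0, 5, 10, 20, 30, 40, 50]

def zone_map(shape, center_rc, pixel_km):
    """Assign each pixel a ring zone 1..6 (0 = beyond 50 km)."""
    rows, cols = np.indices(shape)
    dist_km = np.hypot(rows - center_rc[0], cols - center_rc[1]) * pixel_km
    zones = np.digitize(dist_km, BOUNDS_KM)  # 1..6 inside, 7 beyond the last ring
    zones[zones > 6] = 0
    return zones

def urbanization_rate(built_t1, built_t2, zones):
    """Percent growth of built-up pixels per ring zone (assumed definition)."""
    rates = {}
    for z in range(1, 7):
        a1 = np.count_nonzero(built_t1 & (zones == z))
        a2 = np.count_nonzero(built_t2 & (zones == z))
        rates[z] = 100.0 * (a2 - a1) / a1 if a1 else float("nan")
    return rates
```

The same `zone_map` can be reused for the built-up density gradients discussed in the next subsection, which is the point of the standardized zoning.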
The gradients of the urbanization curves are basically similar for all three mega cities, with relatively low urbanization rates in the center (zones 1 & 2), an immense increase towards the urban fringe, and eventually a decrease towards the peripheral areas. While in Delhi and Kolkata urbanization takes place mainly in zones 3 and 4, a result of their classic ring-shaped growth, the most dramatic urbanization in Mumbai is relocated to zone 5, due to shortage of space on the peninsula. With no orographic barriers, Delhi and Kolkata enable concentric sprawl, reflected in the climax of the urbanization curves at the current urban fringe in zone 4. Depending on the built-up densities in those zones, urbanization rates will stay high or urbanization will be relocated to open spaces and rural areas of the more peripheral zones. In Mumbai urbanization in zones 1-4 is much lower due to shortage of space, but results in explosive rates in the more peripheral zones 5 and 6, where the geographic location does not limit urbanization processes.
### Built-up densities
Built-up density is a measure to characterize spatial urban pattern and structure. Densities vary substantially from city to city and from the urban center to peripheral areas. Using the same artificial concentric rings, built-up density is calculated for zones 1-4. Without consideration of the water body, the ratio between the area classified as built-up from the Landsat data and the total area in the ring gives the built-up density of the particular zone. Figure 8 shows the temporal and spatial distribution and development of built-up densities of the three Indian mega cities.
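A sketch of the per-zone density computation described above; the class codes and the restriction to zones 1-4 follow the text, but the specific label values are illustrative assumptions:

```python
import numpy as np

def built_up_density(classes, zones, built_id=1, water_id=3):
    """Built-up density [%] per ring zone, with water excluded from the
    reference area as described in the text. Class codes are assumed:
    built_id marks built-up pixels, water_id marks water pixels."""
    dens = {}
    for z in range(1, 5):  # zones 1-4, as in the study
        in_zone = zones == z
        land = in_zone & (classes != water_id)       # zone area without water
        built = in_zone & (classes == built_id)
        n_land = np.count_nonzero(land)
        dens[z] = 100.0 * np.count_nonzero(built) / n_land if n_land else float("nan")
    return dens
```

Excluding the water body matters especially for coastal Mumbai, where large parts of the rings fall into the sea.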
Figure 6: Spatial distribution of urbanization rates [%] in Mumbai

Figure 7: Spatiotemporal urbanization rates [%] since the 1970s

Figure 8: Cumulative spatiotemporal analysis of built-up densities
Mumbai and Kolkata show the highest built-up densities in zone 1 (center) with little redensification since the 1970s. Coming from already high built-up densities with little available land or open space, a saturation effect at 80 % becomes apparent. With decreasing growth rates, zone 2 shows very similar effects in both mega cities at around 55 %. Zones 3 and 4 clearly show a decreasing built-up density gradient converging towards the urban-rural fringe. Indeed, the complex urban footprints still show locally high built-up densities at this distance, as for example to the north of oval-shaped Kolkata.
The situation in Delhi is slightly different, with the highest built-up density of about 63 % in zone 2. The difference in zone 1 is caused by the double structure of Old Delhi and New Delhi, spatially next to each other. New Delhi, a planned and structured center, lowers the overall figure relative to the typical Indian structure of Old Delhi, which reaches around 90 % built-up density in zone 1. The built-up density of zones 2-4 is higher than in the two other study sites due to the classic concentric urban growth, but the urban-rural gradient decreases equivalently to Mumbai and Kolkata.
With the exception of central Delhi, all three Indian mega cities clearly show a decreasing and similar built-up density gradient with distance to the main urban center, although their urban footprints differ significantly.
### Landscape shape index (LSI)
In the following, the landscape metrics are calculated on the complete urbanized areas, without specifying spatial urban zones. The Landscape Shape Index (LSI) provides a standardized measure of the perimeter length of all patches of one land cover type (here: urbanized areas) in the landscape ([PERSON] et al., 2002; [PERSON] et al., 2005). If the urbanized area is composed of simple geometric rectangles, the LSI will be small, approaching 1.0. If the landscape contains dispersed patches with complex and convoluted shapes, the LSI will be large. Table 2 shows the spatiotemporal results of the LSI calculation for the whole study sites.
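The text describes the LSI only qualitatively; a common raster formulation (the FRAGSTATS one, an assumption here) is LSI = 0.25 · E / √A, with E the total edge length in pixel sides and A the class area in pixels, so that a single square patch yields 1.0:

```python
import numpy as np

def lsi(built):
    """Landscape Shape Index for a binary built-up raster, using the
    FRAGSTATS-style raster formulation LSI = 0.25 * E / sqrt(A)."""
    b = np.pad(built.astype(int), 1)  # background border so outer edges count
    # horizontal + vertical class transitions = edge segments in pixel sides
    edges = np.count_nonzero(np.diff(b, axis=0)) + np.count_nonzero(np.diff(b, axis=1))
    area = built.sum()
    return 0.25 * edges / np.sqrt(area)

# A 4x4 solid square: perimeter 16 sides, area 16 pixels -> LSI = 0.25*16/4 = 1.0
```

Elongated or fragmented urban footprints accumulate edge length faster than area, which is why the index grows with sprawl complexity.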
The urban footprints of all three Indian mega cities differ significantly, but the temporal evolution of the LSI is remarkably parallel. The rapid urban sprawl apparently involves a dramatic increase in urban complexity. Even the polycentric urban growth of Mumbai does not show divergent effects in comparison with the monocentric spatial forms of growth in Delhi and Kolkata.
### Patch density (PD)
The patch density (PD), i.e. the number of urban patches, is a measure of discrete urban areas in the landscape and is expected to increase during periods of rapid urban nuclei development, but may decrease if urban areas expand and merge into continuous urban fabric ([PERSON] et al., 2002; [PERSON] et al., 2005).
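Counting discrete urban patches reduces to labelling connected components of the built-up mask. The 8-connectivity choice and the per-km² normalization below are assumptions, since the paper does not state them (FRAGSTATS, for instance, reports PD per 100 ha):

```python
import numpy as np
from scipy import ndimage

def patch_density(built, landscape_km2):
    """Number of discrete urban patches per km^2 of landscape area."""
    structure = np.ones((3, 3), dtype=int)  # 8-connectivity
    _, n_patches = ndimage.label(built, structure=structure)
    return n_patches / landscape_km2

built = np.array([[1, 0, 0],
                  [0, 0, 1],
                  [0, 1, 1]])
# Two 8-connected patches in this toy mask
```

Coalescence of satellite cities with the core, as described for Mumbai after 1990, shows up as a drop in the patch count even while the built-up area keeps growing.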
The three mega cities show significant differences in their patch density. While Mumbai and Kolkata had a similar PD in the 1970s, their PD development differs from there. Kolkata's growth type shows a highly dispersed urban fabric, while Mumbai's PD increased more slowly, with even a decreasing trend after a climax around 1990. These differing trends emphasize coalescence and redensification even in the outskirts of the urban core as well as in the satellite cities for Mumbai, while in Kolkata the ring-shaped growth takes place with punctual, dispersed patches. In contrast to Mumbai and Kolkata, the PD of Delhi stays constantly at a low level, highlighting a laminar coalescence and a laminar urban footprint.
### Largest Patch Index (LPI)
The Largest Patch Index (LPI) gives the proportion of the total area occupied by the largest patch ([PERSON] et al., 2002). It is a measure that represents the separation of the urban landscape into smaller individual patches versus a dominant urban core. Table 4 shows the temporal characteristics of the LPI in Mumbai, Delhi and Kolkata.
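A sketch of the LPI, assuming the FRAGSTATS-style definition (largest patch area over total landscape area, in percent); the exact denominator used in the paper is not stated:

```python
import numpy as np
from scipy import ndimage

def largest_patch_index(built):
    """LPI [%]: area of the largest urban patch relative to the total
    landscape area (assumed FRAGSTATS-style definition)."""
    labels, n = ndimage.label(built, structure=np.ones((3, 3), dtype=int))
    if n == 0:
        return 0.0
    patch_sizes = np.bincount(labels.ravel())[1:]  # drop background label 0
    return 100.0 * patch_sizes.max() / labels.size
```

As satellite patches merge into the core, the largest patch absorbs them and the index rises, which matches the increasing LPI gradients reported for all three cities.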
The increase in all three Indian mega cities represents the spatial growth of the urban core and the increasing coalescence of individual urban patches with the central urban area. Delhi's laminar growth type shows a significantly high increase, while Mumbai and Kolkata once more show a parallel evolution of the landscape metric. All three mega cities show an increasing LPI gradient, highlighting redensification and coalescence as an analogous urbanization process at all urban cores.
## 5 Results and Conclusions
The study has demonstrated that urbanization and its spatiotemporal form, pattern and structure can be quantified and compared across cities using a combination of gradient analysis and spatial metrics. Landsat data proved to be an independent, area-wide, and with respect to the limited geometric resolution, an adequate data source for the analysis of fast changing and large areas of Indian mega cities. The results address the three questions we defined earlier in the introduction. 1) What are spatial and temporal patterns of urban change? 2) Is there analogy of patterns in shape, size, growth, gradients and metrics in Indian megacities, thus in cities at about the same development stage in the same cultural area? 3) Does the spatial configuration of Indian mega cities converge toward a standard form?
| LPI | ~1975 | ~1990 | ~2000 |
| --- | --- | --- | --- |
| **Mumbai** | 7.07 | 16.51 | 22.00 |
| **Delhi** | 7.61 | - | 22.62 |
| **Kolkata** | 7.70 | 14.30 | 21.12 |

Table 4: Spatiotemporal results of the LPI
| Patch density | ~1975 | ~1990 | ~2000 |
| --- | --- | --- | --- |
| **Mumbai** | 9.74 | 19.09 | 15.01 |
| **Delhi** | 3.89 | - | 4.12 |
| **Kolkata** | 10.78 | 27.25 | 44.99 |

Table 3: Spatiotemporal results of the PD

The study shows that the spatiotemporal patterns of current Indian mega city growth are reflected in decreasing redensification processes and a saturation effect for built-up densities around 80 % in the centers. It becomes apparent that the decreasing built-up density gradient from center to urban fringe comes along with increasing urbanization rates or relocation of urbanization to satellite cities. Independent of a city's footprint, explosive urban growth increases the spatial complexity.
Urban growth in India may take various spatial forms; however, many parameters in Mumbai, Delhi and Kolkata showed similar results. Especially Mumbai and Kolkata emerged as a very similar growth type, with similar areal growth, corresponding spatiotemporal urbanization and built-up density gradients, identical spatial complexity, as well as a similar ratio of the urban core to dispersed patches. The two cities differ only in the patch density, showing highly dispersed growth in Kolkata compared to Mumbai. Delhi differs through an enormous areal growth and a coalesced urban center, with laminar growth resulting in a dominant urban core. Still, its built-up density and urbanization gradients correspond to Mumbai and Kolkata, as does the increasing complexity.
Due to different urban orographic conditions in combination with socio-economic and political impacts, Indian mega cities do not converge toward a standard form. Contrasts include polycentric versus monocentric spatial growth, absolute areal growth, and the patch density. Nevertheless, aspects of spatial urban growth proceeded very similarly.
The time series of gradient analysis and landscape metrics is important for describing, understanding and monitoring the spatial configuration of urban growth. A comparative analysis is crucial for urban growth trajectories across cities. By measuring the development stages of the three Indian mega cities, conclusions about incipient mega cities in the same cultural area, such as Hyderabad, Bangalore or Chennai, may support planning, future modelling, and thus decision-making for sustainable and energy-efficient urban futures.
## References
* [PERSON] and [PERSON] (1979): A dynamic model of urban growth: II. Journal Social Biol. Struct., 2, pp. 269-278.
* [PERSON], [PERSON] and [PERSON] (1998): Urban Spatial Structure. Journal of Economic Literature, 36(3), pp. 1426-1464.
* [PERSON] and [PERSON] (2001): Predicting temporal patterns in urban development from remote imagery. In: [PERSON], [PERSON] and [PERSON] (eds.), Remote Sensing and urban analysis, pp. 185-204. Taylor and Francis, London.
* [PERSON] (2007): Raum-zeitliche Analyse indischer Megastädte mit Landsat-Daten. Bachelor thesis, Institute for Geography, Friedrich-Schiller-University Jena, p. 82.
* [PERSON] (1925): The growth of the city. An introduction to a research project. In: [PERSON], [PERSON] and [PERSON] (eds.), The City. University of Chicago Press, Chicago, pp. 47-62.
* [PERSON], [PERSON] and [PERSON] (2001): Remote Sensing and urban analysis. Taylor and Francis, London.
* [PERSON], [PERSON] and [PERSON] (2002): The use of remote sensing and landscape metrics to describe structures and changes in urban land uses. Environment and Planning A, 34, pp. 1443-1458.
* [PERSON], [PERSON] and [PERSON] (2003): The spatiotemporal form of urban growth: measurement, analysis and modeling. Remote Sensing of Environment, 86, pp. 286-302.
* [PERSON] (1939): The structure and growth of residential neighborhoods in American Cities. Federal Housing Administration, Washington DC, USA.
* [PERSON] and [PERSON] (2000): On the measurement and generalization of urban form. Environment and Planning A, 32, pp. 473-488.
* [PERSON], [PERSON] and [PERSON] (2003): Simulating spatial urban expansion based on physical process. Landscape and Urban Planning, 64, pp. 67-76.
* [PERSON] and [PERSON] (2002): A gradient analysis of urban landscape pattern: a case study from the Phoenix metropolitan region, Arizona, USA. Landscape Ecology, 17, pp. 327-339. Kluwer Academic Publishers.
* [PERSON] (1999): Monitoring land-cover changes: a comparison of change detection techniques. International Journal of Remote Sensing, 20(1), pp. 139-152.
* [PERSON], [PERSON], [PERSON] and [PERSON] (2002): FRAGSTATS: spatial pattern analysis program for categorical maps. Computer software produced by the authors at the University of Massachusetts, Amherst.
* Megarisiken. Trends und Herausforderungen für Versicherung und Risikomanagement. www.munichre.com/publications/302-04270_de.pdf
* [PERSON] et al. (1988): Indices of landscape pattern. Landscape Ecology, 1, pp. 153-162.
* [PERSON] (2001): Monitoring der Verstädterung im Grossraum Istanbul mit den Methoden der Fernerkundung und der Versuch einer räumlich-statistischen Modellierung. PhD Thesis, Göttingen.
* [PERSON] and [PERSON] (1998): Estimation of mega-city growth. Applied Geography, 18(1), pp. 69-82.
* [PERSON], [PERSON] and [PERSON] (2005): Quantifying spatiotemporal patterns of urban land-use change in four cities of China with a time series of landscape metrics. Landscape Ecology, 20, pp. 871-888.
* [PERSON] et al. (2007): A multi-scale urban analysis of the Hyderabad Metropolitan area using remote sensing and GIS. In: Urban Remote Sensing Joint Event, Paris, France, p. 6.
* [PERSON] (2008): Vulnerabilitätsabschätzung der erdbebengefährdeten Megacity Istanbul mit Methoden der Fernerkundung. PhD Thesis, University of Würzburg, p. 174.
* United Nations (2005): World Urbanization Prospects, The 2005 Revision. New York.
* [PERSON], [PERSON], [PERSON], [PERSON] and [PERSON] (2004): A GIS-based gradient analysis of urban landscape pattern of Shanghai metropolitan area, China. Landscape and Urban Planning, 69, pp. 1-16.
# A Critical Review of Automated Photogrammetric Processing of Large Datasets
[PERSON]¹, [PERSON]¹, [PERSON]¹, [PERSON]¹

¹ 3D Optical Metrology (3DOM) unit, Bruno Kessler Foundation (FBK), Trento, Italy - ([PERSON], [PERSON], [PERSON], [PERSON])@fbk.eu - http://3dom.fbk.eu
###### Abstract
The paper reports some comparisons between commercial software packages able to automatically process image datasets for 3D reconstruction purposes. The main aspects investigated in the work are the capability to correctly orient large sets of images of complex environments, the metric quality of the results, replicability and redundancy. Different datasets are employed, each one featuring a diverse number of images, GSDs at cm and mm resolutions, and ground truth information to perform statistical analyses of the 3D results. A summary of (photogrammetric) terms is also provided, in order to establish rigorous terms of reference for comparisons and critical analyses.
## 1 Introduction
The availability of fully automated photogrammetric software allows just about anyone with a camera, even a low-quality mobile phone ([PERSON] et al., 2013; [PERSON] et al., 2017), to generate 3D models for various purposes. Researchers and practitioners nowadays employ photogrammetry as a valuable, powerful and cheap alternative to active sensors for the textured 3D reconstruction of heritage scenarios, museum artefacts, cities, landscapes, consumer objects, etc. However, the majority of image-based users are often unaware of the strengths and weaknesses of the methodology and software used, employing it much like a black box into which they can drop photographs at one end and retrieve a (hopefully complete) 3D model at the other end. Previous works ([PERSON] and [PERSON], 2012; [PERSON] et al., 2012; [PERSON] et al., 2014) demonstrated that automation in image-based methods is very efficient in most heritage projects, with great potential, although some open research issues still exist ([PERSON] and [PERSON], 2014; [PERSON] et al., 2014; [PERSON] et al., 2016; [PERSON] and [PERSON], 2017). The quality of automatically derived 3D point clouds or surface models is normally satisfactory, although no standard quality analysis tools are generally implemented and used to evaluate the value of the achieved (3D) products. Moreover, not all software solutions allow a rigorous scaling & geo-referencing procedure, and there is generally a lack of standard terms when reporting the results.
### State-of-the-art in automated image-based 3D reconstruction
The image-based processing pipeline, based on the integration of photogrammetric and computer vision algorithms, has become in recent years a powerful and valuable approach for 3D reconstruction purposes. While at the beginning of the 2000s many researchers and users moved their attention and interest to laser scanning technologies, in recent years an opposite trend has emerged and the image-based approach is once again very commonly used. Indeed, it generally ensures sufficient automation, low cost, efficient results and ease of use, even for non-expert users. Recent progress has been achieved in all core components of the image-based processing pipeline: image preprocessing ([PERSON] et al., 2015), keypoint extraction ([PERSON] et al., 2015), bundle adjustment ([PERSON] and [PERSON], 2016) and dense point cloud generation ([PERSON] et al., 2014). This progress has led to fully automated methodologies (normally called Structure-from-Motion - SfM and Multi-View Stereo - MVS) able to process large image datasets and deliver 3D (both sparse and dense) results with a level of detail and precision that varies according to the application ([PERSON] et al., 2010; [PERSON] et al., 2013). Particularly in terrestrial and UAV applications, the level of automation is reaching very high standards and the impression is growing that a few randomly acquired images - even found on the Internet ([PERSON] et al., 2015) - and a black-box tool are sufficient to produce a professional 3D point cloud or textured 3D model. However, when it comes to applications beyond web visualization or quick 3D reconstructions, end-users are still missing a valuable solution for metric applications where results can be deeply analysed in terms of accuracy, precision and reliability. As a consequence, algorithms and methods may be underrated or overrated, and weaknesses in datasets may be missed.
### The trend and risk
The ease of use of many commercial photogrammetric software allows any user to take some photographs, blindly load them into the package, push a button and enjoy the obtained 3D model. This is compelling, but dangerous. Without sufficient knowledge of the processes and the software being used, non-expert users can potentially invest greater confidence in the results of their work than may be warranted. Nowadays many conferences are filled with screenshots of photogrammetric models and cameras floating over a dense point cloud. Nonetheless object distortions and deformations, scaling problems and non-metric products are very commonly presented but not understood or investigated. Therefore it is imperative that users move beyond black-box approaches of photogrammetric (or SfM/MVS) tools and begin to understand the importance of acquisition principles, data processing algorithms and standard metrics to describe the quality of results and truly quantify the value of a 3D documentation. A proper understanding of the theoretical background of algorithms running in software applications is thus advisable in order to obtain reliable results and metric products. Leaving the black-box approach behind will ensure a better usability of the results, long-lasting quality data, transferability of the methodology and a better diffusion of 3D technologies in the heritage field.
### Paper objectives
This paper critically evaluates the performances of three commercial packages (Agisoft PhotoScan, Pix4D Pix4Dmapper Pro and Capturing Reality RealityCapture) commonly used in the heritage community for automated 3D reconstruction of scenes. Different large datasets are employed (Table 1), each one featuring a diverse number of images, varying GSDs and some ground truth information to perform statistical analyses of the results. The null hypothesis assumes that, given the same processing parameters (number of extracted keypoints, maximum reprojection error, same GCPs, etc.), each software would produce a very similar result without any significant variation from the others. However, since each software offers a slightly different set of parameters, different terminology as well as different approaches for the image orientation and dense matching procedures, there will be some variability between the different processing results. In the paper, we do not take into account the generation of a mesh or texturing, as the work assumes that the best measure of performance is the result of the image orientation and dense matching procedures.
Due to a lack of output standards, it is generally difficult to present comparisons. However, in order to understand differences, strengths and weaknesses, we will focus on:
* orientation results, in terms of number of oriented cameras, theoretical precision of object points, RMS on check points (CPs), redundancy/multiplicity of 3D points;
* dense point clouds: as we are familiar with each of the datasets presented here, challenging areas known to be particularly problematic for photogrammetry are analysed.
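Among the orientation criteria, the RMS error on check points is the standard accuracy figure: reconstructed object coordinates are compared against independently surveyed ground truth. A minimal sketch (the per-axis/3D split below is the usual convention, not something prescribed by the paper):

```python
import numpy as np

def rmse_on_check_points(estimated_xyz, reference_xyz):
    """Per-axis and 3D RMS errors on check points: reconstructed object
    coordinates vs. independent ground-truth survey coordinates."""
    d = np.asarray(estimated_xyz) - np.asarray(reference_xyz)
    rmse_axis = np.sqrt((d ** 2).mean(axis=0))      # RMSE_X, RMSE_Y, RMSE_Z
    rmse_3d = np.sqrt((d ** 2).sum(axis=1).mean())  # total 3D RMSE
    return rmse_axis, rmse_3d
```

Because check points are withheld from the adjustment (unlike GCPs), this figure measures accuracy rather than the adjustment's internal precision.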
Although recent benchmarks and software evaluations exist ([PERSON] et al., 2015; [PERSON] et al., 2016; [PERSON] et al., 2017; [PERSON] et al., 2017), the paper focuses on more complex environments, surveyed with different platforms / cameras and comparison metrics are given following a standard terminology ([PERSON] et al., 2014; [PERSON], 2016).
## 2 Adop terminology
The fusion of photogrammetric, computer vision and robotics methods has led from one side to open and commercial solutions able to automatically process large sets of unordered images but, from the other side, to a misused terminology and a lack of clear meanings and measures. Although standard terms and metrics do exist, they are not always properly employed by all software packages and researchers, making the comparison of processing methodology and the understanding of delivered results a not-trivial task. In the following we report the most common terms and metrics which should be used when processing image datasets and delivering 3D sparse or dense point cloud results.
**Bundle Adjustment (BA)**: "bundle" refers to the set of optical rays that, according to the collinearity condition (or central perspective camera model), connect each camera projection centre, the measured image point and the corresponding 3D point in object space. Therefore, BA means to 'arrange' the bundles of optical rays departing from the images and pointing to the 3D object points in order to iteratively, jointly and optimally reconstruct both the 3D scene and the camera (interior and exterior) parameters. If interior parameters (principal distance and principal point coordinates) and additional parameters (radial and tangential lens distortion, affinity and shear) are also estimated, it takes the name of self-calibrating bundle adjustment ([PERSON], 2001). Classically, the BA is formulated as a non-linear least squares problem ([PERSON] et al., 1999) with all unknowns simultaneously estimated. A least squares method minimizes an objective function, being the sum of the squared residuals of the available observations (i.e. the reprojection errors of the image measurements). For the collinearity model, the objective function is not linear in the model parameters and it is practical to work with linearized equations. Linearization implies that approximate values for all parameters are known and the optimal values are computed in an iterative framework, so that with each iteration the estimates are updated and hopefully closer to the real solution. Initial approximations of the unknown parameters are normally computed with a subsequent concatenation of triangulation and resection (or DLT) procedures. The existing
\begin{table}
\begin{tabular}{|l|l|} \hline
869 images & UAV (189): Canon EOS 550D, 18 MPx, 22.3\(\times\)14.9 mm CMOS sensor, 25 mm focal length \\
 & Terrestrial (680): Nikon D3x, 24 MPx, 36\(\times\)24 mm CMOS sensor, 50 mm focal length \\ \hline
\end{tabular}
\end{table}
Table 1: Summary of employed datasets.
algorithms for finding a minimum of the objective function differ in how the structure and derivatives of this function are exploited ([PERSON] and [PERSON], 2006). Within the photogrammetric community the most common BA solution is the iterative Newton's method (i.e. Gauss-Markov method), whereas in the computer vision community Gauss-Newton or Levenberg-Marquardt are used ([PERSON] et al., 1999; [PERSON] and [PERSON], 2013). The popularity of the Newton-like methods lies in their fast convergence near the (absolute) minimum. The disadvantage is that the worse the initial approximations of the unknowns, the more costly the iterations and the weaker the guarantee that a global minimum is reached.
Structure from Motion (SfM): it is a procedure to simultaneously estimate both the 3D scene geometry (structure) and the camera pose (motion) ([PERSON], 1979). If the camera is not pre-calibrated, the calibration parameters can be simultaneously estimated as well ([PERSON], 2010). SfM entails two steps: a preliminary phase where 2D features are automatically detected and matched among the images, and then a bundle adjustment (BA) procedure to iteratively estimate all camera parameters and the 3D coordinates of the 2D features. The democratization of SfM started with the early self-calibrating metric reconstruction systems ([PERSON] and [PERSON], 1998; [PERSON], 1999), which served as the basis for the first systems working on large and unordered Internet photo collections ([PERSON] et al., 2008) and urban scenes ([PERSON] et al., 2008). Inspired by these achievements, increasingly large-scale reconstruction solutions were developed for thousands, millions and hundreds of millions of images ([PERSON] et al., 2010; [PERSON] et al., 2012; [PERSON] et al., 2015). A variety of SfM strategies were proposed, including incremental ([PERSON] et al., 2008; [PERSON] et al., 2009; [PERSON], 2013; [PERSON] and [PERSON], 2016), hierarchical ([PERSON] et al., 2010; [PERSON] et al., 2017) and global approaches ([PERSON] et al., 2013; [PERSON] et al., 2015). Nowadays incremental SfM is the most popular: it starts with a small seed reconstruction, which then grows by adding further images/cameras and 3D points. Nevertheless, incremental approaches have various drawbacks, such as repeatability, scalability, drifting, non-estimated cameras and high computational costs ([PERSON] et al., 2012; [PERSON] and [PERSON], 2016).
Functions describing imaging errors: deviations from the ideal central perspective camera model, due to imaging errors, are normally expressed using correction functions for the measured image coordinates. The most common functions to model systematic errors in photogrammetry were presented in [PERSON] (1976) and [PERSON] (1992), considering additional parameters to model the effects of radial and tangential distortion as well as affine errors in the image coordinate system. When an individual set of additional parameters is considered (and estimated within the self-calibrating bundle adjustment), the process is defined as 'block-invariant' self-calibration. If a set of parameters is assigned to each image, the bundle is called 'photo-variant' self-calibration ([PERSON], 1981). All available processing software applications include various variants of additional parameters but the values of these parameters are generally not directly comparable ([PERSON] and [PERSON], 2016). Indeed, they may be normalized to the focal length value and in some cases are provided as correction values, in others as proper distortion parameters.
Residuals of image coordinates: also called reprojection error, it indicates the difference between the image observation values (i.e. measured coordinates of the matched 2D points in the images) and their computed values within the adjustment process. The reprojection error is thus the Euclidean distance between a manually or automatically measured image point and the back-projected position of the corresponding 3D point in the same image. A 3D point generated only from 2 images, in an ideal case, has a reprojection error of zero. But in real processes it differs from zero due to noise in image measurements, inaccurate camera poses and unmodelled lens distortions. Nevertheless the reprojection error in image space is not an appropriate metric to evaluate the outcome of a BA, particularly when most of the 3D points are generated only from 2 images.
Standard deviation, variance, mean and median: in statistics, the standard deviation \(\sigma\) is the square root of the variance, the variance being the mean of the squared deviations of a random variable \(x\) from its mean value \(\mu\). The variance thus measures the spread, or variability, of a set of (random) numbers around their mean value \(\mu\):
\[\sigma=\sqrt{\frac{1}{n}\sum_{i=1}^{n}(x_{i}-\mu)^{2}} \tag{1}\]
\[\mu=\frac{1}{n}\sum_{i=1}^{n}x_{i} \tag{2}\]
The median is the 'middle' value of a sample or population of numbers, separating it into two halves, one containing the higher values and one the lower.
Root Mean Square (RMS) and RMS Error (RMSE): while the RMS is the square root of the mean of the squared differences between the variable and its most probable value, the RMSE is computed with respect to a reference measurement, provided by an independent method. In particular, in this paper the following definitions are adopted:
* RMS of the residuals in image space, i.e. the reprojection error: \[RMS_{x}=\sqrt{\frac{1}{n}\sum_{i=1}^{n}(x_{i}-\bar{x}_{i})^{2}}\] (3) \[RMS_{y}=\sqrt{\frac{1}{n}\sum_{i=1}^{n}(y_{i}-\bar{y}_{i})^{2}}\] (4) \[RMS=\sqrt{{RMS_{x}}^{2}+{RMS_{y}}^{2}}\] (5)
where \((x_{i},y_{i})\) represent the image coordinates, i.e. the positions of the matched 2D points, and \((\bar{x}_{i},\bar{y}_{i})\) are the reprojected values of the computed 3D coordinates within the adjustment procedure. While \(\sigma\) indicates the variability of a variable around its mean value, the RMS provides a measure of how far the differences, i.e. the residuals, are on average from zero. Theoretically, \(\sigma\) and RMS should coincide when the bias has been removed ([PERSON] and [PERSON], 1999).
* RMSE computed on check points (CPs): \[RMSE_{x}=\sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(x_{Comp_{i}}-x_{Ref_{i}}\right)^{2}}\] (6) \[RMSE_{y}=\sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(y_{Comp_{i}}-y_{Ref_{i}}\right)^{2}}\] (7) \[RMSE_{z}=\sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(z_{Comp_{i}}-z_{Ref_{i}}\right)^{2}}\] (8) \[RMSE=\sqrt{{RMSE_{x}}^{2}+{RMSE_{y}}^{2}+{RMSE_{z}}^{2}}\] (9)

where the subscript _Comp_ indicates the coordinates estimated from the bundle adjustment whereas _Ref_ indicates the reference values, i.e. the coordinates of check points measured with a reference surveying technique (e.g. GNSS).
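As a sketch (not tied to any of the tested packages), the metrics of eqs. (3)-(9) can be computed directly from arrays of measured/back-projected image points and of estimated/reference check-point coordinates:

```python
import numpy as np

def reprojection_rms(obs, reproj):
    """RMS of image-space residuals (eqs. 3-5): obs and reproj are (n, 2)
    arrays of measured and back-projected image coordinates."""
    res = obs - reproj
    rms_x = np.sqrt(np.mean(res[:, 0] ** 2))
    rms_y = np.sqrt(np.mean(res[:, 1] ** 2))
    return np.sqrt(rms_x ** 2 + rms_y ** 2)

def checkpoint_rmse(comp, ref):
    """RMSE on check points (eqs. 6-9): comp and ref are (n, 3) arrays of
    estimated and reference object-space coordinates."""
    per_axis = np.sqrt(np.mean((comp - ref) ** 2, axis=0))  # RMSE_X, RMSE_Y, RMSE_Z
    return per_axis, np.sqrt(np.sum(per_axis ** 2))
```

The per-axis RMSE is often reported separately (as in the tables of Section 4) because the height component typically behaves differently from planimetry.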
**Accuracy**: it is the closeness of the result of a measurement, calculation or process to an independent, higher order reference value. It coincides with precision when measurements or samples have been filtered from gross errors, and only random errors are present. Usually, accuracy is widely used as a general term for quality ([PERSON] et al., 2014; [PERSON], 2016). Typical procedures for determining accuracy include comparison with independent reference coordinates or reference lengths. The relative accuracy represents the achieved object measurement accuracy in relation to the maximum extent of the surveyed object.
**Precision**: it provides a quantitative measure of variability of results and is indicative of random errors, following a Gaussian or normal distribution ([PERSON], 2016). It is related to concepts like reproducibility and repeatability, i.e. the ability to reproduce to a certain extent the same result under unchanged conditions. In an adjustment process, it is calculated as a standard deviation and its estimate should always be provided with a coverage factor, e.g. 1 sigma ([PERSON] et al., 2014).
**Theoretical precision of object coordinates**: it is the expected variability of estimated 3D object coordinates, resulting from the BA process and depending on the camera network (i.e. spatial distribution of the acquired images) and precision of image observations (i.e. quality of the image measurements). The precision is computed according to error propagation theory and it can be obtained from the BA covariance matrix. The theoretical precision would coincide with the accuracy of object coordinates if all the systematic errors are properly modelled.
**Reliability**: it provides a measure of how outliers (gross or systematic errors) can be detected and filtered out from a set of observations in an adjustment process. It depends on redundancy and network (images) configuration ([PERSON] et al., 2014).
**Redundancy and multiplicity**: from a formal point of view, redundancy, also known as degrees of freedom, is the excess of observations (e.g. image points) with respect to the number of unknowns (e.g. 3D object coordinates) to be computed in an adjustment process (e.g. BA). For a given 3D point, the redundancy is related to the number of images where this point is visible / measured, commonly defined as multiplicity or number of intersecting optical rays. Normally, the higher the redundancy and, consequently, the multiplicity, the better the quality of the computed 3D point (assuming a good intersection angle). A 3D point generated with only 2 collinearity rays (multiplicity of 2 and redundancy of 1) does not contribute much to the stability of the network and the provided statistics.
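The count above can be sketched numerically (the helper name is ours): each intersecting ray contributes two image observations (x, y), while the 3D point itself has three unknown coordinates.

```python
def tie_point_redundancy(multiplicity):
    """Local redundancy of a single 3D tie point seen in `multiplicity`
    images: 2 observations per image minus the 3 unknowns (X, Y, Z)."""
    if multiplicity < 2:
        raise ValueError("a 3D point needs at least two intersecting rays")
    return 2 * multiplicity - 3
```

With a multiplicity of 2 this gives a redundancy of 1, matching the example in the text.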
**Spatial resolution and ground sample distance (GSD)**: the spatial resolution is the smallest detail which can be seen in an image or measured by a system, i.e. it is the smallest change in the quantity to be measured. The GSD is the projection of the camera pixel in the object space and is expressed in object space units. It can be seen as the smallest element that we can see and, ideally, reconstruct in 3D.
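For the normal (nadir) case, the projection of one pixel onto the object can be sketched as follows (function name and unit choices are ours):

```python
def ground_sample_distance(pixel_size_um, focal_length_mm, distance_m):
    """GSD for the normal case: the pixel footprint on the object equals
    pixel size * object distance / focal length, here returned in metres."""
    pixel_m = pixel_size_um * 1e-6
    focal_m = focal_length_mm * 1e-3
    return pixel_m * distance_m / focal_m
```

For example, a 5 um pixel with a 50 mm lens at 100 m object distance yields a GSD of 1 cm.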
## 3 The Image Processing Pipeline
The tests performed in this research follow the typical photogrammetric workflow, consisting of the following steps.
### Identification of image correspondences
Image correspondences (or tie points) are extracted relying on the best performing detector and (float or binary) descriptor algorithms ([PERSON] and [PERSON], 2012; [PERSON] et al., 2014): SIFT ([PERSON], 2001) and all its variants (ASIFT, ColSIFT, PCA-SIFT, SIFT-GPU, DAISY, etc.), SURF ([PERSON] et al., 2008), FAST ([PERSON] et al., 2010), BRIEF ([PERSON] et al., 2010), ORB ([PERSON] et al., 2011), LDAHash ([PERSON] et al., 2012), MSD ([PERSON] and [PERSON], 2015), etc. These (separate or combined) methods provide a set of keypoints coupled with a vector of information useful for the successive matching and tie point detection. The keypoint matching is normally performed with the brute force method based on the Hamming distance, a conventional L2-norm matching strategy ([PERSON] and [PERSON], 1951) or the efficient FLANN - Fast Library for Approximate Nearest Neighbors strategy ([PERSON] and [PERSON], 2009), which is independent from the image acquisition protocol and implements a fast search structure (e.g. based on kd-trees).
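As an illustrative sketch (pure NumPy, not the implementation of any package above), brute-force matching of binary descriptors with the Hamming distance and a ratio test can look like this:

```python
import numpy as np

def hamming_bruteforce_match(desc_a, desc_b, ratio=0.8):
    """Brute-force matching of binary descriptors (BRIEF/ORB-like, stored as
    0/1 bit arrays of shape (n, d)) with a ratio test: keep a match only when
    the best Hamming distance is clearly smaller than the second best."""
    matches = []
    for i, d in enumerate(desc_a):
        dist = np.count_nonzero(desc_b != d, axis=1)  # Hamming distance to each candidate
        order = np.argsort(dist)
        best, second = order[0], order[1]
        if dist[best] < ratio * dist[second]:
            matches.append((i, int(best), int(dist[best])))
    return matches
```

The ratio test (here with a hypothetical threshold of 0.8) discards ambiguous matches where two candidates are almost equally close; FLANN-style kd-tree indexing replaces the inner exhaustive loop for large descriptor sets.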
### Unknowns estimation through BA
The extracted image correspondences (tie points) are used to estimate all unknown parameters (camera positions and angles, camera interior parameters, and 3D coordinates of image points) in a BA process. Levenberg-Marquardt has proven to be one of the most successful BA solutions due to its ease of implementation and its use of an effective damping strategy that gives it the ability to converge quickly from a wide range of initial guesses ([PERSON] and [PERSON], 2009).
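A minimal, self-contained sketch of the Levenberg-Marquardt damping logic on a toy least-squares problem (not a full bundle adjustment; the problem and names are ours):

```python
import numpy as np

def levenberg_marquardt(residual, jac, p0, n_iter=50, lam=1e-3):
    """Minimal Levenberg-Marquardt: damped Gauss-Newton on a residual vector.
    The damping term lam * I blends between Gauss-Newton (small lam, fast near
    the minimum) and gradient descent (large lam, robust far from it)."""
    p = np.asarray(p0, dtype=float)
    cost = np.sum(residual(p) ** 2)
    for _ in range(n_iter):
        r, J = residual(p), jac(p)
        H = J.T @ J                                  # Gauss-Newton normal matrix
        step = np.linalg.solve(H + lam * np.eye(len(p)), -J.T @ r)
        p_new = p + step
        cost_new = np.sum(residual(p_new) ** 2)
        if cost_new < cost:                          # accept: relax the damping
            p, cost, lam = p_new, cost_new, lam * 0.5
        else:                                        # reject: increase damping
            lam *= 2.0
    return p

# toy problem: recover (a, b) of y = a * x + b from noise-free samples
x = np.linspace(0.0, 1.0, 20)
y = 2.0 * x + 1.0
res = lambda p: p[0] * x + p[1] - y
jacf = lambda p: np.stack([x, np.ones_like(x)], axis=1)
p_hat = levenberg_marquardt(res, jacf, [0.0, 0.0])
```

In a real BA the residual is the reprojection error of all tie points, the Jacobian is sparse, and the normal equations are solved exploiting that sparsity rather than with a dense solve.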
### Dense image matching (DIM)
Once the camera poses and the sparse point cloud consisting of the 3D coordinates of the triangulated tie points are recovered, a pixel-based matching algorithm ([PERSON] et al., 2012; [PERSON] and [PERSON], 2010; [PERSON], 2008; [PERSON] and [PERSON], 2006) is applied to obtain dense and colorized 3D point clouds. Stereo- and multi-view approaches exist, relying on precise exterior and interior orientation parameters as well as epipolar images to constrain the search for matches ([PERSON] et al., 2014). Most of the approaches are based on the minimization of an energy function whose components are a cost function, which considers the degree of similarity among pixels, and constraints accounting for possible errors in the matching process as well as geometric discontinuities.
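The cost-minimization idea can be illustrated with the simplest pixel-based case: a winner-takes-all SSD search along the epipolar line of a rectified stereo pair (a toy sketch, far simpler than the cited multi-view methods, which add smoothness constraints and aggregate many views):

```python
import numpy as np

def ssd_disparity(left, right, x, y, window=2, max_disp=16):
    """Winner-takes-all disparity for one pixel of a rectified stereo pair:
    slide a (2w+1)^2 patch along the epipolar line (same row) of the right
    image and keep the offset with the smallest sum of squared differences."""
    w = window
    ref = left[y - w:y + w + 1, x - w:x + w + 1].astype(float)
    best_d, best_cost = 0, np.inf
    for d in range(0, max_disp + 1):
        if x - d - w < 0:                    # patch would leave the image
            break
        cand = right[y - w:y + w + 1, x - d - w:x - d + w + 1].astype(float)
        cost = np.sum((ref - cand) ** 2)     # photometric similarity cost
        if cost < best_cost:
            best_d, best_cost = d, cost
    return best_d
```

Semi-global and multi-view methods replace this per-pixel minimum with an energy that also penalizes disparity jumps between neighbouring pixels.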
## 4 Tests and Analyses
For the sake of consistency, all datasets (Table 1) are processed using the same computer. In the datasets with available GCPs, in order to avoid multiple collimation errors, the image coordinates of the points are measured just once and then imported and used in the other packages. The tie point extraction phase is performed forcing the same number of extracted keypoints. In the self-calibration process, the same additional parameters are computed. In each test, the same image resolution is adopted for all the software applications in both the image correspondences extraction and DIM steps. All datasets employed in these tests are available to the community for further research purposes.
The versions of the employed software are the following:
* Agisoft PhotoScan (PS): 1.3.1.4030
* Pix4D Pix4D Mapper (Pix4D): 3.1.23
* Capturing Reality RealityCapture (ReCap): 1.0.2.2600
It is worth mentioning that the tested version of ReCap does not provide access to the result of the DIM, as it is fused with the meshing step. Therefore, the obtained 3D output corresponds to the vertices of the generated mesh model.
The next tables report the results of the image orientation and, in two cases, of the DIM.
\begin{tabular}{|l|c|c|c|} \hline \multicolumn{4}{|c|}{**DATASET 1 - Duomo square (359 images)**} \\ \hline & _PS_ & _Pix4D_ & _ReCap_ \\ \hline \# oriented img & 359 & 359 & 359 \\ \hline comp. time & 1h 10' & 41' & 3' 15'' \\ \hline img space error & 1.01 px & 1.03 px & 0.75 px \\ \hline \# 3D pts & 597,985 & 1,508,105 & 797,1241 \\ \hline \# pts in 2 img & 215K (\(\sim\)36\%) & 880K (\(\sim\)53\%) & N/A \\ \hline \# pts in 3 img & 105K (\(\sim\)18\%) & 222K (\(\sim\)15\%) & N/A \\ \hline \# pts in 4 img & 80K (\(\sim\)14\%) & 109K (\(\sim\)7\%) & N/A \\ \hline max multiplicity & 69 (2) & 70 (9) & N/A \\ \hline RMSE CP X/Y/Z [cm] & 1.2/2.1/1.3 & 2.2/1.7/1.7 & 2.2/1.8/1.3 \\ \hline \end{tabular}
Comments:
\(\bullet\) the BA is carried out in free-network, i.e. without any prior knowledge or constraints. The RMS error on CPs is computed after a seven-parameter Helmert transformation to obtain the photogrammetric model in the coordinate system defined by the GCPs;
\(\bullet\) the significant processing speed of ReCap is clearly noticeable;
\(\bullet\) despite a high value of max multiplicity, the point count drops immediately after image pairs and this may cause instability effects in the network orientation;
\(\bullet\) very similar accuracy in object space is achieved.
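The seven-parameter Helmert transformation mentioned above can be estimated in closed form from corresponding points; the following SVD-based (Umeyama-style) sketch is illustrative and not taken from any of the tested packages:

```python
import numpy as np

def helmert_fit(src, dst):
    """Closed-form least-squares similarity (7-parameter Helmert) transform
    mapping src -> dst (both (n, 3)): returns scale s, rotation R and
    translation t such that dst ~ s * R @ src + t."""
    mu_s, mu_d = src.mean(0), dst.mean(0)
    A, B = src - mu_s, dst - mu_d                     # centred point sets
    U, S, Vt = np.linalg.svd(B.T @ A / len(src))      # cross-covariance SVD
    D = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        D[2, 2] = -1.0                                # guard against a reflection
    R = U @ D @ Vt
    s = np.trace(np.diag(S) @ D) / np.mean(np.sum(A ** 2, axis=1))
    t = mu_d - s * R @ mu_s
    return s, R, t
```

Applying this transformation to the free-network solution before computing the CP RMSE removes datum differences, so the residuals reflect only the internal deformation of the photogrammetric block.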
\begin{tabular}{|l|c|c|c|} \hline \multicolumn{4}{|c|}{**DATASET 2 - Trento's cathedral (565 images)**} \\ \hline & _PS_ & _Pix4D_ & _ReCap_ \\ \hline \# oriented img & 565 & 565 & 565 \\ \hline comp. time & 37' & 2h 13' & 23' \\ \hline image space error & 1.51 px & 0.42 px & 0.41 px \\ \hline \# 3D pts & 141,429 & 1,567,561 & 3,383,174 \\ \hline \# pts in 2 img & 1,093K (\(\sim\)77\%) & 971K (\(\sim\)62\%) & N/A \\ \hline \# pts in 3 img & 195K (\(\sim\)14\%) & 264K (\(\sim\)17\%) & N/A \\ \hline \# pts in 4 img & 58K (\(<\)5\%) & 119K (\(<\)10\%) & N/A \\ \hline max multiplicity & 49 (1) & 48 (1) & N/A \\ \hline \end{tabular}
Comments:
\(\bullet\) although all images are oriented by the three software applications, Pix4D does not provide a correct solution for the circular network. An incorrect orientation is obtained even if the images are imported in a different order (Fig. 1);
\(\bullet\) most of the 3D points are triangulated from only 2 views.
* the combination of terrestrial and UAV images is not easily handled and the two sub-blocks are rarely completely oriented together;
* shuffling the images does not facilitate the orientation of the entire dataset.
\begin{tabular}{|l|c|c|c|} \hline \multicolumn{4}{|c|}{**DATASET 3 - Dortmund (59 images)**} \\ \hline & _PS_ & _Pix4D_ & _ReCap_ \\ \hline \# oriented img & 59 & 59 & 59 \\ \hline \end{tabular}
photogrammetry. ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Vol. II-3W4, pp. 135-142
* Nocedal and Wright (2006) Nocedal, J., Wright, S., 2006: Numerical Optimization. Springer Verlag
* Nocerino et al. (2014) Nocerino, E., Menna, F., Remondino, F., 2014: Accuracy of typical photogrammetric networks in cultural heritage 3D modeling projects. Int. Archives of Photogrammetry, Remote Sensing & Spatial Information Sciences, Vol. XL-5, pp. 465-472
* the REPLICATE EU project. Int. Archives of Photogrammetry, Remote Sensing & Spatial Information Sciences, Vol. XLII-2-W3, pp. 535-541
* Pierrot-Deseilligny and Paparoditis (2006) Pierrot-Deseilligny, M., Paparoditis, N., 2006: A multiresolution and optimization-based image matching approach: An application to surface reconstruction from SPOT5-HRS stereo imagery. Int. Archives of Photogrammetry, Remote Sensing & Spatial Information Sciences, 36(1/W41), pp. 1-5
* Pollefeys (1999) Pollefeys, M., 1999: Self-calibration and metric 3D reconstruction from uncalibrated image sequences. Ph.D. dissertation, ESAT-PSI, K.U. Leuven
* Pollefeys et al. (2008) Pollefeys, M., Nistér, D., Frahm, J.-M., Akbarzadeh, A., Mordohai, P., et al., 2008: Detailed real-time urban 3D reconstruction from video. IJCV, Vol. 78(2), pp. 143-167
* A critical overview. LNCS Vol. 7616. pp. 40-54
* [PERSON] et al. (2013) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], 2013: Design and implement a reality-based 3D digitisation and modelling project. Proc. IEEE Conference \"Digital Heritage 2013\", Vol. 1, pp. 137-144
* Remondino et al. (2014) Remondino, F., Spera, M.G., Nocerino, E., Menna, F., Nex, F., 2014: State of the art in high density image matching. The Photogrammetric Record, Vol. 29, pp. 144-166
* Rothermel et al. (2012) Rothermel, M., Wenzel, K., Haala, N., Fritsch, D., 2012: SURE: Photogrammetric surface reconstruction from imagery. Proc. Low-Cost 3D Workshop, Berlin, Germany
* Rublee et al. (2011) Rublee, E., Rabaud, V., Konolige, K., Bradski, G., 2011: ORB: An efficient alternative to SIFT or SURF. Proc. ICCV
* Schönberger and Frahm (2016) Schönberger, J.L., Frahm, J.-M., 2016: Structure-from-motion revisited. Proc. CVPR
* Schöps et al. (2017) Schöps, T., Schönberger, J.L., Galliani, S., Sattler, T., Schindler, K., Pollefeys, M., Geiger, A., 2017: A multi-view stereo benchmark with high-resolution images and multi-camera videos. Proc. CVPR
* Snavely et al. (2008) Snavely, N., Seitz, S.M., Szeliski, R., 2008: Modeling the world from Internet photo collections. IJCV, Vol. 80(2), pp. 189-210
* Strecha et al. (2012) Strecha, C., Bronstein, A., Bronstein, M., Fua, P., 2012: LDAHash: Improved matching with smaller descriptors. IEEE Transactions PAMI, Vol. 34(1)
* Sweeney et al. (2015) Sweeney, C., Sattler, T., Höllerer, T., Turk, M., Pollefeys, M., 2015: Optimizing the viewing graph for structure-from-motion. Proc. CVPR
* Szeliski (2010) Szeliski, R., 2010: Computer vision: Algorithms and applications. Springer Science & Business Media
* Tanskanen et al. (2013) Tanskanen, P., Kolev, K., Meier, L., Camposeco, F., Saurer, O., Pollefeys, M., 2013: Live metric 3D reconstruction on mobile phones. Proc. IEEE ICCV
* Tombari and Di Stefano (2015) Tombari, F., Di Stefano, L., 2015: The maximal self-dissimilarity interest point detector. IPSJ Transactions on Computer Vision and Applications, Vol. 7, pp. 175-188
* Triggs et al. (1999) Triggs, B., McLauchlan, P., Hartley, R., Fitzgibbon, A., 1999: Bundle adjustment - A modern synthesis. Int. Workshop on Vision Algorithms, pp. 298-372, Springer Berlin Heidelberg
* Ullman (1979) Ullman, S., 1979: The interpretation of structure from motion. Proc. Royal Society of London B: Biological Sciences, Vol. 203(1153), pp. 405-426
* examining the influence of decolourization methods on interest point extraction and matching for architectural image-based modelling. Int. Archives of Photogrammetry, Remote Sensing & Spatial Information Sciences, Vol. 40(5/W4), pp. 307-314
* Wu (2013) Wu, C., 2013: Towards linear-time incremental structure from motion. Proc. 3DV
A CRITICAL REVIEW OF AUTOMATED PHOTOGRAMMETRIC PROCESSING OF LARGE DATASETS

F. Remondino, E. Nocerino, I. Toschi, F. Menna

https://doi.org/10.5194/isprs-archives-xlii-2-w5-591-2017, 2017, CC-BY
# Decision Fusion for the Unsupervised Change Detection of Multitemporal Sar
###### Abstract
Unsupervised change detection of multitemporal SAR usually yields different results with different threshold selection algorithms, and it is hard to determine which result is best. In this paper, a novel automatic approach to the unsupervised identification of changes in multitemporal SAR images is proposed. Unlike traditional approaches, it fuses several kinds of change detection results based on fuzzy logic theory. The proposed approach has two steps. In the first step, the multitemporal SAR images are processed by several threshold selection algorithms separately, generating several change detection maps. In the second step, a framework for combining information from the individual threshold selection algorithms is proposed based on fuzzy logic theory. The robustness of the proposed approach is tested and validated with four threshold selection algorithms on SAR images of two regions. Experimental results, obtained on two sets of multitemporal SAR images, prove the validity and robustness of the proposed approach compared to each individual threshold selection algorithm.
**Keywords:** SAR; unsupervised change detection; information fusion; fuzzy sets theory
## 1 Introduction
Change detection using remote sensing images is an important application domain in remote sensing. It finds the changes that occurred in land cover by analyzing multitemporal images acquired at different times. Automatic change detection in images of a given scene acquired at different times is one of the most interesting topics in image processing. Recently, with the development of data processing and sensors, change detection has been applied in many fields such as environment monitoring, land evaluation, forest coverage assessment, disaster estimation, and urban change analysis. Change detection in multitemporal remote sensing images is characterized by several factors and can be classified into three classes according to the detection approach. The first approach is based on classification: it determines the results by analyzing pre- and post-classification results. The second approach compares and analyzes multitemporal remote sensing images pixel by pixel. Comparison and analysis of multitemporal remote sensing images based on features is the third approach. Several change detection methods have been proposed in the remote sensing literature, such as the difference image, the ratio image approach, change vector analysis, VI (vegetation index), and PCA (principal component analysis). [PERSON] used several threshold selection methods for change detection in optical remote sensing, and the results obtained with each algorithm were compared and analyzed. In [], [PERSON] proposed an unsupervised change detection method based on a generalized Gaussian distribution model, and the corresponding experiments on multitemporal SAR images proved the efficiency and advantages of this method; model variables, speckle noise and factors influencing the threshold were also discussed. [PERSON] developed a Fisher transform method based on a ratio algorithm for SAR images and combined the EM algorithm and MRF theory to detect changes.
This approach proved to be robust against noise.
All the methods mentioned above have their own characteristics and advantages in change detection with multitemporal SAR images. However, they obtain different results for a given data set, so none of them strictly outperforms all the others. It is therefore a challenging task to determine which one is the best, because: 1) in an unsupervised change detection process, ground truth is unavailable and prior knowledge cannot be obtained, so the choice cannot be validated; 2) the effectiveness of a thresholding algorithm depends on the statistical characteristics of the difference image, which vary from one data set to another.
In this paper, we propose to aggregate the results of different change detection approaches to reach more robust final decisions than any single algorithm. Decision fusion can be defined as the process of fusing information from several individual data sources. Therefore, an approach based on decision fusion using fuzzy logic theory is proposed. The proposed algorithm is based on fuzzy sets and possibility theory. The framework of the algorithm is modeled as follows. For a given data set, n change detection results are obtained, one from each algorithm. For an individual pixel, each algorithm provides as output a membership degree for each of the considered classes. The set of these membership values is then modeled as a fuzzy set. The fusion strategy aggregates the different fuzzy sets provided by the different detection algorithms. It is adaptive and does not require any further training.
The paper is organized as follows. Fuzzy set theory and measures of fuzziness are briefly presented in Section II-A. Section II-B presents the model for each detection result in terms of fuzzy sets. Information fusion is discussed in Section II-C. The membership degree of an individual pixel based on a neighbor system is described in Section III-A, and the fuzzy degree calculation method is analyzed in Section III-B. Experimental results are then presented and analyzed in Section IV. Finally, conclusions are drawn.
## 2 Fuzzy Logic Fusion Model Formulation
Let \(X_{0}\) and \(X_{1}\) be two coregistered SAR images acquired over the same area at times \(t_{0}\) and \(t_{1}\), respectively. The change detection problem is formulated as a binary classification problem by marking each pixel with a "changed" or "unchanged" label. Each pixel is mapped into the set \(\Omega=\{\omega_{c},\omega_{n}\}\) of possible labels, where \(\omega_{c}\) and \(\omega_{n}\) represent the changed and unchanged classes, respectively. The image-ratioing approach, which generates a ratio image \(R\) by dividing the two images pixel by pixel, is adopted. Let us consider an ensemble of M different detection algorithms, and let \(R_{i}\) (\(i=1,2,\ldots,M\)) be the change map generated by the \(i\)-th detection algorithm of the ensemble. The aim of the proposed approach is to generate a global change map from the ensemble of change maps. The proposed automatic and unsupervised change-detection approach includes the following main steps (see Fig. 1): 1) preprocessing based on statistical filtering; 2) comparison of the pair of multitemporal SAR images, obtaining several detection results from the different algorithms; 3) calculation of the membership degree of each pixel based on a neighbor system, according to a statistical distribution model; 4) creation of the final change map based on fuzzy logic theory.
Fig.1 Flow chart of the proposed decision fusion approach
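Step 2 above (pixel-by-pixel ratioing followed by thresholding) can be sketched as follows; the logarithm of the ratio is a common choice for SAR because speckle is multiplicative, and the threshold value here is a free parameter, not one taken from the paper:

```python
import numpy as np

def log_ratio_change_map(img_t0, img_t1, threshold):
    """Pixel-by-pixel ratioing of two co-registered SAR intensity images:
    the log-ratio symmetrizes increases and decreases of backscatter, and
    thresholding its absolute value yields a binary changed (1) /
    unchanged (0) map."""
    eps = 1e-6                           # avoid division by / log of zero
    lr = np.log((img_t1 + eps) / (img_t0 + eps))
    return (np.abs(lr) > threshold).astype(np.uint8)
```

Running several such detectors with different threshold selection rules produces the M change maps \(R_{i}\) that the fusion stage aggregates.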
### Fuzzy set theory
A fuzzy subset \(F\) of a reference set \(U\) is a set of ordered pairs \(F=\{(x,u_{F}(x))\mid x\in U\}\), where \(u_{F}:U\rightarrow[0,1]\) is the membership function of \(F\) in \(U\).
1) Logical operations: classical Boolean operations extend to fuzzy sets. With \(F\) and \(G\) two fuzzy sets, the classical extensions are defined as follows:
1. Union: The union of two fuzzy sets is defined by the maximum of their membership function. \(\forall x\in U,(u_{F}\cup u_{G})(x)=\max\{u_{F}(x),u_{G}(x)\}\)
2. Intersection: The intersection of two fuzzy sets is defined by the minimum of their membership function. \(\forall x\in U,(u_{F}\cap u_{G})(x)=\min\{u_{F}(x),u_{G}(x)\}\)
3. Complement: The complement of a fuzzy set \(F\) is defined by \(\forall x\in U,u_{\bar{F}}(x)=1-u_{F}(x)\)
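The three operations can be sketched directly on NumPy arrays of membership values (the function names are ours):

```python
import numpy as np

def f_union(u, v):
    """Fuzzy union: element-wise maximum of the membership functions."""
    return np.maximum(u, v)

def f_intersection(u, v):
    """Fuzzy intersection: element-wise minimum of the membership functions."""
    return np.minimum(u, v)

def f_complement(u):
    """Fuzzy complement: 1 minus the membership function."""
    return 1.0 - u
```

These reduce to the classical Boolean operations when all memberships are 0 or 1.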
2) Measures of Fuzziness: Fuzziness is an intrinsic property of fuzzy sets. To measure how fuzzy a fuzzy set is, and thus estimate its ambiguity, several definitions have been proposed. [PERSON] proposed to define the degree of fuzziness as a function \(f\) with the following properties.
1. \(\forall F\subset U,\) if \(f(u_{F})=0\) then \(F\) is a crisp set.
2. \(f(u_{F})\) is maximum if and only if \(\forall x\in U,u_{F}(x)=0.5\)
3. \(\forall(u_{F},u_{G})\in U^{2},f(u_{F})\geq f(u_{G})\) if: \(\forall x\in U\) \(u_{G}(x)\geq u_{F}(x),\)if \(u_{F}(x)\geq 0.5\) \(u_{G}(x)\leq u_{F}(x),\)if \(u_{F}(x)\leq 0.5\)
4. \(\forall F\subset U\), \(f(u_{F})=f(u_{\bar{F}})\).
A set and its complement have the same degree of fuzziness.
5. \(\forall(u_{F},u_{G})\in U^{2},\) \(f(u_{F}\cup u_{G})+f(u_{G}\cap u_{F})=f(u_{F})+f(u_{G})\)
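One classical function satisfying these properties is the linear index of fuzziness; the sketch below is illustrative and not necessarily the measure used in the paper:

```python
import numpy as np

def linear_fuzziness(u):
    """Linear index of fuzziness of a discrete fuzzy set: 0 for a crisp set,
    maximal (1) when every membership equals 0.5, and identical for a set
    and its complement."""
    u = np.asarray(u, dtype=float)
    return 2.0 * np.mean(np.minimum(u, 1.0 - u))
```

Property 5 also holds, because for every element the pair {max, min} of two membership values is just a reordering of the pair itself.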
### Change map representation
A changed/unchanged classification problem is considered for which \(m\) different change maps are available. For a given pixel, the output of the \(i\)-th detection algorithm is a set of numerical values, i.e.
\[\{u_{i}^{c}(x),u_{i}^{n}(x)\}\]
Where \(u_{i}^{j}(x)\in[0,1]\), after normalization, is the membership degree of pixel \(x\) to the changed or unchanged class according to algorithm \(i\). The higher this value, the more likely the pixel belongs to the corresponding class. Depending on the detection algorithm, \(u_{i}^{j}(x)\) can be of a different nature: a probability, or a membership degree at the output of a fuzzy classifier. In any case, the set \(\pi_{i}(x)=\{u_{i}^{j}(x),\ j\in\{c,n\}\}\) can be regarded as a fuzzy set. Therefore, for each pixel, \(m\) fuzzy subsets are computed and form the input for the change map fusion process.
\[F_{set}(x)=\{\pi_{1}(x),\pi_{2}(x), ,\pi_{i}(x), ,\pi_{m}(x)\}\]
Fuzzy set theory provides various combination operators to aggregate these fuzzy sets. Many combination operators are discussed in detail in [1].
## 3 Membership Degree

### A. Membership degree based on a spatial-contextual model
Let \(r\) be a fixed positive number, and let \(N_{i}(r),i\in D\) be a system of neighborhoods defined by

\[N_{i}(r)=\{j\in D\mid 0<d(i,j)\leq r\}\]

where \(d(i,j)\) denotes the distance between the centers of pixels \(i\) and \(j\). Hence, the value \(r\) is the radius of the neighborhood. Fig.() illustrates the neighborhood with unit radius (first-order neighborhood). The second-order neighborhood is expressed by the configuration of the area \(D\), and the number of neighbors \(|N_{i}(r)|\) depends on the local configuration around pixel \(i\).
Let \(\eta\) be the neighbor system of pixel \(i\), \(\eta=\{\eta_{i},i\in X,i\notin\eta_{i},\eta_{i}\subset X\}\), and let \(\{\eta^{1},\eta^{2},\ldots,\eta^{m}\}\) be a series of neighbor systems, where \(m\) is the rank of the neighbor system:

\(\eta^{m}=\{(k,l):0<(k-i)+(l-j)\leq m\}\). Figure 2 shows the neighbor system.
\begin{tabular}{|c|c|c|c|c|} \hline
5 & 4 & 3 & 4 & 5 \\ \hline
4 & 2 & 1 & 2 & 4 \\ \hline
3 & 1 & 0 & 1 & 3 \\ \hline
4 & 2 & 1 & 2 & 4 \\ \hline
5 & 4 & 3 & 4 & 5 \\ \hline \end{tabular}
Fig.2 Sketch of neighbor system
Suppose M change maps \(A_{i}\ (i=1,2,\ldots,M)\) are obtained from a series of change detection algorithms, so that each objective pixel receives M membership degrees. The membership degree combines information from the neighbor system of the objective pixel. Each pixel \(x_{i}\), \(x_{i}\in\omega_{c}\), is regarded as an objective pixel, and its membership degree is computed from the difference between the objective pixel and its neighbor pixels. Let D=\([d^{k}_{i,j}]\), where \(d^{k}_{i,j}\) is the difference measure between pixel \(a^{k}_{i,j}\) and its neighbor system, \(a^{k}_{m,n}\in\omega_{c}\).
\[d^{k}_{i,j}=\sum_{m,n\in\eta_{i,j}}\delta(a^{k}_{i,j},a^{k}_{m,n})/\mbox{N}( \eta_{i,j}) \tag{1}\]
\(\delta(a^{k}_{i,j},a^{k}_{m,n})\) is the indicator function, which indicates whether \(a^{k}_{i,j}\) and \(a^{k}_{m,n}\) are equal. It is defined as
\[\delta(a^{k}_{i,j},a^{k}_{m,n})=\begin{cases}1,&\text{if }a^{k}_{i,j}=a^{k}_{m,n}\\ 0,&\text{otherwise}\end{cases} \tag{2}\]
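As a concrete illustration of formulas (1)-(2), the sketch below (plain NumPy, not the authors' code; the function name and the square-window radius are our own choices) averages the indicator over the neighborhood of an objective pixel in a binary change map:

```python
import numpy as np

def neighborhood_agreement(change_map, i, j, radius=1):
    """Formulas (1)-(2): average the indicator delta over the
    neighborhood of the objective pixel (i, j) in a binary change map."""
    h, w = change_map.shape
    values = []
    for m in range(max(0, i - radius), min(h, i + radius + 1)):
        for n in range(max(0, j - radius), min(w, j + radius + 1)):
            if (m, n) != (i, j):
                # delta = 1 if the neighbor label equals the objective label
                values.append(1.0 if change_map[m, n] == change_map[i, j] else 0.0)
    # division by N(eta_{i,j}) in formula (1)
    return sum(values) / len(values)
```

For a pixel on the boundary between a changed and an unchanged region, roughly half of the neighbors agree, so the measure is close to 0.5.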
Let \(U=[u^{1}_{i,j},u^{2}_{i,j},\ldots,u^{M}_{i,j}]\) be the membership degree set of the objective pixel \(a^{k}_{i,j}\). Based on the \(\Gamma\) distribution function, \(u(x)\) is defined as
\[u(x)=\begin{cases}1&0\leq x<t\\ e^{-k(x-t)}&x\geq t(k>0)\end{cases} \tag{3}\]
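Formula (3) can be sketched directly; the threshold t and decay rate k below are free parameters chosen only for illustration:

```python
import numpy as np

def membership(x, t=0.5, k=2.0):
    """Formula (3): membership is 1 below threshold t and decays
    exponentially with rate k (> 0) above it."""
    x = np.asarray(x, dtype=float)
    return np.where(x < t, 1.0, np.exp(-k * (x - t)))
```

Note that the function is continuous at x = t, where both branches equal 1.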
#### B. Information fusion
The purpose of data fusion, combining information from several sources, is to improve the final decision. Generally, the following three operators are used in membership degree fusion.
Conjunctive combination:

\(\pi_{\wedge}(x)=\cap^{m}_{i=1}\pi_{i}(x)\),

\(\pi_{\wedge}(x)\leq\min(\pi_{i}(x)),\ i\in[1,m]\)

Disjunctive combination:

\(\pi_{\vee}(x)=\cup^{m}_{i=1}\pi_{i}(x)\),

\(\pi_{\vee}(x)\geq\max(\pi_{i}(x)),\ i\in[1,m]\)

Compromise combination (e.g., averaging operators), which lies between the two:

\(\min(\pi_{i}(x))\leq\pi_{c}(x)\leq\max(\pi_{i}(x)),\ i\in[1,m]\)
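A minimal NumPy sketch of the three operators, using elementwise min/max for the conjunctive/disjunctive combinations and the mean as one possible compromise operator (the function name and stacking convention are our own):

```python
import numpy as np

def fuse(membership_maps, mode="compromise"):
    """Combine m membership maps (stacked along axis 0) with the
    standard fuzzy operators: min (conjunctive), max (disjunctive),
    or an averaging compromise that lies between the two."""
    stack = np.asarray(membership_maps, dtype=float)
    if mode == "conjunctive":
        return stack.min(axis=0)
    if mode == "disjunctive":
        return stack.max(axis=0)
    return stack.mean(axis=0)  # compromise
```

By construction, the compromise result always lies between the conjunctive and disjunctive results at every pixel.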
The entropy-based fuzzy degree is defined as

\[d(u)=\frac{1}{\ln 2}[-u_{c}(x)\ln u_{c}(x)-u_{u}(x)\ln u_{u}(x)]\]

\[=\frac{1}{\ln 2}[-u_{c}(x)\ln u_{c}(x)-(1-u_{c}(x))\ln(1-u_{c}(x))] \tag{4}\]
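The fuzzy degree above is the normalized binary entropy of the changed-class membership and can be sketched in one line; the clipping constant is our own guard against log(0):

```python
import numpy as np

def fuzzy_degree(u_c):
    """Normalized binary entropy of the changed-class membership u_c;
    maximal (1) at u_c = 0.5, close to 0 for crisp memberships."""
    u = np.clip(np.asarray(u_c, dtype=float), 1e-12, 1.0 - 1e-12)
    return (-u * np.log(u) - (1.0 - u) * np.log(1.0 - u)) / np.log(2.0)
```

A pixel with u_c = 0.5 is maximally ambiguous (degree 1), while a pixel with u_c near 0 or 1 is nearly crisp (degree near 0).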
## 4 Experimental analysis and results
To assess the performance of the proposed approach, two multitemporal data sets corresponding to geographical areas of Wuhan, Hubei province, China, and Poyang Lake, Jiangxi province, China, are used as experimental data. The first data set is composed of two images acquired over the same area by a spaceborne SAR sensor. The area shown in Figure 1 is a section (195*204 pixels) of a scene acquired in the middle-southern part of Wuhan, Hubei province, China. The second data set is composed of a section (273*285 pixels) of two SAR images from the same sensor, acquired over Poyang Lake, Jiangxi province, China, as shown in Figure 3(b). By comparing the two images of the same area, we can see that water covered a significant part of the land in the selected area. Water bodies in SAR images usually appear as dark ("black") regions.
After geometrical correction, the difference image \(X\) can be obtained using the log-ratioing operator, \(X=\log(X_{1}/X_{2})\).
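The log-ratioing operator can be sketched in a few lines; the epsilon guard against zero-valued pixels is our own addition:

```python
import numpy as np

def log_ratio(x1, x2, eps=1e-6):
    """Log-ratio difference image X = log(X1 / X2).
    eps guards against zero-valued SAR amplitudes (our addition)."""
    x1 = np.asarray(x1, dtype=float)
    x2 = np.asarray(x2, dtype=float)
    return np.log((x1 + eps) / (x2 + eps))
```

The log-ratio maps multiplicative speckle into additive noise, which is why it is preferred over plain differencing for SAR imagery.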
However, there is no obvious discrimination between \(\omega_{c}\) and \(\omega_{u}\) in the histograms of the two difference images. After noise removal using the Gamma-MAP filter with a 7*7 pixel window, the discrimination between \(\omega_{c}\) and \(\omega_{u}\) improves greatly. Therefore, we can conclude that speckle noise is a key factor influencing change detection in multitemporal SAR images. The two difference images obtained by the log-ratioing operator are shown in Figure 4.
To demonstrate the advantage of the proposed approach over a single change detection algorithm, a series of algorithms is employed: circular segmentation, Otsu, KSW, and KI (Kittler and Illingworth). Circular segmentation is a simple image processing approach using a single histogram threshold. Otsu is a histogram segmentation method based on maximization of the between-class variance. KSW is an automatic threshold selection method: Shannon entropy is introduced, and the threshold is chosen to maximize the distribution information of target and background. The KI algorithm is based on the minimal-error criterion from Bayesian estimation theory; it assumes that \(\omega_{c}\) and \(\omega_{u}\) follow a certain distribution model, such as a Gaussian model. Four different change maps are obtained with the corresponding algorithms; the thresholds \(T_{i}\) and the results are shown in Table 1 and Figure 5(a) \(\sim\) Figure 5(d), respectively.
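Of the thresholding algorithms above, Otsu's method is easy to sketch from its definition (pick the threshold that maximizes the between-class variance of the histogram); this is a generic illustration, not the code used in the experiments:

```python
import numpy as np

def otsu_threshold(image, bins=256):
    """Otsu's method: choose the threshold maximizing the
    between-class variance of the gray-level histogram."""
    hist, edges = np.histogram(image, bins=bins)
    p = hist.astype(float) / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2
    best_t, best_var = centers[0], -1.0
    for k in range(1, bins):
        w0, w1 = p[:k].sum(), p[k:].sum()      # class probabilities
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (p[:k] * centers[:k]).sum() / w0  # class means
        mu1 = (p[k:] * centers[k:]).sum() / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, centers[k]
    return best_t
```

Pixels of the difference image above the returned threshold would be assigned to \(\omega_{c}\), the rest to \(\omega_{u}\).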
The objective of this experiment is to assess the effectiveness of the proposed membership degree function based on the spatial-contextual model. From each change map \(A_{i}\), the membership degree of an objective pixel is obtained using formula (3); the results are shown in Figure 4. Finally, the final change map is obtained by applying the entropy fuzzy degree technique (formula (4)) to fuse the series of membership degree maps; the results are shown in Figure 5(e).
\begin{tabular}{|l|l|l|l|l|} \hline Threshold selection algorithm & \multicolumn{2}{l|}{Wuhan} & \multicolumn{2}{l|}{Poyang Lake} \\ \cline{2-5} & threshold & change ratio & threshold & change ratio \\ \hline circular segmentation & 143 & 16.7\% & 142 & 10.3\% \\ \hline Otsu & 159 & 11.3\% & 150 & 9.7\% \\ \hline KSW & 175 & 10.8\% & 141 & 10.4\% \\ \hline KI & 193 & 9.7\% & 93 & 17.5\% \\ \hline FDF (fuzzy decision fusion) & & 12.1\% & & 12\% \\ \hline \end{tabular}
Tab.1 Segmentation results of series of algorithms
## 5 Conclusion
In this research, a novel automatic approach to unsupervised change detection in multitemporal SAR images is proposed. The presented approach is based on fuzzy set theory and fuses an ensemble of change maps. A series of algorithms is used to obtain different change maps, which serve as the input data sets of the fuzzy logic fusion algorithm. The approach generates the final change map by taking into account the spatial-contextual information contained in the ensemble of change maps. The proposed approach presents two important advantages over any single change detection algorithm of the ensemble. Firstly, it provides a well-founded methodological framework for automatic analysis of an ensemble of change maps and yields more robust results than a single change map of the ensemble. Secondly, spatial-contextual information is exploited in the fusion algorithm.
Experimental results reported in this paper show the effectiveness of the proposed approach. An important characteristic of the proposed approach is that it does not need any a priori knowledge of changed and unchanged pixels in the different images; therefore, it can also be applied to other multitemporal SAR images. However, the proposed technique is not robust to noise. Speckle noise is characteristic of SAR images, so reducing its influence to obtain a more accurate change map will be our future work.
## References
* [PERSON] (1989) [PERSON], 1989, Digital Change Detection Techniques Using Remotely Sensed Data. International Journal of Remote Sensing 10(6), 989-1003
* [PERSON] (2000) [PERSON], 2000, Markovian Fusion Approach to Robust Unsupervised Change Detection in Remotely Sensed Imagery. IEEE Transactionson Geoscience and Remote Sensing, 38 (3).
* [PERSON] An Unsupervised Approach Based on the Generalized Gaussian Model to Automatic Change Detection in Multitemporal SAR Images.
* [PERSON] (2007) [PERSON], 2007, Unsupervised change detection from multichannel SAR images, IEEE Geoscience and Remote Sensing Letters, 4(2), pp. 278-282.
* [PERSON] (2006) [PERSON], 2006, Decision Fusion for the Classification of Urban Remote Sensing Images. IEEE Transactions on Geoscience and Remote Sensing, 44(10), October.
* [PERSON] (2006) [PERSON], 2006, Classification of remote sensing images from urban areas using a fuzzy possibilistic model, IEEE Geosci. Remote Sens. Lett., 3(1), pp. 40-44.
## Acknowledgements
This work was supported by the National 863 Plan (2006A12Z136), the Doctoral Degree Program Fund (20060486041), and the Wuhan Construction Committee (200708).
| isprs | Fast unsupervised deep fusion network for change detection of multitemporal SAR images | Huan Chen, Licheng Jiao, Miaomiao Liang, Fang Liu, Shuyuan Yang, Biao Hou | https://doi.org/10.1016/j.neucom.2018.11.077 | 2019 | CC-BY | isprs/055ca86f_bf7f_457e_8eb4_4fa7ddc4c290.md |
# Robust Metric based Anomaly Detection in Kernel Feature Space
[PERSON]
1 School of Computer Science, Wuhan University
2 The State Key Laboratory of Information Engineering in
Surveying, Mapping, and Remote Sensing
Wuhan University, P.R. China.
[PERSON]
2 The State Key Laboratory of Information Engineering in
Surveying, Mapping, and Remote Sensing
Wuhan University, P.R. China.
[PERSON]
2 The State Key Laboratory of Information Engineering in
Surveying, Mapping, and Remote Sensing
Wuhan University, P.R. China.
###### Abstract
This paper analyzes an anomaly measurement metric in a high-dimensional feature space, in which the Gaussian assumption underlying state-of-the-art Mahalanobis-distance algorithms is supposed to be reasonable. The detector is realized in the high-dimensional feature space via the kernel trick. Moreover, the masking and swamping effects are further inhibited by an iterative approach in the feature space. The proposed robust metric based anomaly detection presents promising performance on hyperspectral remote sensing images: the separability between anomalies and background is enlarged, and the background statistics are more concentrated and immune to contamination by anomalies.
**Keywords**: anomaly detection, hyperspectral images, Mahalanobis distance
## Introduction
Anomaly targets in hyperspectral images (HSI) are those deviating obviously from the other, background pixels, especially in terms of their spectral features [1]. Typical examples are man-made objects in natural scenes, such as vehicles in a grass field. State-of-the-art methods mainly evaluate anomalousness via the distance of an observed pixel from the center of the background statistics, so the key is the background statistics, or the anomaly metric. RX and its variants use a Mahalanobis distance from the background statistics [2]. In spite of their effectiveness, they are proved to be susceptible to the masking and swamping effects, due to contaminated background statistics [3]. Multivariate outlier detection methods, which aim to alleviate this effect, compute a more robust metric by eliminating probable background pixels or by a contracting iteration procedure that obtains a new covariance matrix [3; 4]. Traditional ways include the iterative exclusion algorithm, with each iteration excluding the most anomalous samples until the remaining samples no longer change. The metric computed from the remaining samples is then believed to be immune to the anomalies, i.e., a robust one. However, robust metric anomaly detection methods do not take into consideration the nonlinear relationship between different bands of hyperspectral images, and the Gaussian assumption of the target-present and target-absent hypotheses may not be valid either [5]. Besides, mixed pixels are very common on the boundary between background and anomaly, as well as when the size of an anomaly is smaller than the image's spatial resolution; the situation becomes much worse when pixels are seriously (nonlinearly) mixed. Kernel based anomaly detectors, such as kernel-RX, have been developed to solve the above problems, but their metric in the high-dimensional feature space is not robust, since anomaly pixels may be contained in the background Gram matrix.
This paper proposes a new anomaly detector that exploits a robust metric in the kernelized feature space. The idea is shown in Fig. 1: vectored pixels in the original feature space may not fit a Gaussian distribution, but this is the case in some high-dimensional feature space, as in the middle picture, with the contour corresponding to the Mahalanobis metric. Only with a metric that excludes the anomaly pixels can the real anomalous degree of the anomalies be revealed, as shown in the last picture.
### Robust Anomalous Metric in High Dimension Feature Space
Traditional target detection methods exploit the linear separability between target and background signals [6]. Classic approaches include the subspace model and the linear mixture model [6]. Due to the lack of prior information on targets, anomaly detection methods depend on the measurement metric from background pixels, where the background dataset is usually contaminated since it typically consists of all the pixels in the image [7]. Robust Mahalanobis distance based methods construct an iterative procedure [4]. In each iteration, the first- and second-order statistics are computed to figure out the anomalous distance; pixels with a distance larger than a predefined threshold are then excluded, and the statistics are updated from the remaining dataset. The iteration runs until the remaining dataset no longer changes. The underlying idea is that the contamination by anomalies can be gradually eliminated by this dataset-shrinking procedure, and the anomalies can be detected at the same time.

Figure 1: The schematic flow sheet of our method. The black dots represent the background pixels, and the red ones the anomaly targets.
Hyperspectral images contain a large number of spectral bands. Though state-of-the-art methods prove promising in separating anomalies from backgrounds, the nonlinear correlations between different spectral bands are not considered. Different materials present spectral absorption at different positions, so nonlinear correlations are inevitable. Kernel based anomaly detection methods have achieved great success; a typical one is kernel-RX. Another factor that needs further investigation is that the mixing of each pixel is much more complex than the linear mixture model. As the spatial resolution is limited, intimate mixture, instead of linear mixture, is more widespread and reasonable [8]. In intimate mixtures, the photons interact with all the materials simultaneously; in linear mixtures, the assumption is that photons scatter off one material at a time. Since intimate mixtures have multiple different materials in close relation to one another, the photons can bounce from one particle to another, causing different absorption and reflection effects. The result is mixing that cannot be well captured by simple linear models [8]. Inspired by kernel-RX and robust anomaly detection methods, we propose the new robust detector, with the detailed steps presented as follows:
Step 1: Since the gram matrix is usually N\(\times\)N with N being the number of background pixels, it is not possible to consider all the pixels at one time otherwise it would exceed the computing capacity very easily. So a k-means clustering method is employed to segment the dataset into k classes.
Step 2: For each clustered class, all the pixels are projected into the high-dimensional feature space, \(x\rightarrow\boldsymbol{\phi}(x)\), constituting a new dataset \(D\). It is assumed that the projected data approximately follow a Gaussian distribution in this feature space.

Step 3: The statistics of the projected pixels in \(D\) are computed, including the mean \(m\) and covariance \(C\).

Step 4: The Mahalanobis distance of each projected pixel from the mean \(m\) is computed via the kernel trick.

Step 5: Eliminate the pixels with a distance larger than a predefined threshold to update \(D\). An adaptive thresholding approach can be used; for simplicity, a percentage of the number of samples is used, e.g., the largest 1% of samples can be excluded.
Step 6: Iterate Steps 3 to 5 until the dataset \(D\) no longer changes.
Step 7: After all the clustered classes are performed, the pixels excluded in each clustered class are labeled as the anomaly targets.
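The iterative procedure of Steps 3-6 can be sketched in the original input space (i.e., without the kernel trick, for brevity); the exclusion fraction, regularization constant, and function name below are our own choices, not the authors' implementation:

```python
import numpy as np

def robust_mahalanobis_anomalies(X, frac=0.01, max_iter=5):
    """Iterative-exclusion sketch of Steps 3-6 in the input space:
    repeatedly drop the most anomalous fraction of samples and refit
    mean/covariance, then score all pixels with the robust metric."""
    D = X.copy()
    for _ in range(max_iter):
        mu = D.mean(axis=0)
        cov = np.cov(D, rowvar=False) + 1e-6 * np.eye(X.shape[1])
        inv = np.linalg.inv(cov)
        diff = D - mu
        d2 = np.einsum("ij,jk,ik->i", diff, inv, diff)  # squared distances
        keep = d2 <= np.quantile(d2, 1.0 - frac)        # drop largest frac
        if keep.all():
            break
        D = D[keep]
    # final robust metric, scored on all pixels
    inv = np.linalg.inv(np.cov(D, rowvar=False) + 1e-6 * np.eye(X.shape[1]))
    diff = X - D.mean(axis=0)
    return np.einsum("ij,jk,ik->i", diff, inv, diff)
```

In the full method, the distance computation of Step 4 would be carried out on \(\boldsymbol{\phi}(x)\) via kernel evaluations rather than on the raw spectra.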
## Experiments and analysis
Experiments on real-world hyperspectral images, including airborne hyperspectral remote sensing images and near-scene hyperspectral images, have been conducted to evaluate the proposed method. Five rows of panels are distributed in the scene and considered as anomalies, as shown in Fig. 2. Several state-of-the-art methods are used for comparison. Our method iterates 5 times until the results become stable. Preliminary experimental results with AVIRIS hyperspectral images are shown in Fig. 3. It is obvious that our proposed method performed best among all the methods. Considering the mixed boundary anomaly pixels, the performances of the different methods for different kinds of anomalies are also presented in Table 1, which further reveals that the improvement of our method is partly due to its superior performance on the transition boundary pixels.
## Conclusion
Combining robust Mahalanobis anomaly detection methods and nonlinear mixture models, a robust metric based anomaly detection method in kernel feature space is proposed. Experiments reveal that the proposed method provides a more reliable and robust metric for anomaly detection in hyperspectral remote sensing images, especially for anomalies at the sub-pixel (unresolved) level.
## References
* 1589, 2011.
* [2] [PERSON] and [PERSON], \"Adaptive multiple-band CFAR detection of an optical pattern with unknown spectral distribution,\" _IEEE Trans. Acoust., Speech, Signal Process_, vol. 38, no. 10, pp.1760-1770, 1990.
* [3] [PERSON] and [PERSON], \"A Comparison of Multivariate Outlier Detection Methods for Finding Hyperspectral Anomalies,\" _Military Operations Research_, vol. 13, no. 4, pp. 19-44, 2008.
* [4] [PERSON], \"Hyperspectral Improved Anomaly Detection and Signature Matching Methods,\" _Ph.D. dissertation_, Air Univ., Ohio, 2007.
* [5] [PERSON] and [PERSON], \"Kernel RX-Algorithm: A Nonlinear Anomaly Detector for Hyperspectral Imagery,\" _IEEE Transactions on Geoscience and Remote Sensing_, vol. 43, no. 2, pp. 388-397, 2005.
* [6] [PERSON] and [PERSON], \"Detection Algorithms for Hyperspectral Imaging Applications,\" _IEEE Signal Processing Mag._, vol. 19, no. 1, pp. 29-43, Jan. 2002.
* [7] [PERSON] and [PERSON], \"Anomaly Detection and Classification for Hyperspectral Imagery,\" _IEEE Trans. Geosci. Remote Sens._, vol. 40, no. 6, pp. 1314-1325, 2002.
* [8] [PERSON], [PERSON], [PERSON], and [PERSON], \"Nonlinear neural network mixture models for fractional abundance estimation in AVIRIS hyperspectral images,\" in _Proc. XIII JPL Airborne Earth Science Workshop_, Pasadena, CA, 2004.
[9] [PERSON] and [PERSON], \"A comparison of kernel functions for intimate mixture models,\" in _Proc. of IEEE WHISPERS '09_, Grenoble, France, Aug. 2009, pp. 1-4.
| isprs | Robust Metric based Anomaly Detection in Kernel Feature Space | B. Du, L. Zhang, H. Xin | https://doi.org/10.5194/isprsarchives-xxxix-b7-113-2012 | 2012 | CC-BY | isprs/5ee1e6e2_c1af_44f7_b0b6_1f9bc6a9cc1e.md |
down to a dozen meters in diameter. That might provide unprecedented precision for age estimation of the surface of the Moon. However, the high resolution and amount of data also increases the time required to identify these geological features and suggests the development of an automated counting system.
In recent years, a number of computer vision techniques have been developed to address the task of automated crater counting. They can be split into several groups. One group is based on edges and the Hough Transform, as in ([PERSON] et al., 2006), who propose a multistage approach based on edge detection and the fuzzy Hough Transform that uses multiple heuristic methods to detect impact craters; a crater detection rate of 80% compared to manual detection is obtained by ([PERSON] et al., 2006). Another group comprises neural network based approaches, as in ([PERSON] et al., 2016), who propose a CNN (Convolutional Neural Network) technique. In contrast to other approaches, CNNs do not rely on handcrafted features; instead they learn optimal filters and features from training examples (for an overview see, e.g., [PERSON] et al., 2003). A drawback of the CNN approach is the need for a large number of training examples, which must be extracted and labelled, as well as long training times. Recently, methods based not on images but on topographic maps have been developed for crater counting ([PERSON] et al., 2013). In contrast to image based methods, they are not affected by illumination, visual surface properties, or atmospheric conditions.
In this work, we have applied the image-based template matching crater detection algorithm developed by ([PERSON] and [PERSON], 2013) which depends on a few parameters, such as a threshold of similarity between the impact crater template and the actual image.
## 3 Crater Detection via Template Matching
Template matching is a technique that finds areas within an image that are similar to a template image, where usually the sliding window approach is employed. The window starts sliding from the initial position and is shifted by a given increment. For every step, a similarity measure is calculated. Common measures include cross-correlation and the sum-of-squared differences between template and image. See, e.g., ([PERSON], 2009) for an overview.
To detect craters within our testing area, the template matching algorithm of ([PERSON] and [PERSON], 2013) is applied. It uses six templates which represent six different 3D models of small craters. They were constructed using laser altimeter tracks (LOLA) ([PERSON] et al., 2010), that contain full cross sections of satellite craters of the lunar crater Plato. Each crater is split in its centre point to extract two different profiles. Then these curvature models were rotated symmetrically around the crater centre to generate two-dimensional surfaces in 3D space. From these surfaces, crater templates could be obtained by applying the Hapke model ([PERSON], 1984, 2002) to render crater template images based on the known directions towards the sun and the viewer. The obtained templates are grey scale images as shown in Figure 1, which are then re-scaled to match a given crater diameter with respect to the image. The similarity between the template and the image is computed by cross-correlation.
This procedure is repeated for a specified range of diameters, the generated template is applied to the original image window by window, and the cross-correlation coefficient is computed for every position on the image. When the cross-correlation exceeds a given threshold, the crater is added to the list of candidates. This method yields a list of observed craters and their positions and diameters in pixel units. The final detection step consists of a removal of multiple detections of the same crater at slightly different locations with slightly different diameters ([PERSON] et al., 2016).
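The sliding-window cross-correlation described above can be sketched as follows (a brute-force NumPy version for clarity, not the authors' code; production implementations typically use FFT-based correlation):

```python
import numpy as np

def ncc_map(image, template):
    """Normalized cross-correlation of a template against every valid
    window of the image; values near 1 mark crater-like matches."""
    th, tw = template.shape
    t = template - template.mean()
    tn = np.sqrt((t ** 2).sum())
    H, W = image.shape
    out = np.zeros((H - th + 1, W - tw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            w = image[i:i + th, j:j + tw]
            wc = w - w.mean()
            denom = tn * np.sqrt((wc ** 2).sum())
            out[i, j] = (t * wc).sum() / denom if denom > 0 else 0.0
    return out

def detect(image, template, threshold=0.65):
    """Return (row, col) window positions where NCC exceeds the threshold."""
    c = ncc_map(image, template)
    return list(zip(*np.where(c > threshold)))
```

In the full pipeline this is run once per template and per candidate diameter, followed by duplicate removal over the candidate list.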
Choosing the optimal threshold value is important because a low threshold would result in a large number of false detections and a high one would result in missing craters. The choice of threshold values depends basically on factors such as surface, illumination and observation angle of the image.
One reliable way to calibrate this threshold is to use an area with manually counted craters. The area is split into three parts. A threshold for each area is estimated so as to closely match the corresponding manual count. The result consists of three slightly different thresholds, whose mean value can be chosen as an \"optimal\" threshold value for this area.
## 4 Secondary Craters
For decades, planetary research studies have regarded the occurrence of craters on solid surface bodies as traces of direct (primary) impacts. In contrast, secondary craters were formed by impacts of pieces of material ejected during much larger primary impacts ([PERSON] et al., 2005).
Early observations of the lunar surface noted that the rays of large primary craters have a significantly higher density of small craters than the neighbouring surface. The crater size-frequency distribution (CSFD), describing the quantity of craters found within a given diameter interval, increases with decreasing size ([PERSON] and [PERSON], 2006).
One of the first scientists who considered these craters as secondary craters was ([PERSON], 1965). He made the prediction that remote secondary craters should predominate at crater diameters smaller than about 200 m. Secondary craters usually formed in close proximity to the primary crater and appear in characteristic chains or clusters. It has been assumed by many researchers that small craters on the terrestrial planets, except those located close to primary craters, are predominantly
primary craters which can be used for surface dating ([PERSON] et al., 2005).

Figure 1: Set of six rendered crater templates used for template matching, given typical illumination conditions.
However, studies have shown that one primary impact can create more than 100 secondary craters and that the distance from the source crater may extend more than 1000 km ([PERSON] et al., 2001). These secondary craters thus represent a significant part of the spatially random population. In contrast to primary craters, diameters of secondary craters are usually limited by the diameter of their parent craters. The upper limit of diameters for secondary craters is typically around 5% of the parent crater, as assumed by ([PERSON] et al., 2001, 2005). The secondary crater population predicted by these studies is proportional to the primary population observed in the small diameter range ([PERSON] and [PERSON], 2006).
## 5 Surface Age Estimation
Age dating by crater counting is based upon the assumption that a new surface forms without impact craters. Over time, it is bombarded by asteroids and comets, which results in an increase of the impact crater population.
The age of a surface can be estimated by assessing the distribution of craters and fitting estimated crater size-frequency distribution (CSFD) to a so-called production function, which depends on the absolute age of the surface ([PERSON], 1983). A function to describe the total number of craters with diameters that exceed a given diameter D per unit of area has been introduced by ([PERSON], 1983), which was formulated from CSFD data extracted from different lunar areas of various ages according to
\[N(D)=\log_{10}N_{\rm cum}=\sum_{i=0}^{11}a_{i}\ x^{i},\ \ \ x=\log_{10}D \tag{1}\]
In Equation (1), \(N_{\rm cum}\) is the cumulative crater frequency, i.e., the number of craters per km\({}^{2}\) with diameters exceeding \(D\), and the \(a_{i}\) are the coefficients for age estimation on the lunar surface. The logarithm of \(N_{\rm cum}\) is thus given in polynomial form ([PERSON], 1983).
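Equation (1) evaluates to a crater density once the coefficients are known; the sketch below takes the coefficient vector as an input, and the two-term coefficients used in the test are purely illustrative, not the published production-function values:

```python
import numpy as np

def cumulative_crater_frequency(D_km, coeffs):
    """Equation (1): log10(N_cum) = sum_i a_i * x**i with x = log10(D).
    Returns N_cum, the number of craters per km^2 with diameter > D.
    The coefficients must come from the production function
    (e.g., Neukum, 1983); none are hard-coded here."""
    x = np.log10(D_km)
    log_n = sum(a * x ** i for i, a in enumerate(coeffs))
    return 10.0 ** log_n
```

With illustrative coefficients [0, -2], the formula reduces to N(D) = D^-2, i.e., 0.01 craters per km^2 above 10 km.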
## 6 Method used for Secondary Candidate Detection
In this paper, a technique for distinguishing secondary craters is proposed, by which they can be identified from the surrounding primary population of similar diameter based on their spatial distribution. Our testing area is the Tsiolkovsky crater, shown in Figure 2, located at 20\({}^{\circ}\)S and 129\({}^{\circ}\)E on the farside of the Moon. It has a diameter of about 180 km. The crater Tsiolkovsky is partially filled by lava and has a dark and smooth floor.
According to previous studies, ages of the crater Tsiolkovsky of approximately 3.8 Ga ([PERSON] and [PERSON], 1982) and 3.5 Ga ([PERSON], 1988) have been found. A recent age estimation has been performed by ([PERSON] et al., 2015), who obtained 3.19 (+0.08 / \(-\)0.12) Ga. In this work we use images acquired by the Terrain Camera (TC) of the lunar spacecraft Kaguya ([PERSON] et al., 2008). The resolution of the images is 7.4 m/pixel.
The determination of secondary crater candidates is a challenging problem. In most studies, secondary craters are removed manually. One of the previous attempts to develop a method for estimating the secondary crater population was made by ([PERSON] et al., 2005).
To develop an automatic approach to the detection of secondary craters, only well recognised criteria by which secondary craters can be distinguished should be applied. A recent approach of ([PERSON] et al., 2012) is based on the distribution of distances between the craters, in particular the mean 2\({}^{\rm nd}\)-closest neighbour distance. They concluded that the crater density obtained from an ideal random distribution is related to this statistical distance value.
The main idea of our Secondary Candidate Detection (SCD) algorithm is to detect secondary craters based on deviations of their spatial distribution from the uniform distribution of the surrounding primary craters (which is similar to [PERSON] et al., 2012, but using a different criterion for detecting crater clusters). Hence, the SCD algorithm determines whether the crater population is uniformly distributed or clustered, which allows for removing the secondary crater candidates from a crater population that is used to estimate the age of a surface part.
The first step is to obtain craters using the template matching results of the given region. Primary craters are created from a random distribution of small bodies hitting the surface. Their distribution should appear as uniform and homogeneous. Usually, secondary craters appear as high-density regions. To separate the secondary crater population from the background population, we developed an algorithm that removes secondary crater candidates in any spatial distribution. This method combines a Voronoi tessellation, a Monte Carlo simulation of a uniform distribution, and a one-tailed test of clustering which divides the detected craters into two groups based on the probability that they exhibit a non-uniform spatial distribution. A Monte Carlo method combined with a hierarchical clustering algorithm has been used by ([PERSON] et al., 2005), Voronoi tessellation has been used to detect non-uniformly distributed craters in the work of ([PERSON] et al., 2014).
The SCD algorithm recognizes distal secondary craters from the surrounding primary population based on their clustering with respect to an ideal random distribution. Similar to ([PERSON] et al., 2005), our algorithm generates a new uniformly distributed population which has the same density as the detected craters. Then Voronoi tessellation is applied to each population, and distribution parameters are calculated for each iteration of the simulation process.
The Voronoi diagram (Voronoi tessellation) is a method of subdividing a plane into regions that comprise the part of the plane around each point within a distance shorter than the distances to the neighbouring points ([PERSON], 1991). Clustering can be inferred from deviations of the local spatial density from the mean spatial density, indicated by variations of the Voronoi cell area. A problem with this approach is that the statistical distribution of the Voronoi cell area for a uniform distribution cannot be derived from the observed spatial crater distribution, due to its possible contamination by clustered secondary craters; it should therefore be inferred from simulations ([PERSON], 2003).
The distribution of points is usually registered within a bounded observation area. This introduces edge effects in the polygons close to the boundary of the observation area; to avoid these undesirable effects, boundary polygons have to be ignored. A Voronoi diagram partitions the area around each point, representing the relative distance from that point to all its closest neighbours.
In each iteration, craters are uniformly redistributed on the surface. A new Voronoi tessellation is computed along with the areas of Voronoi polygons. After simulation of \(n\) iterations, a distribution model of Voronoi cell areas is obtained and the median and standard derivation is computed.
The clustering values, i.e., the areas of the Voronoi polygons of the original crater distribution, are compared to the threshold value obtained from the Monte Carlo simulations of random impacts. Our algorithm detects a crater as a secondary crater candidate if its Voronoi cell area is below the threshold value, which resembles a one-tailed test of clustering.
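The Voronoi-plus-Monte-Carlo test described above can be sketched as follows. This is a minimal illustration and not the authors' implementation: the number of iterations, the quantile level `alpha`, and the rectangular observation area `extent` are assumptions.

```python
import numpy as np
from scipy.spatial import Voronoi

def voronoi_cell_areas(points):
    """Area of each bounded Voronoi cell; np.nan for unbounded boundary cells."""
    vor = Voronoi(points)
    areas = np.full(len(points), np.nan)
    for i, region_idx in enumerate(vor.point_region):
        region = vor.regions[region_idx]
        if -1 in region or len(region) == 0:
            continue  # unbounded cell at the edge of the observation area
        poly = vor.vertices[region]
        x, y = poly[:, 0], poly[:, 1]
        # shoelace formula for the polygon area
        areas[i] = 0.5 * abs(np.dot(x, np.roll(y, 1)) - np.dot(y, np.roll(x, 1)))
    return areas

def secondary_candidates(craters, extent, n_iter=200, alpha=0.05, rng=None):
    """One-tailed clustering test: flag craters whose Voronoi cell area falls
    below the alpha-quantile of cell areas from uniform Monte Carlo populations
    of the same size on the same (assumed rectangular) area."""
    rng = np.random.default_rng(rng)
    sim_areas = []
    for _ in range(n_iter):
        sim = rng.uniform([0.0, 0.0], extent, size=craters.shape)
        a = voronoi_cell_areas(sim)
        sim_areas.append(a[~np.isnan(a)])
    threshold = np.quantile(np.concatenate(sim_areas), alpha)
    obs = voronoi_cell_areas(craters)
    # boundary cells (nan) are never flagged
    return np.nan_to_num(obs, nan=np.inf) < threshold
```

A cluster of points produces small Voronoi cells, so its members fall below the simulated threshold and are flagged as secondary candidates, while the uniform background is flagged only at roughly the alpha rate.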
We have also applied the SCD algorithm such that the statistical analysis is applied to several diameter intervals separately. This is expected to provide a better detection performance as the statistical spatial distribution of craters depends on their size. In this work, we have used 8 intervals with limiting diameters of 80, 170, 260, 350, 440, 530, 620, 710 and 800 m.
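Applying the test separately per diameter interval first requires assigning each crater to one of the bins. A small helper using the interval limits given above might look like this (the handling of out-of-range diameters is an assumption):

```python
import numpy as np

# Limiting diameters of the 8 intervals used in this work (in metres).
BIN_EDGES = np.array([80, 170, 260, 350, 440, 530, 620, 710, 800])

def split_into_bins(diameters):
    """Return the interval index (0..7) for each crater diameter;
    -1 marks craters outside the 80-800 m range."""
    d = np.asarray(diameters, dtype=float)
    idx = np.digitize(d, BIN_EDGES) - 1
    idx[(d < BIN_EDGES[0]) | (d >= BIN_EDGES[-1])] = -1
    return idx
```

The clustering test can then be run on each bin's craters independently, so the detection threshold adapts to the size-dependent spatial density.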
## 7 Results and conclusions for the reference area
The SCD algorithm has then been applied to different regions of the crater Tsiolkovsky in order to analyse the impact of the secondary craters on the crater density as well as the difference between the estimated ages obtained for the CSFDs with and without secondary craters. As a first step, the template-based crater detection algorithm has been applied to a small area in the middle of Tsiolkovsky crater for which manual crater counts are available as reference data ([PERSON] et al., 2015). This reference area covers 99.867 km\({}^{2}\) ([PERSON] et al., 2015) and contains a total number of 1967 craters, ranging from 23 m to 905 m in diameter. Figure 2 presents the resulting age which has been calculated for craters in the range of 128 to 1000 m for this small test area.
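The AMA values discussed here follow from inverting a lunar crater chronology function, which relates the cumulative density of craters with diameters of at least 1 km to surface age. A sketch using the standard coefficients from the 1983 habilitation thesis cited in the references; whether the authors use exactly this parameterisation is an assumption:

```python
import numpy as np
from scipy.optimize import brentq

def n1_of_age(t_ga):
    """Cumulative crater density N(>= 1 km) per km^2 for a surface of age
    t_ga (in Ga), using the standard lunar chronology coefficients."""
    return 5.44e-14 * (np.exp(6.93 * t_ga) - 1.0) + 8.38e-4 * t_ga

def ama_from_n1(n1):
    """Absolute model age in Ga obtained by inverting the chronology function
    numerically (it is strictly increasing on [0, 4.5] Ga)."""
    return brentq(lambda t: n1_of_age(t) - n1, 0.0, 4.5)
```

The exponential term makes the function very steep for ages above about 3.5 Ga, which is why small changes in crater density near that range translate into small age differences.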
Crater detection has been performed by applying the previously described template matching algorithm to the given region. The range of 128-1000 m is used for the template matching algorithm, as craters with smaller diameters would result in a large number of false positive detections. Previously, an optimal cross-correlation threshold of 0.6568 had been determined for this area by ([PERSON] et al., 2016). Applying template matching to the reference area with this threshold resulted in an age estimate of 3.35 Ga. This previous threshold was determined based on automatic crater detection results without any consideration of changes due to automatic removal of secondary crater candidates.
A new template matching threshold value of 0.6525 has been computed by applying the template-based automatic detection and minimizing the age difference between the age obtained from the reference data and the age resulting from template matching combined with secondary crater removal. The resulting threshold differs only slightly from the original one.
To obtain the best clustering threshold, we redistributed the detected craters 1000 times, and the Voronoi tessellation and Voronoi areas for every crater were computed for each iteration. The results of applying our algorithm on the test area are shown in Figure 3.
To illustrate the effect of the clustering threshold value, we have applied our algorithm with different threshold values to the detected craters. Lower threshold values detect only a very small number of secondary crater candidates, and their impact on the age estimate is insignificant.
Both SCD algorithms (with and without application in bins) agree well on the number of craters with diameters below 150 m, while in the range of 150-180 m the number of automatically detected craters is significantly lower without application in bins. The binned SCD algorithm detects more craters in the larger diameter range, which would probably be considered primary craters by a human expert (Figure 4). Splitting craters into bins by diameter increases the detection of secondary craters, although craters with diameters exceeding 500 m show a more irregular distribution resulting from their relatively small number. Hence, the diameter intervals with centre diameters exceeding 500 m were excluded from the AMA estimation applying the binned SCD algorithm (Figure 5).

Figure 3: Visualized results of applying the SCD algorithm to Kaguya TC image data of the floor of lunar crater Tsiolkovsky. Red areas correspond to detected secondary crater candidates, green areas to detected primary crater candidates.

Figure 2: AMA obtained based on the craters in the reference region in the diameter range 128–1000 m.
The final AMA results are displayed in Figure 5. The binned SCD algorithm shows an age estimate which is closer to the reference value than the SCD algorithm applied to all craters.
## 8 Results for non-reference areas
For further evaluation of our method, three larger areas on the crater floor of Tsiolkovsky have been analysed with our algorithm as shown in Figure 6. Each selected region has an area of around 2700 km\({}^{2}\) (52 km by 52 km). Unfortunately, there are no reference data for these regions except the small part manually counted by ([PERSON] et al., 2015). This part belongs to region A; consequently, the age estimates should be similar. Because these regions represent parts of the mare-flooded crater floor of Tsiolkovsky, the previously determined optimal threshold value was applied. We used a 600 by 600 pixel window for constructing crater density and age maps with a step width of 10 pixels.
Three maps are plotted in Figure 7, representing the densities of detected craters before and after removal of secondary candidates. There are visible fluctuations of the crater density in the maps obtained without the SCD algorithm. Applying the SCD algorithm reduces the crater density fluctuations, especially for the binned version.
Although the SCD algorithm without bins has a relatively strong effect on the crater density, it only has a weak effect on the age estimate (around 0.02 Ga), while removing more than 10% of the craters. The estimated ages of the regions range from 3.29 to 3.44 Ga without SCD and from 3.14 to 3.37 Ga with binned SCD (Table 1).
Figure 4: Craters detected in Kaguya TC image data by the SCD algorithm (top) and the binned SCD algorithm (bottom).
Figure 5: AMA estimation for the reference area, obtained using the template matching algorithm without SCD (top), with the SCD algorithm without bins (middle) and with the binned SCD algorithm (bottom).
## 9 Age map of Tsiolkovsky
By using the template matching algorithm and the threshold value of 0.6568, the following age map was obtained for the floor of Tsiolkovsky as shown in Figure 8. A 900 by 900 pixel window was used for constructing all age maps with a step width of 10 pixels. The area surrounding crater Tsiolkovsky consists of rough highland surface that cannot be taken into account because the template matching threshold has been optimized for the flat basaltic lava surface of Tsiolkovsky's floor.
Although the Tsiolkovsky floor region looks homogeneous, patches with a much higher age are clearly visible in Figure 8. Their estimated age is around 3.7 Ga, which is significantly
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline Method & A & B & C \\ \hline template & 3.297\(\pm\)0.002 & 3.446\(\pm\)0.003 & 3.371\(\pm\)0.001 \\ \hline with SCD & 3.271\(\pm\)0.002 & 3.435\(\pm\)0.0004 & 3.353\(\pm\)0.001 \\ \hline with binned SCD & 3.149\(\pm\)0.004 & 3.37\(\pm\)0.0002 & 3.249\(\pm\)0.001 \\ \hline \end{tabular}
\end{table}
Table 1: Age estimates (in Ga) for test regions A, B and C.
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline  & A & B & C \\ \hline craters detected without SCD & 17571 & 16292 & 17938 \\ \hline with SCD & 15349 (87\%) & 14341 (88\%) & 15633 (87\%) \\ \hline with binned SCD & 14441 (82\%) & 13430 (82\%) & 14651 (81\%) \\ \hline \end{tabular}
\end{table}
Table 2: Number of detected craters in test regions A, B and C.
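The retention percentages in Table 2 can be reproduced from the crater counts; they appear to be truncated rather than rounded. A quick consistency check, not part of the paper's pipeline:

```python
# Crater counts per region (A, B, C) from Table 2.
without_scd = [17571, 16292, 17938]
with_scd = [15349, 14341, 15633]
with_binned = [14441, 13430, 14651]

def retained_percent(kept, total):
    """Truncated percentage of craters retained after secondary removal."""
    return [int(100 * k / t) for k, t in zip(kept, total)]

print(retained_percent(with_scd, without_scd))     # [87, 88, 87]
print(retained_percent(with_binned, without_scd))  # [82, 82, 81]
```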
Figure 6: Test regions A, B and C in crater Tsiolkovsky overlaid on LROC WAC mosaic image ([PERSON] et al., 2011).
Figure 7: Effect of removal of secondary candidates on the crater densities of test regions A, B and C. First row: Detected craters with template matching. Second row. Crater density after applying the SCD algorithm. Third row. Crater density after applying the binned SCD algorithm.
higher than the results of age estimation based on manually counted craters ([PERSON] et al., 2015). It is to be expected that these age fluctuations occur as a result of distortion of the CSFD of these areas by secondary craters.
We have applied the template matching algorithm combined with the SCD algorithm using a local threshold derived for each image patch and using a global threshold value taken as the mean of the threshold values of regions A, B and C. The global threshold produced more consistent results.
After applying the SCD algorithm, we obtained two new age maps as shown in Figure 9 and Figure 10. The first age map (Figure 9) slightly differs from the original age map of Figure 8. There is some reduction in age within high-age regions but the overall effect on the age map is almost invisible.
In the age map of Figure 10, the binned SCD algorithm has been applied based on the same local threshold value. The map shows visible changes in the overall age. Especially the age of the regions with high ages in the original map of Figure 8 is significantly reduced. The binned SCD algorithm has a strong effect on the densely cratered areas but also some effect on the areas not affected by the SCD without bins.
All in all, application of the binned SCD algorithm results in an age map that strongly reduces the fluctuations in age of the geologically homogeneous surface of the floor of the crater Tsiolkovsky.
## 10 Conclusion and Future Work
The SCD algorithm for removing secondary crater candidates from the CSFD has been presented and applied to the floor region of the lunar crater Tsiolkovsky. It is based on the statistical analysis of the Voronoi diagram of the detected craters. In its binned version, the SCD algorithm results in an increased homogeneity of the constructed age map and eliminates local areas of significantly increased apparent age which are characterised by clustered craters. Because of the ambiguous nature of secondary craters, there is no definitive way to validate the actual origin of those craters. Due to the lack of reference data for the whole crater floor region, our algorithm could not be tested more rigorously.
Our method does not guarantee the detection of all secondary craters because secondary craters may also be distributed in an unclustered way ([PERSON] et al., 2001). This means that any algorithm that depends on the detection of unusual spatial crater distributions as a criterion for secondary craters will not be able to detect them completely. Nevertheless, the secondary crater fraction estimated with the binned SCD algorithm of between 12% and 18% is consistent with the estimated range between 5% and 25% of ([PERSON] et al., 2009) for surfaces of similar age and craters of similar diameter on Mars. Furthermore, the result of the binned SCD algorithm bears a high plausibility because it eliminates the spurious high-age anomalies which are apparent without secondary crater removal.
## References
* [PERSON], 1991. Voronoi Diagrams - A Survey of a Fundamental Geometric Data Structure. ACM Computing Surveys 23(3), pp. 345-404.
* [PERSON] et al. (2005) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON] and [PERSON], 2005. Morphology and geological structure of the western part of the Olympus Mons volcano on Mars from the analysis of the Mars Express HRSC imagery. Solar System Research, 39(2), pp. 85-101.
* [PERSON] et al. (2007)
Figure 8: Age map of the floor of crater Tsiolkovsky using the CSFD obtained by the template matching method with an optimal threshold value, without the SCD algorithm.
Figure 10: Age map of the floor of crater Tsiolkovsky using the CSFD obtained by the template matching method with an optimal threshold value in combination with the binned SCD algorithm.
Figure 9: Age map of the floor of crater Tsiolkovsky using the CSFD obtained by the template matching method with an optimal threshold value in combination with the SCD algorithm without bins.
[PERSON], [PERSON] [PERSON], [PERSON], [PERSON] and [PERSON], 2001. Pwyll secondaries and other small craters on Europa. Icarus, 153(2), pp. 264-276.
* [2][PERSON], [PERSON] and [PERSON], 2005. Secondary craters on Europa and implications for cratered surfaces. Nature, 437(7062), pp. 1125-1127.
* [3][PERSON], 2009. Template matching techniques in computer vision. Wiley Publishers.
* [4][PERSON] and 16 coauthors, 2007. Lunar Reconnaissance Orbiter Overview: The Instrument Suite and Mission. Space Sci. Rev., 129(4), pp. 391-419
* [5][PERSON], 2003. Spatial point pattern analysis by using Voronoi diagrams and Delaunay tessellations-a comparative study. Biometrical Journal, 45(3), pp. 367-376.
* [6][PERSON], [PERSON], [PERSON] and [PERSON], 2016. Crater detection via convolutional neural networks. arXiv preprint arXiv:1601.00978.
* [7][PERSON] and [PERSON], 2013, September. Generative template-based approach to the automated detection of small craters. In European Planetary Science Congress (Vol. 8).
* [8][PERSON], 1984. Bidirectional reflectance spectroscopy: 3. Correction for macroscopic roughness. Icarus, 59(1), pp. 41-59.
* [9][PERSON], 2002. Bidirectional reflectance spectroscopy: 5. The coherent backscatter opposition effect and anisotropic scattering. Icarus, 157(2), pp. 523-534
* [10][PERSON], [PERSON], [PERSON] and [PERSON], 2014, April. Detection abilities of secondary craters based on the clustering analysis and Voronoi diagram. In European Planetary Science Congress 2014, EPSC Abstracts, Vol. 9, id. EPSC2014-119 (Vol. 9).
* [11][PERSON], [PERSON] and [PERSON], 2011. Map-projection-independent crater size-frequency determination in GIS environments--New software tool for ArcGIS. Planetary and Space Science, 59(11), pp. 1243-1254.
* [12][PERSON] and [PERSON], 2006. The importance of secondary cratering to age constraints on planetary surfaces. Annu. Rev. Earth Planet. Sci., 34, pp. 535-567.
* [13][PERSON] and [PERSON], 2010. Planetary surface dating from crater size-frequency distribution measurements: Partial resurfacing events and statistical age uncertainty. Earth and Planetary Science Letters, 294(3), pp. 223-229.
* [14][PERSON], [PERSON], [PERSON], [PERSON] and [PERSON] [PERSON], 2012. Planetary surface dating from crater size-frequency distribution measurements: Spatial randomness and clustering. Icarus, 218(1), pp. 169-177.
* [15][PERSON], 1983. Meteoritenbombardement und Datierung planetarer Oberflächen (Meteorite bombardment and dating of planetary surfaces). Habilitation Dissertation for Faculty Membership, Ludwig-Maximilians-Univ.
* [16][PERSON], [PERSON] [PERSON], [PERSON], [PERSON], [PERSON] and [PERSON], 2008. Performance and scientific objectives of the SELENE (KAGUYA) Multiband Imager. Earth, planets and space, 60(4), pp. 257-264.
* [17][PERSON], [PERSON] and [PERSON], 2015. Small-scale lunar farside volcanism. Icarus, 257, pp. 336-354.
* [18][PERSON] and [PERSON], 2014. The secondary crater population of Mars. Earth and Planetary Science Letters, 400, pp. 66-76.
* [19][PERSON] and [PERSON], 2010. Method for crater detection from Martian digital topography data using gradient value/orientation, morphometry, vote analysis, slip tuning, and calibration. IEEE Transactions on Geoscience and Remote Sensing 48(5), pp. 2317-2329.
* [20][PERSON], [PERSON] [PERSON], [PERSON] [PERSON], [PERSON] [PERSON], [PERSON] [PERSON] and [PERSON] [PERSON], 2016. Mapping of planetary surface age based on crater statistics obtained by an automatic detection algorithm. International Archives of the Photogrammetry, Remote Sensing & Spatial Information Sciences, vol. XLI-B4.
* [21][PERSON] [PERSON], [PERSON] and [PERSON], 2006. Automated detection and classification of lunar craters using multiple approaches. Advances in Space Research, 37(1), pp. 21-27.
* [22][PERSON], [PERSON], [PERSON] [PERSON], 2003. Best practices for convolutional neural networks applied to visual document analysis. Proc. 12 th International Conference on Document Analysis and Recognition.
* [23][PERSON], 1965. Preliminary Analysis of the Fine Structure of the Lunar Surface in Mare Cognitum. In International Astronomical Union Colloquium, vol. 5, pp. 23-77.
* [24][PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON] and [PERSON], 2010. Initial observations from the lunar orbiter laser altimeter (LOLA). Geophysical Research Letters 37(18).
* [25][PERSON], [PERSON], [PERSON], 2011. Lunar Reconnaissance Orbiter camera global morphological map of the Moon. Lunar and Planetary Science Conference XLII, abstract #2387. Data download from http://wms.lroc.asu.edu/lroc/view_rdr/WAC_GLOBAL
* [26][PERSON], 1988. Age dating of mare in the lunar crater Tsiolkovsky by crater-counting method. Earth, Moon, and Planets, 42(3), pp. 245-264.
* [27][PERSON] and [PERSON], 1982. Analysis of crater distributions in mare units on the lunar far side. Earth, Moon, and Planets, 27(1), pp. 91-106.
* [28][PERSON], [PERSON] and [PERSON], 2009. Theoretical analysis of secondary cratering on Mars and an image-based study on the Cerberus Plains. Icarus, 200(2), pp. 406-417.
* [29][PERSON], [PERSON], [PERSON] and [PERSON], 2013. A novel method of crater detection on digital elevation models. Proc. IEEE International Geoscience and Remote Sensing Symposium, pp. 2509-2512.
isprs | AUTOMATIC DETECTION OF SECONDARY CRATERS AND MAPPING OF PLANETARY SURFACE AGE BASED ON LUNAR ORBITAL IMAGES | A. L. Salih, A. Lompart, A. Grumpe, C. Wöhler, H. Hiesinger | https://doi.org/10.5194/isprs-archives-xlii-3-w1-125-2017 | 2017 | CC-BY
# Mobile Laser Scan Data for Road Surface Damage Detection
###### Abstract
Road surface anomalies affect driving conditions, such as driving comfort and safety. Examples of such anomalies are potholes, cracks and ravelling. Automatic detection and localisation of these anomalies can be used for targeted road maintenance. Currently, road damage is detected by road inspectors who drive slowly on the road to look out for surface anomalies, which can be dangerous. To improve safety, road inspectors can evaluate road images. However, results may differ as this evaluation is subjective. In this research a method is created for detecting road damage using mobile profile laser scan data. First, features are created based on a sliding window. Then K-means clustering is used to create training data for a Random Forest algorithm. Finally, mathematical morphological operations are used to clean the data and connect the damage points. The result is an objective and detailed damage classification. The method is tested on a 120 meters long road data set that includes different types of damage. Validation is done by comparing the results to a classification of a human road inspector. However, the damage classification of the proposed method contains more detail, which makes validation difficult. Nevertheless, this method results in 79% overlap with the validation data. Although the results are already promising, developments such as pre-processing the data could lead to further improvements.
**Keywords:** Infrastructure Monitoring, Road Pavement, Potholes, Mobile Laser Scanning, Damage Detection
Footnote †: This contribution has been peer-reviewed.
[PERSON]\({}^{1}\), [PERSON]\({}^{1}\), [PERSON]\({}^{2}\)
\({}^{1}\) Department of Geoscience & Remote Sensing, Delft University of Technology, Delft, Netherlands -
[PERSON], [PERSON]
\({}^{2}\) Iv-Infra, Haarlem, Netherlands - [PERSON]S]
## 1 Introduction
Road damage detection is important for determining road safety and planning road maintenance. Damage of the road surface, such as potholes, cracks and ravelling, affects driving conditions such as driving comfort and safety and increases fuel consumption, traffic circulation and noise emission. Localisation of this damage can be used for targeted road management and maintenance, which contributes to improved driver safety and comfort ([PERSON] et al., 2014).
The traditional method for road condition surveying is that inspectors drive slowly on the road looking out for road surface damage and stop the vehicle when damage is found, measure the damage and mark it visually. This is dangerous, time-consuming and costly ([PERSON], 1998; [PERSON] et al., 2007). To improve safety, road inspectors can evaluate road images instead. The results are, however, susceptible to human subjectivity.
Iv-Infra has a mobile mapping car, shown in Figure 2, including 3 laser scanners, 10 cameras for 360\({}^{\circ}\) photos, a GPS and an Inertial Measurement Unit. This system has been implemented successfully for lamp post identification. This paper is an attempt to study the feasibility of using such a system for road damage detection. In this research a method for road damage detection is developed using laser scan data of one of the three laser scanners of the car, a Z+F PROFILER 9012A. This laser scanner is mounted at the rear of the vehicle such that its profile lines are perpendicular to the driving direction. It measures the range and the intensity, along the profile.
There are several advantages of such a system, for example no road closure is needed for manual road inspection, which increases safety and decreases costs. When the damage detection can be done automatically no differences due to subjective judgement are obtained.
This paper is structured as follows. In the following section advantages and disadvantages of some of the popular alternatives to manual road condition survey will be discussed. Some details about the measurement car and research area will be given in section 3. The methodology will be explained in section 4. In section 5, results of this method will be given and finally the conclusion is presented in the last section.
## 2 Background
Several methods have been developed to collect data of a road surface and determine damage from such data. The methods can be classified based on how the road surface information is acquired: by vibration-, image- or laser scanning-based methods. As this study focuses on investigating the feasibility of laser scanning in detecting potholes, ravelling, cracks and craquelure (Fig. 1), definitions of these damage types are first presented, and then existing methods for damage detection are investigated.
Potholes are bowl-shaped holes of various sizes involving one or more layers of the asphalt pavement structure. Size and depth can increase whenever water accumulates in the hole ([PERSON] & [PERSON], 2017). They arise due to freezing of water in the soil, which expands the occupied space. Thawing of the soil can weaken the road surface, while traffic can break the pavement, resulting in potholes.
Ravelling is the dislodging of aggregate particles due to the influence of traffic, weather and obsolescence of the binder ([PERSON] & [PERSON], 2017; [PERSON] & [PERSON], 2017). Due to traffic load and freezing and expanding of water in asphalt, cracks can be formed. Two types of cracks (longitudinal and transverse) were considered in this study. Longitudinal cracks run parallel to the road, while transverse cracks are perpendicular to it.
Craquelure are cracks, which develop into many-sided, sharp angled pieces. This damage develops at the end of the structural life of an asphalt pavement, (Bouwend Nederland and emulsie asfaltebeton, n.d.). Craquelure at the outer 0.25 m of the pavement is named as boundary damage.
Next, a survey of techniques for data capture and methods for processing the data to determine road surface damage is presented.
### Vibration based methods
Accelerometers, microphones and tire pressure sensors are used to measure vibrations caused by pavement elevation differences and roughness. Accelerometers in mobile phones can measure the relative movement of the car in three dimensions. Examples are the Pothole Patrol by [PERSON] et al. (2008) and Wolverine by [PERSON] et al. (2012). Filters and machine-learning approaches are used to detect road damage. A disadvantage of this data acquisition method is that the relative movement of the car is only influenced by the small contact areas between the road surface and the four tires, so only small parts of the road surface along the wheel paths can be analysed.
### Image based methods
There are also methods collecting images from scanning, line-scan and video cameras of the road surface, which can be used for detecting the damage. An example is the automated detection system RoadCrack, created by the Australian Commonwealth Scientific and Industrial Research Organization (CSIRO, n.d.). This system is based on high speed cameras mounted underneath the vehicle. These cameras collect high resolution images of small patches of the pavement surface and they are consolidated into bigger images of half-metre intervals. CSIRO (n.d.) stated that the system can detect cracks in a millimetre order, while driving up to 105 kilometres per hour. This is done fully-automated with a combination of machine vision and artificial intelligence (CSIRO, n.d.). Another system based on laser based imaging is the Digital Highway Data Vehicle (DHDV) from Naylink (n.d.). They use their Automated Distress Analyzer (ADA) which produces crack maps in real time.
RoadCrack and DHDV are two commercial systems, which use cameras as one of their acquisition methods. There are several more commercial systems, most of which have not published details on their algorithm.
### Laser scan based methods
One of the advantages of using laser scanning sensors is that the 3D topography of the road surface can be captured highly accurately and quickly. [PERSON] et al. (2014) used mobile laser scanning (MLS) data to detect road markings. From the MLS data, they created intensity images, which they used in a point-density-dependent multi-threshold segmentation method to recognise road markings.
Pavemetrics Inc. developed the Laser Crack Measurement System (LCMS), which consists of two high-performance 3D laser profilers and a camera as detector, in cooperation with government and research partners ([PERSON] et al., 2014). This system measures ranges and intensities, and produces 2D and 3D data.
[PERSON] et al. (2007) developed a system using a SICK LMS 200 laser scanner for reconstructing the 3D surface model, cracks in smaller regions can be identified from a variation of the 3D depth measurement.
[PERSON] (2011) used a low-cost "laser line striper" to evaluate the unevenness of the road with a step operator to detect road damage. Significant road damage is found based on the number of data points in one line. However, noisy data can also produce a larger number of points in a line, leading to incorrect damage detections.
## 3 Data
### Data acquisition system
As mentioned in section 1, Iv-Infra has a measurement car with 3 laser scanners, 10 cameras for 360\({}^{\circ}\) photos, 3 HR cameras in the bumper, a GPS and an IMU system (Fig. 2).
Figure 1: Examples of road surface damage.
Figure 2: Measurement car
In this research, the data from one scanner, the Z+F PROFILER 9012A, is used. This is a profile scanner using the phase-shift method for measuring the range. An outgoing laser beam is intensity-modulated by a sine-wave signal. This signal is reflected back from an object and the received intensity pattern is compared with the original transmitted signal. A phase shift in the modulated signal is caused by the travelling time of light forth and back to the measured object. The phase measurement can be transformed directly into a distance/range: \(d=\frac{c\,\Delta\varphi}{4\pi f}\), with \(c\) the speed of light (with atmospheric corrections) in m/s, \(f\) the modulation frequency in Hz and \(\Delta\varphi\) the measured phase shift in radians.
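The phase-to-range relation can be illustrated numerically; the modulation frequency below is an illustrative value, not the scanner's actual (multi-frequency) modulation scheme:

```python
import math

C = 299_792_458.0  # speed of light in m/s (vacuum; atmospheric correction omitted)

def phase_to_range(delta_phi, mod_freq_hz):
    """Range from the phase shift of the modulated signal: d = c*dphi/(4*pi*f)."""
    return C * delta_phi / (4 * math.pi * mod_freq_hz)

# A phase shift of pi at a 10 MHz modulation frequency corresponds to a
# quarter of the modulation wavelength:
d = phase_to_range(math.pi, 10e6)  # ~7.49 m
```

Because the phase wraps every 2*pi, a single modulation frequency limits the unambiguous range to c/(2f), which is why phase-shift scanners typically combine several frequencies.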
The profile scanner produces measurement points with x, y, z coordinates. Each measurement point (x, y, z) is geo-referenced by the QINSy (Quality Integrated Navigation System) software (Quality Positioning Services B.V., 2018) such that the IMU, the GNSS locations, the vehicle odometer, the intensity and the range are taken into account. This is done in the Dutch coordinate system, RD-coordinates. The z component is given in Normaal Amsterdams Peil (NAP), the Dutch height reference. Each measurement point contains the following data fields: intensity, range, profile number and beam number. The intensity is the amount of reflected light, which has no clear unit. The range is the distance between the scanner and a hit point on the object surface and is given in meters. When the laser beam hits multiple "targets" of different heights, for example when the laser beam partly hits the road surface and partly falls into a crack, the laser scanner will detect a combination of multiple reflections, one for each target. Unfortunately, phase-based ranging devices can never discern all the single vectors but only measure the resultant vector: the geometrical sum of all vectors. So the resulting range is a mixture of the distances to the surface and into the crack ([PERSON] (Zoller + Frohlich GmbH), 2019).
A profile number is given to each new line which the profiler measures. A new profile starts nadir and the laser beam turns anticlockwise, see Figure 3. The beam numbers are given to each consecutive point in each profile. In this project, the laser scanner is configured such that each profile (\(360^{\circ}\)) contains 5100 points (beams), with a spindle speed of 200 rotations per second (profiles). When the car is driving, a spiral pattern is formed, illustrated in Figure 3. The distance between each profile depends on the car velocity and the spindle speed of the laser scanner. In this case, this results in a distance of 4 cm between the profiles while driving 30 km/h and 14 cm at 100 km/h. The point spacing along the profile is approximately 3 mm on the road in nadir direction and does not depend on driving velocity, but on range.
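The quoted profile distances follow directly from the driving speed and the 200 Hz spindle rate; a quick check of the numbers above:

```python
def profile_spacing_m(speed_kmh, profiles_per_s=200):
    """Distance between consecutive scan profiles along the driving direction."""
    return (speed_kmh / 3.6) / profiles_per_s

print(round(profile_spacing_m(30), 3))   # 0.042 -> about 4 cm at 30 km/h
print(round(profile_spacing_m(100), 3))  # 0.139 -> about 14 cm at 100 km/h
```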
### Research area
A road section of the R106 near Haarlem city, the Netherlands, is selected for a pilot survey. This is a touristic and quiet road where the driving speed is between 30 and 50 km/h. On this road, 36 road damages were found by a road inspector from a third party and categorised as 2 ravelling, 7 craquelure, 2 potholes, 8 longitudinal and 11 transverse cracks and 6 boundary damages ([PERSON], 2019). Figure 4 shows the damage of the road as classified by the road inspector. For this paper, a subset of around 120 meters of road is used, which includes 6 million points, given in Figure 4. Road sections 1 and 2 are evaluated extensively in Section 5.5.
### Data selection
The laser beam width defines which sizes of damage can be measured. A large beam is more likely to hit multiple "targets", which results in a resultant vector. Therefore, it was decided to use only beam widths smaller than 5 mm for this research. To ensure that the beam widths do not exceed 5 mm, the theoretical intersection of the laser beam with a horizontal plane was calculated based on trigonometric properties. For this laser scanner, the beam divergence is 0.5 mrad and the beam diameter is 1.9 mm (at 0.1 m distance) (Zoller + Frohlich GmbH, n.d.). In Figure 5 it can be seen that at 42 degrees the beam width is below 5 mm, so this is taken as the boundary angle. This results in around 600 beam numbers on each side of the nadir, out of 5100 in one profile.
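The trigonometric calculation can be sketched as follows. Note that the scanner height `h` is an assumption (it is not stated in this section), and the stretching of the footprint on the road is modelled here simply as the beam diameter divided by the cosine of the incidence angle; the paper's exact geometry may differ:

```python
import math

def footprint_mm(angle_deg, h=2.0, d0_mm=1.9, divergence_mrad=0.5):
    """Beam footprint on a flat road at a given angle from nadir.
    h is an assumed scanner height in metres; the beam diameter grows
    roughly linearly with range (0.5 mm per metre for 0.5 mrad), and the
    footprint on the road is stretched by 1/cos(theta) for slanted incidence."""
    theta = math.radians(angle_deg)
    r = h / math.cos(theta)                 # slant range to the road surface
    diameter = d0_mm + divergence_mrad * r  # approximate beam diameter in mm
    return diameter / math.cos(theta)       # projected onto the road
```

The footprint grows monotonically with the angle from nadir, which is why a boundary angle exists beyond which the 5 mm criterion is violated.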
## 4 Methodology
To identify damage of the road surface from MLS data, the proposed workflow includes (I) feature creation, (II) K-means clustering to create training data set, (III) Random Forest classification and (IV) Mathematical morphological operations to reduce small damage points and connect larger damage points.
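Steps II and III of this workflow can be sketched with scikit-learn. This is a minimal illustration; the cluster count, tree count and all other hyper-parameters are assumptions, as they are not specified in this section:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier

def pseudo_label_and_train(features, n_clusters=2, seed=0):
    """Step II: K-means produces pseudo-labels from the per-point features
    (e.g. damaged vs. intact). Step III: a Random Forest is trained on those
    labels so the classifier can be applied to new road sections."""
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed)
    labels = km.fit_predict(features)
    rf = RandomForestClassifier(n_estimators=100, random_state=seed)
    rf.fit(features, labels)
    return rf, labels
```

Using unsupervised cluster labels as training data avoids manual annotation, at the cost of the Random Forest inheriting any labelling errors K-means makes.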
Figure 4: Part of the research area with locations of damages found by a road inspector. The black squares show the locations which are discussed in more detail.
Figure 3: Scanning pattern of the profiler. Each new colour represents a new profile number. The driving direction is marked with an arrow. The zoomed-in section shows how the sliding window algorithm is used.
### Step I: Feature creation
Various independent features are computed with a sliding window algorithm: a window with size L moves along the points, in this case along the profile, see Figure 3. In this research, the result of the calculation over a window is assigned to the centre point of that window. Notably, the feature values of the points strongly depend on the window size. An overview of the six different features and their calculations is given below.
#### 4.1.1 Deviation from the mean
The first feature is to calculate the absolute elevation deviation of the centre point from the mean of a window of length L. This can also be written as:
\[\Delta Z=\mid Z_{\frac{L+1}{2}}-\frac{1}{L}\sum_{i=1}^{L}Z_{i}\mid, \tag{1}\]
where \(Z\) = the height values of the points and \(L\) = the number of points within the window.
This can be interpreted as the surface roughness, which can be defined as the irregularities in the surface texture which are inherent in the production process and wear (Taylor Hobson Limited, 2011).
This feature can be calculated with the height values as well as with the intensity values.
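Eq. (1) can be evaluated for every point of a profile at once with a cumulative sum; an illustrative numpy sketch (not the authors' code):

```python
import numpy as np

def deviation_from_mean(z: np.ndarray, L: int) -> np.ndarray:
    """Eq. (1): |centre value - window mean| along a profile (L odd).

    Edge points without a full window are returned as NaN.
    """
    half = L // 2
    out = np.full(z.shape, np.nan)
    csum = np.cumsum(np.insert(z.astype(float), 0, 0.0))
    means = (csum[L:] - csum[:-L]) / L            # all window means at once
    out[half:len(z) - half] = np.abs(z[half:len(z) - half] - means)
    return out

profile = np.array([0, 0, 0, -5, 0, 0, 0], dtype=float)  # a 5 mm dip
print(deviation_from_mean(profile, 5))
```

The dip at the centre produces a clearly larger feature value than its neighbours, which is exactly what the classifier later exploits.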
#### 4.1.2 Difference to the surrounding points
Another method is to take the difference between the centre point of a window and the neighbouring points along the profile. This can be written as:
\[(Z_{\frac{L+1}{2}}\cdot L)-\sum_{i=1}^{L}Z_{i}. \tag{2}\]
This feature can be used with the height values as well as with the intensity values.
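Eq. (2) differs from Eq. (1) only in that it is signed and scaled: it equals \(L\) times the signed deviation of the centre from the window mean. An illustrative numpy sketch (not the authors' code):

```python
import numpy as np

def diff_to_surrounding(z: np.ndarray, L: int) -> np.ndarray:
    """Eq. (2): L * centre value - window sum (edges returned as NaN)."""
    half = L // 2
    out = np.full(z.shape, np.nan)
    csum = np.cumsum(np.insert(z.astype(float), 0, 0.0))
    sums = csum[L:] - csum[:-L]                    # all window sums at once
    out[half:len(z) - half] = z[half:len(z) - half] * L - sums
    return out

profile = np.array([0, 0, 0, -5, 0, 0, 0], dtype=float)
print(diff_to_surrounding(profile, 5))
```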
#### 4.1.3 Standard deviation of range
For this feature, the range that each beam would measure if the road were horizontal and flat is calculated. The angle of each beam is found by taking the fraction of its beam number over the maximum beam number, times 360\({}^{\circ}\). The expected range is then the height of the scanner divided by the cosine of this angle. This expected range is subtracted from the measured range, because the range varies with the angle. Over a window of 20 points, the standard deviation of the range difference is taken.
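The expected-range computation described above can be sketched as follows; the scanner height is an assumed parameter (not given in the text), and the code is illustrative rather than the authors' implementation:

```python
import numpy as np

def range_residual_std(beam_nums: np.ndarray, measured_range: np.ndarray,
                       scanner_h: float = 2.2, max_beam: int = 5100,
                       L: int = 20) -> np.ndarray:
    """Sliding-window std. dev. of (measured - expected) range.

    The expected range assumes a flat, horizontal road: the beam angle is
    beam_number / max_beam * 360 degrees, and the range is the scanner
    height over the cosine of that angle. scanner_h is an assumed value.
    """
    angle = np.radians(beam_nums / max_beam * 360.0)
    expected = scanner_h / np.cos(angle)
    resid = measured_range - expected
    windows = np.lib.stride_tricks.sliding_window_view(resid, L)
    return windows.std(axis=1)
```

On a perfectly flat road the residuals, and hence the windowed standard deviation, are zero; damage shows up as local spikes.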
#### 4.1.4 Standard deviation of number of points
With CloudCompare ([PERSON] et al., 2017) the number of neighbours inside a sphere of radius R are calculated for each point. In this case a radius of 0.02 metre is taken. Here the standard deviation is also taken over a window of length 20 points.
#### 4.1.5 Sum of different windows
For the deviation from the mean and the difference with the surrounding points, different window sizes can be used to calculate the feature. By adding the results of different window lengths, a new feature is created.
### Step II: K-means clustering
In this step K-means clustering is used to create a training data set for the Random Forest classification. K-means clustering, ([PERSON] & [PERSON], 1979), divides M points in N dimensions into K clusters so that each point belongs to the cluster with the closest centroid.
In this study, K-means clustering is used to classify a small selection of the data with known damage into two clusters (\"no damage\" and \"damage\").
Before the clustering is done, each feature is scaled: first the mean value is subtracted, and then the result is divided by the standard deviation of the feature.
Both scaling and clustering are done with the Python scikit-learn module ([PERSON] et al., 2011).
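The scaling and two-cluster step can be sketched with scikit-learn, as named in the text; the feature matrix below is synthetic and purely for illustration:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Synthetic feature matrix (illustrative): 200 "no damage" points with small
# feature values and 20 "damage" points with clearly larger ones.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 0.1, size=(200, 3)),
               rng.normal(3.0, 0.3, size=(20, 3))])

# Scale each feature: subtract the mean, divide by the standard deviation
X_scaled = StandardScaler().fit_transform(X)

# Two clusters: "no damage" and "damage"
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X_scaled)
```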
### Step III: Random Forest classification
After clustering, the small training data set can be used for training the Random Forest algorithm. Random Forest classification is a supervised classification method based on classification trees ([PERSON] et al., 2002). A classification tree is a multistage approach which breaks up a complex decision into a union of several simpler decisions ([PERSON] & [PERSON], 1991). Each node in a tree makes a binary decision, and multiple decisions in a tree lead to a class label. The small training data set is divided into three parts, and one part is used for training the algorithm.
In this research, the RandomForestClassifier from the scikit-learn module is used ([PERSON] et al., 2011). After training, the whole data set is classified by using this random forest classifier.
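A minimal sketch of this training step with scikit-learn's RandomForestClassifier, as named in the text; the data and the exact split are illustrative (the paper only states that one of three parts is used for training):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Illustrative cluster-labelled data: 0 = "no damage", 1 = "damage"
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.1, size=(200, 3)),
               rng.normal(3.0, 0.3, size=(200, 3))])
y = np.array([0] * 200 + [1] * 200)

# Use one of three parts for training, as described in the text
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, train_size=1 / 3, random_state=0, stratify=y)
clf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
```

After fitting, `clf.predict` can be applied to the whole data set, as the text describes.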
### Step IV: Mathematical morphological operations
Figure 5: Effect of the angle of the beam on the beam width. A larger angle causes a larger beam width.

With the Python scikit-image ([PERSON] et al., 2014) morphology module, objects smaller than 3 points are removed as a first step. This is done by projecting the data as a matrix with the number of profiles as rows and the number of beams as columns. After the small objects are removed, morphological closing is used. Mathematical morphological operations assign pixels in an image based on the values of neighbouring pixels. Mathematical morphological closing is a dilation followed by an erosion operation ([PERSON], 1997). Dilation changes a \"no damage\" pixel into a \"damage\" pixel when a neighbouring pixel is classified as \"damage\". Erosion is the opposite operation of dilation: it gives \"damage\" pixels a \"no damage\" value when a neighbouring pixel is classified as \"no damage\". Erosion shrinks objects, while dilation grows objects and can merge multiple objects into one ([PERSON], 1997). Mathematical morphological closing therefore removes gaps in connected damage pixels. As the neighbourhood, a \"+\" shape around the centre point is used.
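The paper uses scikit-image; the same two steps (removing objects smaller than 3 points, then closing with a "+" neighbourhood) can be sketched with scipy.ndimage as an equivalent substitute, not the authors' code. The toy grid is illustrative:

```python
import numpy as np
from scipy import ndimage

# Profiles as rows, beams as columns; True = "damage" pixel
damage = np.zeros((7, 11), dtype=bool)
damage[2:5, 1:4] = True     # one damage blob
damage[2:5, 5:9] = True     # a second blob, one-pixel gap at column 4
damage[0, 10] = True        # an isolated single-point detection

cross = ndimage.generate_binary_structure(2, 1)   # the "+" neighbourhood

# Step 1: remove connected objects smaller than 3 points
lbl, n = ndimage.label(damage, structure=cross)
sizes = ndimage.sum(damage, lbl, index=np.arange(1, n + 1))
cleaned = np.isin(lbl, np.flatnonzero(sizes >= 3) + 1)

# Step 2: morphological closing (dilation, then erosion) with the "+" element
closed = ndimage.binary_closing(cleaned, structure=cross)
```

Note that a "+" element only bridges gaps in blobs thicker than one pixel, which is consistent with thin transverse cracks remaining hard to connect.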
## 5 Results
In this section, results from the proposed method are presented. Furthermore, the results were validated against the classification of a road inspector from a third party with no connection to this project.
### Features
For this research, 22 features are calculated as described above. For the deviation from the mean and the difference to the surrounding points, four different window sizes (5, 21, 41, 101) are used. Figure 6 illustrates the correlation between the features. There is a large correlation between different window sizes of the same feature.
Examples of different features for road section 1, can be found in Figure 8.
### K-means clustering
K-means clustering is done on road section 1. Results for this road section are given in Figure 9. From this figure it can be seen that large longitudinal cracks are classified as damage, while the transverse cracks are not detected.
### Random Forest
The centre panel of Figure 9 gives the results of the Random Forest classification and morphological operations for road section 1, and Figure 10 gives the road damage classification for the whole research area. Figure 9 shows that fewer small objects are present compared with the K-means clustering result. The transverse crack is again not detected as damage.
### Validation
The validation of the above described method is done with the help of damage shapefiles of a road inspector from a third party. The shapefiles are three files with point, line and polygon data. These data files are converted to raster data with the GDAL (GDAL/OGR contributors, 2018) tool gdal_rasterize. This tool rasterizes the shapefile (vector geometries) with a pixel size of \(0.05\times 0.05\) meter. Then the three raster files are combined to one large raster file.
This validation data is projected onto the point data, such that each point gets a damage value. The areas of connected damage points are calculated, such that orthogonal and diagonal point neighbours are included. This is also done for the method data. Through rasterising the shapefiles, some pixels are no longer connected to each other, which increases the number of damage areas. This results in 153 connected road damages instead of 20 damages. The result of the method contains 3512 damage areas, most of them smaller than 30 points. The distributions of the damage areas (below 30 points) for the validation data and the method are given in Figure 12. Here it can be seen that a large number of smaller damages is detected by the described method, and fewer by the road inspector. When the larger areas (\(>\)30 points) are compared, there are 139 damage areas for the method and 62 validation damage areas.
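Counting connected damage areas with orthogonal and diagonal neighbours included corresponds to labelling with 8-connectivity; a small sketch with scipy.ndimage on illustrative data:

```python
import numpy as np
from scipy import ndimage

damage = np.zeros((5, 5), dtype=bool)
damage[0, 0] = damage[1, 1] = True   # diagonal neighbours: one area
damage[4, 4] = True                  # a separate area

# 8-connectivity: orthogonal and diagonal neighbours belong to the same area
eight = np.ones((3, 3), dtype=bool)
labels, n_areas = ndimage.label(damage, structure=eight)
areas = ndimage.sum(damage, labels, index=np.arange(1, n_areas + 1))
```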
If the intersection over union were calculated, this would result in a low number. The intersection over union can be calculated as the area of overlap divided by the area of union. The low value can be explained by the large and rough damage areas of the road inspector: the area of union is large, while the damage areas classified by the method are detailed and relatively small. So for calculating the intersection over union, more detailed validation data is needed. This can be obtained by taking orthogonal photos of the road and using the road inspector's classification as a guide.
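The intersection over union can be computed from two boolean damage masks; a minimal sketch with synthetic masks mimicking the situation described (a detailed method detection inside a large, rough validation polygon):

```python
import numpy as np

def iou(pred: np.ndarray, truth: np.ndarray) -> float:
    """Intersection over union of two boolean damage masks."""
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    return float(inter / union) if union else 1.0

# Detailed, small method detection inside a large, rough validation polygon
pred = np.zeros((4, 4), dtype=bool)
pred[1:3, 1:3] = True                  # 4 pixels detected by the method
truth = np.ones((4, 4), dtype=bool)    # 16 pixels marked by the inspector
print(iou(pred, truth))  # 4 / 16 = 0.25
```

Even a perfectly placed detailed detection yields a low IoU against a rough polygon, which is the effect described in the text.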
When only method damage points are compared with validation damage points, 79% of the method points are correctly classified as damage. However, the question is whether the false positives really are false positives.
### Cases
In this section, two cases are discussed in detail, road sections 1 and 2 in Figure 4.
In road section 1 (Fig. 10, between profile numbers 19360 and 19600), the proposed method classified only parts as damage, while the road inspector classified the whole area as damage (Fig. 11).
Figure 8: Examples of features. A: standard deviation of the numbers of neighbours, B: standard deviation of the range difference, C: deviation from the mean of Intensity with window size 41, D: Sum of different windows of the difference to the surrounding points
Figure 9: Results, left: the K-means classification, centre: the method classification, right: a photo of the damage
Also transverse cracks are difficult to detect with this method. The chance that they are not detected is large, because the spacing between profiles is relatively large, especially when the driving speed is high.
## 6 Conclusion and Recommendations
### Conclusion
This paper presented a possible technique for detecting road damage with a Z+F PROFILER 9012A laser scanner mounted on a mobile mapping system. Features are made with a moving window method. Then K-means clustering is applied to create training data for a Random Forest algorithm. After that classification, mathematical morphological operations are used to remove small objects and connect points which are close to each other. Validation is done with the help of a road inspector's classification. Although this validation data is too rough to calculate the intersection over union on areas, when points are compared with each other, 79% of the points are correctly classified as damage. However, more research is needed for analysing the false positives.
Road markings are also classified as damage, probably due to their high reflectivity and due to the fact that road markings are elevated with respect to the road, which results in a deviation compared to the surrounding road surface.
Due to the spacing between profile lines, the probability is large that transverse cracks are not detected.
### Recommendations
Figure 10: Road damage classification of the road with the method

Figure 11: Road damage classification of the road by the road inspector

Figure 12: Damage area distribution of validation and method damages.

To achieve a higher accuracy for this method, pre-processing of the point data is needed, for example in order to remove road markings. To replace the K-means clustering for extracting training data for the Random Forest algorithm, manually made training data can be used. Also, a new and more detailed road damage classification can be used for validation. This can be done by making a detailed road damage classification with the help of orthogonal road photos. Further, a closer look at the false positives is needed, and a confusion matrix can be used to distinguish which damage types can and cannot be recognised well. A next step in this research could be to identify different types of damages.
## References
* [1][PERSON], [PERSON], [PERSON], and [PERSON] (2012) Wolverine: traffic and road condition estimation using smartphone sensors. Fourth International Conference on Communication Systems and Networks (COMSNETS 2012).
* [4][PERSON] and [PERSON] (1998) Automatic pavement distress detection system. Information Sciences 108, pp. 219-240.
* [5][PERSON].
---

B. B. van der Horst, R. C. Lindenbergh, S. W. J. Puister: "Mobile Laser Scan Data for Road Surface Damage Detection", ISPRS, 2019, https://doi.org/10.5194/isprs-archives-xlii-2-w13-1141-2019 (CC-BY)

---
# 2D/3D Soil Consumption Tracking in a Marble Quarry District
[PERSON] (corresponding author), [PERSON], [PERSON], [PERSON], [PERSON]

Agenzia Regionale per la Protezione Ambientale della Toscana (ARPAT)
(c.licciardello, a.dimarco, s.biagini, d.palazzuoli, k.tayeh)@arpat.toscana.it
###### Abstract
Complex extractive districts, such as the marble quarries in the Apuan Alps (northern Italy), require soil consumption monitoring over the years that could be achieved through high-resolution remotely sensed data. To derive 2D and 3D indicators with appropriate resolution for annual monitoring of high-resolution changes in soil consumption, aerial images, LiDAR acquisitions, satellite data, and Remotely Piloted Aircraft Systems (RPAS) acquisitions were used. In particular, open-access Sentinel-2 multispectral satellite imagery with a spatial resolution of 10 m was used to assess cover changes (2D), and then refined by manual interpretation for 5 years (2016-2021). 3D changes were detected by comparing free aerial LiDAR data from 2009 and 2017, integrated with two stereo models obtained from Pléiades high-resolution satellite images from 2020 and 2022. 3D changes observed over the years by algebraic elevation comparison, performed in a QGIS 3.x environment, highlight quarries characterized by intense mining activities (extracted marble blocks, characterized by negative elevation differences) and quarry area management (debris disposal and service infrastructure construction, characterized by positive elevation differences). The combined use of 2D and 3D change indicators can be challenging in order to correctly represent soil consumption over the years. A dual 2D/3D web GIS client has been developed for the proper representation of 2D/3D spatial indicators of ongoing extraction activities in the Carrara marble basin: high-resolution images are served as tiled data, while 2D/3D spatial indicators are served as static and/or tiled vector data. Open-Source libraries have been used in data processing, serving, and representation inside a map interface.
## 1 Introduction
Industrial quarrying and mining activities have been identified as a major source of environmental impacts caused by mining waste ([PERSON] et al., 2004; [PERSON] et al., 2011; [PERSON] et al., 2019). In particular, with regard to Marble Quarry Waste (MQW) and Marble Cutting Waste (MCW, in Italian _marmettola_), mining activity within the Carrara industrial basin has caused massive transformations of the landscape, with the growth of waste disposals over very large areas; the dynamics of karst water in the Apuan Alps have also been affected, causing environmental impacts on both groundwater and surface water ([PERSON] et al., 2019; [PERSON] et al., 2021).
Nowadays, monitoring surface and volume changes plays an important role in assessing the current status of waste disposal activity, especially since only a few quarries are allowed to host extractive waste. In order to properly assess the sustainability of the authorized activities, regional and national laws oblige quarry owners to issue a report on MCW/MQW production rates to monitoring agencies: these data are available in the Regional Environmental Information System (SIRA) of the Regional Environmental Agency of Tuscany (ARPAT). As highlighted in [PERSON] et al. (2021), the tracking of extracted volumes disposed in situ for quarries in the Carrara basin over the years is a valuable additional resource for ARPAT control planners, which has allowed them to prioritize environmental controls based on waste management performance and to monitor the achievement of sustainable waste management goals. As part of a project involving ARPAT, several remote sensing methods were used to derive areal and volumetric datasets, which were used to calculate a set of experimental indicators related to the potential impact of MQW/MCW on the entire Carrara industrial basin ([PERSON] et al., 2021). The indicators proposed in this work are targeted at local monitoring of individual quarries, while those already in the literature are mainly aimed at monitoring high-level regional plans that require Strategic Environmental Assessment (SEA) ([PERSON] et al., 2007), being mainly based on (a) the life cycle assessment (LCA) method applied to building materials ([PERSON] et al., 2016; [PERSON] et al., 2011), (b) the influence of the adoption of industrial best practices ([PERSON] et al., 2011), (c) other hybrid approaches ([PERSON] et al., 2018), or (d) the restoration of natural habitats ([PERSON] et al., 2018).
The multidimensional nature of the proposed indicators, which vary over time and space, requires further work to facilitate the use of the indicators by decision makers. With this in mind, the development of appropriate user interfaces, based on dynamic web maps with time-dependent data and 3-D representations highlighting critical areas, plays an important role in planning environmental monitoring.
### Study area
Carrara's industrial basin (Fig. 1), located in the Apuan Alps (Tuscany, northern Italy), covers an area of about 10.76 km\({}^{2}\) with more than 100 quarries still in activity (Fig. 1). In detail, it is historically divided into the four basins of Miseglia (southern area), Torano (western area), Fantiscritti (central area) and Colonnata (eastern area).
Marble quarrying plays a crucial role in the local economy; in fact, the size of quarries and extraction capacities have increased significantly over the years, thanks in part to the introduction of new extraction technologies such as the use of explosives, cutting wires and cutting machines. This has led to massive waste production, which is often disposed of in neighbouring areas (_ravaneti_) or used to build provincial and service roads (Fig. 1).
Waste from past mining activities has caused major geomorphological changes, mainly due to the removal of marble debris for subsequent exploitation in industrial processes. On the other hand, historic _ravaneti_ (pre-21st-century processing waste) are evidence of past industrial mining techniques and are now under protection as part of Carrara's industrial heritage. Currently, regional laws prohibit new waste disposals: quarry owners are allowed only temporary disposals, subject to authorization from public authorities.
## 2 Materials and Methods
### Aerial and satellite imagery
Both 2D and 3D land use monitoring were carried out using free aerial and satellite imagery and leveraging Open-Source geospatial software for processing and disseminating data useful for planning and monitoring management. Publicly available aerial imagery and LiDAR acquisitions, together with satellite imagery kindly granted by the European Space Agency (ESA) (Tab. 1), were used to derive high-resolution annual change monitoring indicators, with spatial resolutions between 50 cm and 1 m in both the 2D and 3D domains.
### 2D/3D representations
Since 2D changes must comply with both regional and municipal regulations related to environmental impact assessment (EIA) procedures, a careful selection of spatial datasets has been made, aiming to offer an environmental alert system to environmental policy makers and managers. Additional datasets published with the 2D mapping interface include (a) cadastre layers covering each quarry surface, (b) authorized extractive areas, and (c) debris disposals' coverage.
By intersecting these datasets with both 2D and 3D change indicators, environmental policy makers and managers can assess the environmental management quality of each quarry, thus identifying local mismatches between desired results and (a) debris management, (b) natural soil loss and (c) the progress of restoration activities for exhausted quarries.
Historical high-resolution BW aerial imagery (1-2 m), freely available through regional OGC WMS services, can be used to evaluate the long-term dating of debris disposals from 1978 to 2003 (1978, 1988, 1996 and 2003 surveys); high-resolution RGB acquisition campaigns, regularly acquired on a 3-year basis starting from 2007, can be used in more recent evaluations. Being valuable tools for extraction authorizations' EIAs, all these layers should be made available in the 2D web interface.
In addition to a 2D view of both 2D/3D indicators, a 3D view can offer environmental controls' managers and dedicated personnel a user-friendly point of view on quarry area changes. Since point clouds at various resolutions are available for the Carrara basin, coming from both aerial and RPAS surveys, publishing reconstructed meshes has been preferred over raw point cloud data. While WebGL point cloud viewers such as Potree are a valuable solution for publishing large point cloud datasets efficiently, texture information is a valuable support in identifying terrain changes, so a mesh-based solution is to be preferred.
Mesh viewers are typically adopted in Cultural Heritage data dissemination; 3DHOP ([PERSON] et al., 2015), coupled with the Nexus multi-resolution mesh format ([PERSON] et al., 2005; [PERSON] et al., 2015), both developed by the Visual Computing Lab of CNR-ISTI, are widely used in Cultural Heritage applications. A full open-source stack can be established to publish textured meshes starting from point clouds and orthorectified photos: large textured meshes, when assembled in multi-resolution formats, can be published in a WebGL client-side 3D viewer ([PERSON], 2018). While CloudCompare and/or MeshLab can be used for mesh creation and editing, the Nexus command line tools allow converting meshes in supported formats (obj or ply) to the multi-resolution format.
Finally, the 3DHOP web viewer, fully portable across up-to-date modern browsers with full WebGL support, can be used for multi-resolution model publishing, allowing the stacking of multiple 3D models in a single web interface (see the Temple of Luni web viewer sample).
## 3 Results
### 2D and 3D changes' indicators
The changes in land cover between 2009 and 2020 were evaluated using the satellite images described in the previous section; the results are summarized in Fig. 5. The land cover classes, shown in Tab. 2, were digitized on all available images, limited to quarry areas (Fig. 4). The land cover datasets obtained from the high-resolution interpretation were processed for each survey period.
In summary, changes in land cover occurred mainly in the years between 2013 and 2016. The areas subject to natural soil loss and MQW removals between 2009 and 2020 are larger than those affected by MQW fills. However, natural soil loss between 2017 and 2020 was significantly reduced compared to the 2009-2016 interval. Details of these data are given in a previous work ([PERSON] et al., 2021).
Taking into account the vertical accuracy of the extracted terrain models, maps of relevant elevation changes between the 2017 reference model and the combined 2020 ASP stereo models were made for a number of test areas (4 sites). In detail, a threshold (5 m) was used to highlight areas subjected to intense mining between 2017 and 2020, identifying both areas characterized by negative elevation changes (marble cuts), thus affected by intense mining over the years, and areas with positive elevation changes, corresponding to zones used for temporary waste disposal. In long-dated waste disposals, for example, negative and positive elevation changes make it possible to identify new waste disposals and/or the presence of major landslides. Details of these data are given in the work of [PERSON] (2021).
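The thresholded elevation differencing described here can be sketched in a few lines of numpy; the tiny grids are illustrative, and the sign convention (newer model minus older model) follows this section:

```python
import numpy as np

THRESHOLD = 5.0   # metres, as in the text

# Tiny illustrative elevation models (same grid, metres)
dem_2017 = np.array([[100.0, 100.0],
                     [100.0, 100.0]])
dem_2020 = np.array([[ 92.0, 100.0],
                     [100.0, 107.0]])

dz = dem_2020 - dem_2017          # newer minus older model
cuts  = dz < -THRESHOLD           # marble cuts: intense extraction
fills = dz >  THRESHOLD           # temporary waste disposals / fills
```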
### Dual 2D/3D web GIS
A 2D prototype web GIS client, originally developed at the Tuscan Regional Environmental Information System for environmental open data publishing and based on OpenLayers 6.x, integrated with Bootstrap 3.x and jQuery 2.x, has been adapted to host the aforementioned dataset collection: (a) authorized areas, (b) 2D/3D indicators, (c) recognized alerts over authorized areas and debris disposals, and (d) historical BW aerial imagery.
A textured mesh of the aerial LiDAR survey of the Carrara basin has been generated in MeshLab; the resulting mesh has been converted to the Nexus multi-resolution compressed file format with the _nxsbuild_ and _nxscompress_ commands. The compressed textured mesh size has been found to be about 5 times smaller than the original point cloud size. The resulting model quality was assessed using the _nxsview_ command.
3DHOP sample interfaces were adapted for the proper visualization of two superimposed 3D models coming from the (a) 2012 and (b) 2017 LiDAR surveys.
## 4 Discussion and Conclusions
The combined usage of both 2D and 3D change indicators can be challenging in terms of properly representing soil consumption dynamics over the years: while decision makers need quick and easy access to both 2D and 3D data, the web technologies suitable for a proper representation have been developed in very different contexts, making their integration quite complex. While a 'classical' OpenLayers- or Leaflet-based 2D web GIS client can be enough to highlight 2D changes and - with some limitations - 3D changes as elevation differences, a 'true' 3D visualization environment must be set up to track ongoing extraction activities, aiming to assess both (a) compliance with authorized extraction plans by public bodies and (b) proper debris management in quarry areas. In addition, 3D web viewers are mainly targeted at representing point clouds or CAD drawings, making the integration of 2D, 2.5D (terrain models) and 3D (extracted volumes) data very difficult. A dual 2D/3D web GIS
Figure 5: Sample site with land cover changes between 2009 and 2020 (1:5,000): relevant change types between 2009 and 2020: (a) Natural Soil Loss (red), Extraction Activities Over Old MQW (orange), New MQW in-situ Disposals Over Inactive Areas (cyan). Black outlines represent quarries’ own properties extents. Background: 2019 orthoimage.
Figure 8: 3D web interface
Figure 6: 2D web interface. Active layers: (a) background: 2017 BW orthoimage (b) foreground: 3D changes between 2017 and 2012.
Figure 7: Point clouds processing chain and final result in Nexus visual interface (multi-resolution textured mesh of Torano extractive basin from 2017 aerial LiDAR survey)
client has been developed for the proper representation of 2D/3D spatial indicators of ongoing extraction activities in the Carrara marble basin: high-resolution images have been served as tiled data, while 2D/3D spatial indicators are served as static and/or tiled vector data. Open-Source libraries have been used in data processing, serving and representation inside a map interface. For each quarry included in the Carrara basin, both area limitations and authorized areas for extraction activities have been superimposed over the spatial indicator layers, thus allowing users to easily locate areas subjected to intense extraction activities and to evaluate compliance with sustainability plans and environmental management prescriptions issued by public bodies. 2D and 3D indicators are in the process of being used in prioritizing environmental controls' planning: this novel application would require a proper scoring system based on the degree of compliance with both environmental management prescriptions and performances, mainly in the field of quarry and marble slurry waste management.
## Acknowledgements
The authors wish to thank the General Manager of ARPAT Quarries Special Project, dott. [PERSON], and their colleague dott. [PERSON] for his precious contributions on MCW estimation from production data.
Archived 2018 images and the new Airbus Pléiades tri-stereo acquisition over the Carrara extractive basin have been granted by the European Space Agency (ESA) following Project Proposal id 61779 - \"Quarry activity monitoring in Apuan Alps\".
Permission to use 2017 LiDAR survey data have been granted by Soil Defense and Civil Protection Regional Directorate, while LAMMA Consortium (project manager of the LiDAR Survey for the Directorate) shared terrain models' data with the authors.
Production data and quarries' localization have been granted from the Regional Query Planning and Control Division.
Geoscopio OGC WMS services are built and maintained by the Regional Environmental and Land Information System (SITA).
## References
* [PERSON] et al. (2013) [PERSON], [PERSON], & [PERSON], 2013. Generation and quality assessment of stereo-extracted DSM from GeoEye-1 and WorldView-2 imagery. _IEEE Transactions on Geoscience and Remote Sensing_, 52(2), 1259-1271.
|
isprs
|
2D/3D SOIL CONSUMPTION TRACKING IN A MARBLE QUARRY DISTRICT
|
C. Licciardello, A. Di Marco, S. Biagini, D. Palazzuoli, K. Tayeh
|
https://doi.org/10.5194/isprs-archives-xlviii-4-w1-2022-259-2022
| 2,022
|
CC-BY
|
isprs/a8036804_579b_440f_b187_c36d6d379c54.md
|
Monitoring Post-Disaster Mangrove Forest Recoveries in Lawaan-Balangiga, Eastern Samar using Time Series Analysis of Moisture and Vegetation Indices
[PERSON]
1 Department of Geodetic Engineering, College of Engineering, University of the Philippines Diliman - (kvticman2, kecabello, magermetil, dmburgos)@up.edu.ph
[PERSON]
2
[PERSON]
1 Department of Geodetic Engineering, College of Engineering, University of the Philippines Diliman - (kvticman2, kecabello, magermetil, dmburgos)@up.edu.ph
[PERSON]. [PERSON]
1 Department of Geodetic Engineering, College of Engineering, University of the Philippines Diliman - (kvticman2, kecabello, magermetil, dmburgos)@up.edu.ph
[PERSON]
1 Department of Geodetic Engineering, College of Engineering, University of the Philippines Diliman - (kvticman2, kecabello, magermetil, dmburgos)@up.edu.ph
[PERSON]
1 Department of Geodetic Engineering, College of Engineering, University of the Philippines Diliman - (kvticman2, kecabello, magermetil, dmburgos)@up.edu.ph
###### Abstract
The mangrove forests of Lawaan-Balangiga in Eastern Samar lost significant cover due to Typhoon Haiyan, which struck the region in 2013. The mangroves in the area have since shown signs of recovery in terms of growth and spatial coverage, but these widely varied with location. This study aims to further examine the status of recovery of mangroves across different locations by analysing the time series trends of selected vegetation and moisture indices: Normalized Difference Vegetation Index (NDVI), Enhanced Vegetation Index (EVI), Modified Soil Adjusted Vegetation Index (MSAVI), and Normalized Difference Moisture Index (NDMI). These indices were extracted from Landsat 8 surface reflectance images, spanning 2014 to 2020, using Google Earth Engine (GEE). The time series analyses showed similar NDVI, MSAVI and NDMI values and trends after the 2013 typhoon event. The trend slopes also indicated high correlation (0.91 - 1.00) between and among the indices, with NDVI having the highest correlation with MSAVI (1.00). The study was able to corroborate the previous study on mangroves in Lawaan-Balangiga by presenting positive trend results in the identified recovered areas. These trends, however, would still have to be validated by collecting and comparing biophysical parameters in the field. The next step of the research would be to identify the factors that contribute to the varying rates of recovery in the areas and to evaluate how this can affect the carbon sequestration rates of recovering mangroves.
ISPRS Archives XLVI-4/W6-2021, 17-19 November 2021, virtual meeting (doi: 10.5194/isprs-archives-XLVI-4-W6-2021)
**Keywords:** Mangroves; Landsat 8; time series; NDVI; EVI; MSAVI; NDMI.
## 1 Introduction
### The ecosystem services of mangroves
Mangroves are a group of trees and shrubs that grow in the coastal intertidal zone, making up one of the most productive ecosystems in the world (IUCN, 2021; NOAA, 2021). They are vulnerable environmental resources that provide significant economic goods and services that contribute to human well-being (Conservation International, 2008). They support fisheries, are valued sources of timber and fuel, and provide tourism opportunities. They even contribute to climate regulation through carbon sequestration ([PERSON], 2018). More importantly, mangroves are effective in providing coastal protection to communities: their aerial roots trap and retain sediments, preventing erosion, while their roots, trunks, and branches can significantly reduce the force of wind, waves, and flood waters ([PERSON] et al., 2017).
The protective services of mangroves are especially valuable in disaster-prone areas such as the Philippines. When super typhoon Haiyan struck the country in 2013, it packed winds registered at over 300 kph, among the strongest in history for the landfall of a cyclone (FAO, 2021). It made several landfalls along the Visayas group of islands. The provinces of Eastern Samar and Leyte took the brunt of the category 5 storm, which brought sustained winds of up to 245 kph and an even more destructive storm surge. The super typhoon affected more than 14 million people and caused an estimated US$ 5.8 billion in damage across the country (World Vision, 2021). However, several coastal villages in Eastern Samar and Leyte with substantial mangrove cover suffered significantly less damage from the storm surge compared to the bare and open coastal communities ([PERSON] et al., 2015; [PERSON] et al., 2017). Local narratives ([PERSON] et al., 2015; The World, 2021) likewise recognize the critical role that mangroves played in the protection of the community.
### Post-disaster mangrove recovery
The extent of damage to mangrove forests nationwide was estimated to be around 8,568 hectares, or 3.5% of the total mangrove forest area in the Philippines. Most of the damaged mangroves were identified in the provinces of Eastern Samar, Western Samar, and Leyte. Nevertheless, as early as 18 months after the typhoon, mangrove forests showed signs of recovery ([PERSON] et al., 2016). An assessment of the post-typhoon recovery in the Lawaan-Balangiga areas in Eastern Samar ([PERSON] et al., 2021) showed growth trends and increased spatial cover.
[PERSON] et al. (2021) assessed the damage to the mangroves and the subsequent recovery through a time series analysis of mean Normalized Difference Vegetation Index (NDVI) values in the coastal towns. Landsat 8 images, covering the years 2013 to 2019, were processed in Google Earth Engine (GEE). Trends in the NDVI values were divided into three sections, namely (A) Damage Period (2013-2014), (B) Recovering Period (2014-2017), and (C) Stabilizing Period (2017-2019). NDVI values in the Recovering Period indicated a rapid and steady growth of mangroves. Areas with sustained recovery, as opposed to delayed recovery and no recovery, were also identified. The study employed a wide-scale analysis of NDVI trends covering the two municipalities and several coastal barangays to arrive at a mean growth trend. It was later suggested that there may be site-specific differences in the recovery of mangroves.
### Vegetation and Moisture Indices
The NDVI is an effective vegetation index commonly used in mapping mangroves as it can estimate canopy cover and forest health ([PERSON] et al., 2016; [PERSON] et al., 2017). NDVI calculates the ratio between the red and near infrared values (see Table 1, Equation 1) to quantify the greenness of the vegetation, allowing for understanding its density and health (U.S. Geological Survey, 2021).
However, other indices, such as Enhanced Vegetation Index (EVI) and Modified Soil Adjusted Vegetation Index (MSAVI) are more robust than the NDVI and can also monitor plant health and identify mangroves ([PERSON] et al., 2019). The EVI, like the NDVI, also measures the greenness of the vegetation, but is more sensitive to dense vegetation. It also accounts for canopy background noise and some atmospheric conditions (U.S. Geological Survey, 2021). It is calculated as a ratio between the red and infrared bands but includes a canopy background index (L = 1), coefficients for atmospheric resistance (C1 = 6 and C2 = 7.5) and the blue band (see Table 1, Equation 2). The MSAVI, on the other hand, corrects for the effect of the bare soil that can interfere with the vegetation signal. Its function also makes use of the red and infrared bands (see Table 1, Equation 3).
Another useful index in characterizing mangrove intactness or degradation is the Normalized Difference Moisture Index (NDMI) ([PERSON] et al., 2021). It is calculated as a ratio between the near-infrared (NIR) and short-wave infrared (SWIR) bands (see Table 1, Equation 4) and can be applied to determine vegetation water content (U.S. Geological Survey, 2021).
A recent study on the mangrove forest degradation and regeneration in the eastern coast of the Red Sea ([PERSON] et al., 2021) made use of a time series analysis of the moisture and vegetation indices to monitor mangrove health. The MSAVI and NDMI performed best in identifying vegetation trend patterns and forest disturbance and recovery in terms of water stress, respectively.
This study builds on [PERSON] et al. (2021) and aims to further examine the recovery of mangroves by analyzing the time series trends of different vegetation and moisture indices to characterize the varying rates of growth across locations. This study offers a finer-scale analysis of mangrove recovery in terms of spatial resolution and indicators of recovery.
Monitoring post-typhoon mangrove recovery can provide further insights into how events such as typhoons and storm surges affect the ecosystem dynamics of mangroves and can aid in the formulation of intervention measures that consequently increase a community's resilience to the effects of climate change.
## 2 Methodology
### Study area
The municipalities of Lawaan and Balangiga are in the southern portion of the province of Eastern Samar, with a combined coastline length of approximately 36 km. The mangrove patches studied were the recovered areas identified by [PERSON] et al. (2021). Only the mangrove patches with an area greater than 900 sqm, i.e., covering more than one pixel in a Landsat 8 image, were selected to reduce the probability that these zones were misclassified. A total of thirty-nine (39) data points scattered across the Lawaan-Balangiga coastal barangays were considered (**Figure 1**).
### Satellite Image Data
Atmospherically corrected surface reflectance products from Landsat 8 images taken from 2013 to 2020 were used for the study. NDVI, EVI, MSAVI, and NDMI were calculated and added as bands to each image, which was then masked for clouds. A time series of each index was generated for each of the 39 data points. The image data were accessed and processed using GEE.
### Time Series Analysis of Vegetation Indices
The NDVI, EVI, MSAVI, and NDMI time series during the recovery period were downloaded and analyzed based on the slopes of the trend lines computed using linear regression in Excel. The expected range of values for NDVI, MSAVI, and NDMI is between -1 and 1. Positive values of NDVI and MSAVI identify vegetation, and values closer to 1 indicate health and high vegetation activity. Positive values of EVI also indicate vegetation. Positive NDMI values mean low water stress in the mangrove area, suggesting health and intactness. Increasing NDVI, EVI, and MSAVI were interpreted as increasing canopy cover, and a positive trend in the NDMI as increasing water content, all indicative of mangrove health and recovery. The variation in the trendline slopes was taken to indicate the different rates of recovery across the data points and their locations. The correlation between the slopes of each index was calculated; a high correlation between indices, with an R-value close to 1, was interpreted to support the general greening pattern across the study area.
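Although the trend lines in this study were fitted in Excel, the same slope estimate is an ordinary least-squares fit and can be reproduced programmatically. The sketch below uses NumPy on synthetic index series standing in for one recovery point; the array values and the 0.01/yr slope are illustrative assumptions, not the study's data.

```python
import numpy as np

# Illustrative time axis (fractional years) and two synthetic index series,
# standing in for NDVI and MSAVI observations at one recovery point.
t = np.linspace(2014.0, 2020.0, 25)
ndvi = 0.17 + 0.010 * (t - 2014) + np.random.default_rng(0).normal(0, 0.005, t.size)
msavi = 0.22 + 0.012 * (t - 2014) + np.random.default_rng(1).normal(0, 0.005, t.size)

def trend_slope(time, values):
    """Slope of the least-squares trend line, in index units per year."""
    slope, _intercept = np.polyfit(time, values, deg=1)
    return slope

s_ndvi = trend_slope(t, ndvi)
s_msavi = trend_slope(t, msavi)
print(f"NDVI slope:  {s_ndvi:+.4f}/yr")
print(f"MSAVI slope: {s_msavi:+.4f}/yr")

# The paper correlates trend slopes across the 39 points; with a single
# synthetic point we simply correlate the two series as an illustration.
r = np.corrcoef(ndvi, msavi)[0, 1]
print(f"r = {r:.2f}")
```

A positive slope corresponds to the "greening" interpretation in the text; repeating `trend_slope` per point and per index yields the slope vectors whose pairwise correlations appear in Table 3.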
### Trend Analysis
A Mann-Kendall trend test, commonly used to analyze data collected over time, was also done to confirm the increasing or decreasing trends in the data. In the Mann-Kendall test, the null hypothesis, which assumes that there is no trend in the series, was tested against the alternative hypothesis that a significant trend exists in the series. The test was conducted using XLSTAT software, which computes the Mann-Kendall S statistic and the p-value for every time series. When the absolute value of S is high and the p-value is less than the significance level \(\alpha\) = 0.05, the null hypothesis is rejected, indicating a trend in the time series.
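The Mann-Kendall statistic itself is straightforward to compute: S sums the signs of all pairwise differences, and a normal approximation gives the p-value. Below is a minimal stdlib sketch that ignores tie corrections (which XLSTAT applies when values repeat); the input series is a made-up example, not one of the study's time series.

```python
import math

def mann_kendall(x, alpha=0.05):
    """Mann-Kendall trend test without tie correction.

    Returns (S, p_value, trend), where trend is 'increasing',
    'decreasing', or 'no trend' at significance level alpha.
    """
    n = len(x)
    # S = sum over all pairs i < j of sign(x[j] - x[i])
    s = sum(
        (x[j] > x[i]) - (x[j] < x[i])
        for i in range(n - 1)
        for j in range(i + 1, n)
    )
    var_s = n * (n - 1) * (2 * n + 5) / 18.0  # variance of S (no ties)
    if s > 0:
        z = (s - 1) / math.sqrt(var_s)
    elif s < 0:
        z = (s + 1) / math.sqrt(var_s)
    else:
        z = 0.0
    # Two-sided p-value from the standard normal distribution
    p = 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))
    if p < alpha:
        trend = "increasing" if s > 0 else "decreasing"
    else:
        trend = "no trend"
    return s, p, trend

# A mostly rising series, like an index time series at a recovering point
S, p, trend = mann_kendall([0.11, 0.12, 0.14, 0.13, 0.16, 0.17, 0.19, 0.20])
print(S, round(p, 4), trend)
```

For this example, 27 of the 28 pairs are increasing and one is decreasing, so S = 26 and the test flags a significant increasing trend, mirroring how the 34 significant points in Section 3 were identified.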
\begin{table}
\begin{tabular}{l l c} \hline \hline
**Spectral Index** & **Formula** & \\ \hline
**NDVI** & \((\text{NIR}-\text{RED})/(\text{NIR}+\text{RED})\) & (1) \\
**EVI** & \(2.5\times(\text{NIR}-\text{RED})/(\text{NIR}+6\times\text{RED}-7.5\times\text{BLUE}+1)\) & (2) \\
**MSAVI** & \(\big(2\times\text{NIR}+1-\sqrt{(2\times\text{NIR}+1)^{2}-8\times(\text{NIR}-\text{RED})}\big)/2\) & (3) \\
**NDMI** & \((\text{NIR}-\text{SWIR1})/(\text{NIR}+\text{SWIR1})\) & (4) \\ \hline \hline
\end{tabular}
\end{table}
Table 1: Spectral Indices for time series analysis
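The index formulas in Table 1 map directly onto per-pixel band arithmetic. The NumPy sketch below applies Equations 1-4 to small synthetic reflectance arrays (the band values are illustrative, not Landsat data).

```python
import numpy as np

def spectral_indices(blue, red, nir, swir1):
    """Compute NDVI, EVI, MSAVI, and NDMI (Eqs. 1-4) from reflectance arrays."""
    ndvi = (nir - red) / (nir + red)
    evi = 2.5 * (nir - red) / (nir + 6.0 * red - 7.5 * blue + 1.0)
    msavi = (2.0 * nir + 1.0
             - np.sqrt((2.0 * nir + 1.0) ** 2 - 8.0 * (nir - red))) / 2.0
    ndmi = (nir - swir1) / (nir + swir1)
    return ndvi, evi, msavi, ndmi

# Synthetic surface reflectance for three vegetated pixels
blue = np.array([0.03, 0.05, 0.04])
red = np.array([0.05, 0.08, 0.06])
nir = np.array([0.45, 0.30, 0.40])
swir1 = np.array([0.20, 0.25, 0.18])

ndvi, evi, msavi, ndmi = spectral_indices(blue, red, nir, swir1)
print("NDVI:", np.round(ndvi, 3))
```

In GEE the same arithmetic is expressed per image (e.g. via normalized-difference or band-expression operations) before the index bands are sampled at the 39 points.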
## 3 Results and Discussion
Table 2 shows the mean values of the indices, from the time following the typhoon in 2014 to after the period of recovery in 2020. The positive values and the increase in the means show that there was growth in vegetation at the recovery points.
The time series plots of the indices show an upward trend in the values for most of the points in the study area (**Figure 2**). The slope of trend lines for NDVI, MSAVI and NDMI are similar, while the slope of the EVI trendline is steeper. The positive slope values support the interpretation of growth in vegetation.
The slopes of the trendlines of the indices were plotted for the damage and recovery periods (**Figure 3** and **Figure 4**). The horizontal axis of the slope plots corresponds to the data points, which were arranged according to their locations along the coastline from west to east.
The negative values in the plot for the damage period show a general decrease in the vegetation and moisture index values, indicating mangrove canopy loss. The slopes of the trendlines show positive values during the recovery period indicating that across the indices, there was a general increase in value and recovery in mangrove areas.
The plots also show the variation in the damages and the rates of recovery. Points located on the eastern portion took the most damage, as evidenced by the significant negative slopes in 2013 to 2014. However, these points also recorded a faster rate of recovery based on the steeper slopes in the 2014-2020 plot.
It appears that mangrove areas that experienced more damage during the typhoon event have recovered faster. It may be assumed that the mangrove cover loss made way for new growth and colonization.
There is also a high correlation among the trend slopes of the EVI, MSAVI, NDVI, and NDMI (**Table 3**).
The NDVI has the highest correlation with MSAVI (1.00) since both make use of the NIR and RED bands in highlighting vegetation. The high correlation of NDVI and NDMI (r = 0.94) also indicates low to mid-canopy cover with low water stress. The differences in trend slope values observed among clusters of points indicate that there are varying rates of recovery in different areas.
\begin{table}
\begin{tabular}{c c c} \hline \hline & **2014** & **2020** \\ \hline
**EVI** & 0.66 & 0.79 \\
**MSAVI** & 0.22 & 0.29 \\
**NDMI** & 0.11 & 0.20 \\
**NDVI** & 0.17 & 0.23 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Mean values of indices in 2014 and 2020
Figure 1: Mangrove recovery points across the study area from the mangrove change map (2013-2019) (modified from [PERSON] et al. 2021)
\begin{table}
\begin{tabular}{l c c c c} \hline \hline
 & **EVI** & **MSAVI** & **NDMI** & **NDVI** \\ \hline
**EVI** & 1.00 & & & \\
**MSAVI** & 0.96 & 1.00 & & \\
**NDMI** & 0.91 & 0.93 & 1.00 & \\
**NDVI** & 0.96 & 1.00 & 0.94 & 1.00 \\ \hline \hline
\end{tabular}
\end{table}
Table 3: Correlation of the slopes of trendlines of the vegetation and moisture indices with p-values \(<\) 0.01
Figure 2: Sample time series plot of indices for one point
Figure 3: Slopes of indices for all data points (2013-2014)
Figure 4: Slopes of indices for all data points (2014-2020)
Figure 5: Estimated relative rates of mangrove “recovery” across the study area based on the slope of NDVI trend lines
Figure 6: Estimated relative rates of mangrove “recovery” across the study area based on the slope of NDMI trend lines
Results of the Mann-Kendall trend test revealed that 34 of the 39 data points (87%) have p-values less than 0.05 for all the indices, showing that there was a significant positive trend across the time series of the different indices. This further supports the initial interpretation that, for most of the data points, there is evident mangrove recovery.
Mapping the slopes of the NDVI and NDMI trend lines show similar recovery rates among clusters of data (**Figure 5** and **Figure 6**). From this, probable factors that contribute to the mangrove recovery can now be explored, including the effect of the location of the mangrove with respect to the coastline and river mouths. Proximity to urban areas and the adjacent vegetation and reef flat could also be possible explanatory variables.
## 4 Conclusions and Recommendations
The research was able to corroborate the previous study on mangroves in Lawaan-Balangiga by presenting positive trend results in the identified recovered areas. It was also able to contribute to the literature by including time series trend analysis of other vegetation and moisture indices as indicators of post-typhoon growth and recovery. These indices, including combinations of them, can be further explored to detect post-typhoon recoveries.
Further statistical analysis is needed to characterize the varying rates of recovery for the different data clusters. Additional time series analysis for the areas identified as retained and lost are also recommended to compare the growth rates of the mangroves in these areas with those of the recovered mangroves.
These trends, however, would still have to be validated by collecting and comparing biophysical parameters in the field.
The next step of the research would be to identify the factors that contribute to the varying rates of recovery in the areas and to evaluate how this can affect the carbon sequestration rates of recovering mangroves.
## Acknowledgements
This research was made possible by the support of the "Upgrading and Promoting the Comprehensive Assessment and Conservation of Blue Carbon Ecosystems and their Services in the Coral Triangle" (UP _Blue_CARES) Project and "The Project for Comprehensive Assessment and Conservation of Blue Carbon Ecosystems and their Services in the Coral Triangle" (BlueCARES), funded by the Japan International Cooperation Agency (JICA) and the Japan Science and Technology Agency (JST) under the SATREPS Program.
## References
* [PERSON] et al. (2016) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], & [PERSON] (2016). Temporal changes of NDVI for qualitative environmental assessment of mangroves: Shrimp farming impact on the health decline of the arid mangroves in the Gulf of California (1990-2010). _Journal of Arid Environments_, _125_, 98-109. [[https://doi.org/10.1016/J.JARIDENV.2015.10.010](https://doi.org/10.1016/J.JARIDENV.2015.10.010)]([https://doi.org/10.1016/J.JARIDENV.2015.10.010](https://doi.org/10.1016/J.JARIDENV.2015.10.010))
* [PERSON] et al. (2021) [PERSON], [PERSON], & [PERSON] (2021). Monitoring Mangrove Forest Degradation and Regeneration: Landsat Time Series Analysis of Moisture and Vegetation Indices at Rabigh Lagoon, Red Sea. _Forests 2021, Vol. 12, Page 52, 12_(1), 52. [[https://doi.org/10.3390/F12010052](https://doi.org/10.3390/F12010052)]([https://doi.org/10.3390/F12010052](https://doi.org/10.3390/F12010052))
* [PERSON] et al. (2020) [PERSON], [PERSON], [PERSON], [PERSON], & [PERSON] (2020). Development and application of a new mangrove vegetation index (MVI) for rapid and accurate mangrove mapping. _ISPRS Journal of Photogrammetry and Remote Sensing_, _166_, 95-117. [[https://doi.org/10.1016/J.ISPRSJPRS.2020.06.001](https://doi.org/10.1016/J.ISPRSJPRS.2020.06.001)]([https://doi.org/10.1016/J.ISPRSJPRS.2020.06.001](https://doi.org/10.1016/J.ISPRSJPRS.2020.06.001))
* [PERSON] et al. (2021) [PERSON], [PERSON], [PERSON], [PERSON], & [PERSON] (2021). Post-Disaster Assessment of Mangrove Forest Recovery in Lawaan-Balangiga, Eastern Samar Using NDVI Time Series Analysis. _ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences, V-3-2021_, 243-250. [[https://doi.org/10.5194/ISPRS-ANNAALS-V-3-2021-243-2021](https://doi.org/10.5194/ISPRS-ANNAALS-V-3-2021-243-2021)]([https://doi.org/10.5194/ISPRS-ANNAALS-V-3-2021-243-2021](https://doi.org/10.5194/ISPRS-ANNAALS-V-3-2021-243-2021))
* Conservation International (2008) Conservation International. (2008). _Economic Values of Coral Reefs, Mangroves, and Seagrasses A Global Compilation 2008_. www.communities.coastalvalues.org/
* [PERSON] et al. (2015) [PERSON], [PERSON], [PERSON], [PERSON], & [PERSON] (2015). Perceptions of Typhoon Haiyan-affected communities about the resilience and storm protection function of mangrove ecosystems in Leyte and Eastern Samar, Philippines. _Climate, Disaster and Development Journal_, _1_(1). [[https://doi.org/10.18783/cdfj.v001.i01.a03](https://doi.org/10.18783/cdfj.v001.i01.a03)]([https://doi.org/10.18783/cdfj.v001.i01.a03](https://doi.org/10.18783/cdfj.v001.i01.a03))
* FAO (2021) FAO (2021). _FAO and Typhoon Haiyan in the Philippines: FAO in Emergencies_. [[https://www.fao.org/emergencies/crisis/philippines-typhoon-haviyan/intro/en/](https://www.fao.org/emergencies/crisis/philippines-typhoon-haviyan/intro/en/)]([https://www.fao.org/emergencies/crisis/philippines-typhoon-haviyan/intro/en/](https://www.fao.org/emergencies/crisis/philippines-typhoon-haviyan/intro/en/))
* IUCN (2021) IUCN. (2021). _Mangroves and coastal ecosystems_ / _IUCN_. [[https://www.ucn.org/theme/marine-and-polar/our-work/climate-change-and-ocean/magnroves-and-coastal-cosystems](https://www.ucn.org/theme/marine-and-polar/our-work/climate-change-and-ocean/magnroves-and-coastal-cosystems)]([https://www.ucn.org/theme/marine-and-polar/our-work/climate-change-and-ocean/magnroves-and-coastal-cosystems](https://www.ucn.org/theme/marine-and-polar/our-work/climate-change-and-ocean/magnroves-and-coastal-cosystems))
* [PERSON] et al. (2021) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], & [PERSON] (2021). Mapping the Extent of Mangrove Ecosystem Degradation by Integrating an Ecological Conceptual Model with Satellite Data. _Remote Sensing 2021, Vol. 13, Page 2047_, _13_(11), 2047. [[https://doi.org/10.3390/RS13112047](https://doi.org/10.3390/RS13112047)]([https://doi.org/10.3390/RS13112047](https://doi.org/10.3390/RS13112047))
* [PERSON] et al. (2016) [PERSON], [PERSON], [PERSON], & [PERSON] (2016). Damage and recovery assessment of the Philippines' mangroves following Super Typhoon Haiyan. _Marine Pollution Bulletin_, _109_(2). [[https://doi.org/10.1016/J.MARPOLBL.2016.06.080](https://doi.org/10.1016/J.MARPOLBL.2016.06.080)]([https://doi.org/10.1016/J.MARPOLBL.2016.06.080](https://doi.org/10.1016/J.MARPOLBL.2016.06.080))
* [PERSON] et al. (2017) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], & [PERSON] (2017). _Valuation of the Coastal Protection Services of Mangroves in the Philippines_. www.wavespartnership.org
* [PERSON] et al. (2017) [PERSON], [PERSON], [PERSON], [PERSON], & [PERSON] (2017). Mangrove dieback during fluctuating sea levels. _Scientific Reports_, _7_(1). [[https://doi.org/10.1038/541598-017-01927-6](https://doi.org/10.1038/541598-017-01927-6)]([https://doi.org/10.1038/541598-017-01927-6](https://doi.org/10.1038/541598-017-01927-6))
* NOAA (2021) NOAA. (2021). _What is a mangrove forest?_
* [PERSON] and [PERSON] (2018) [PERSON], & [PERSON] (2018). The Economic Valuation of Mangrove Forest Ecosystem Services: Implications for Protected Area Conservation. _The George Wright Forum_, _35_(3), 341.
* [[PERSON] et al.2017] [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], & [PERSON] (2017). _Valuing the Protection Service Provided by Mangroves in Typhoon-hit Areas in the Philippines_. www.epsea.org.
* [The World2021] The World. (2021). _Philippine town credits preserved mangroves with stopping Typhoon Haiyan's storm surge_. [[https://www.pri.org/stories/2013-11-29/saved-mangroves-philipping-town-dodges-havians-storm-surge](https://www.pri.org/stories/2013-11-29/saved-mangroves-philipping-town-dodges-havians-storm-surge)]([https://www.pri.org/stories/2013-11-29/saved-mangroves-philipping-town-dodges-havians-storm-surge](https://www.pri.org/stories/2013-11-29/saved-mangroves-philipping-town-dodges-havians-storm-surge))
* [U.S. Geological Survey2021a] U.S. Geological Survey. (2021a). _Landsat Enhanced Vegetation Index_. [[https://www.usgs.gov/core-science-systems/nli/landsat/landast-enhanced-vegetation-index?qt-science_support_page_related_con=0#qt-science_support_page_related_con](https://www.usgs.gov/core-science-systems/nli/landsat/landast-enhanced-vegetation-index?qt-science_support_page_related_con=0#qt-science_support_page_related_con)]([https://www.usgs.gov/core-science-systems/nli/landsat/landast-enhanced-vegetation-index?qt-science_support_page_related_con=0#qt-science_support_page_related_con](https://www.usgs.gov/core-science-systems/nli/landsat/landast-enhanced-vegetation-index?qt-science_support_page_related_con=0#qt-science_support_page_related_con))
* [U.S. Geological Survey2021b] U.S. Geological Survey. (2021b). _Landsat Normalized Difference Vegetation Index_. [[https://www.usgs.gov/core-science-systems/nli/landsat/landast-normalized-difference-vegetation-index?qt-science_support_page_related_con=0#qt-science_support_page_related_con](https://www.usgs.gov/core-science-systems/nli/landsat/landast-normalized-difference-vegetation-index?qt-science_support_page_related_con=0#qt-science_support_page_related_con)]([https://www.usgs.gov/core-science-systems/nli/landsat/landast-normalized-difference-vegetation-index?qt-science_support_page_related_con=0#qt-science_support_page_related_con](https://www.usgs.gov/core-science-systems/nli/landsat/landast-normalized-difference-vegetation-index?qt-science_support_page_related_con=0#qt-science_support_page_related_con))
* [U.S. Geological Survey2021c] U.S. Geological Survey. (2021c). _Normalized Difference Moisture Index_. [[https://www.usgs.gov/core-science-systems/nli/landsat/normalized-difference-moisture-index](https://www.usgs.gov/core-science-systems/nli/landsat/normalized-difference-moisture-index)]([https://www.usgs.gov/core-science-systems/nli/landsat/normalized-difference-moisture-index](https://www.usgs.gov/core-science-systems/nli/landsat/normalized-difference-moisture-index))
* [World Vision2021] World Vision. (2021). _2013 Typhoon Haiyan: Facts, FAQs, and how to help / World Vision_. [[https://www.worldvision.org/disaster-relief-news-stories/2013-typhoon-haiyan-facts/where-hit](https://www.worldvision.org/disaster-relief-news-stories/2013-typhoon-haiyan-facts/where-hit)]([https://www.worldvision.org/disaster-relief-news-stories/2013-typhoon-haiyan-facts/where-hit](https://www.worldvision.org/disaster-relief-news-stories/2013-typhoon-haiyan-facts/where-hit))
* [[PERSON] et al.2019] [PERSON], [PERSON], [PERSON], & [PERSON] (2019). The effects of water depth on estimating Fractional Vegetation Cover in mangrove forests. _International Journal of Applied Earth Observation and Geoinformation_, _83_, 101924. [[https://doi.org/10.1016/J.JAG.2019.101924](https://doi.org/10.1016/J.JAG.2019.101924)]([https://doi.org/10.1016/J.JAG.2019.101924](https://doi.org/10.1016/J.JAG.2019.101924))
|
isprs
|
MONITORING POST-DISASTER MANGROVE FOREST RECOVERIES IN LAWAAN-BALANGIGA, EASTERN SAMAR USING TIME SERIES ANALYSIS OF MOISTURE AND VEGETATION INDICES
|
K. V. Ticman, S. G. Salmo III, K. E. Cabello, M. Q. Germentil, D. M. Burgos, A. C. Blanco
|
https://doi.org/10.5194/isprs-archives-xlvi-4-w6-2021-295-2021
| 2,021
|
CC-BY
|
isprs/94a585bf_de4f_4475_b48e_123288205f9e.md
|
# A CapsNets approach to pavement crack detection using mobile laser scanning point clouds
[PERSON]
1 Department of Geography and Environmental Management, University of Waterloo, Waterloo, ON N2L 3G1, Canada - w78 [EMAIL_ADDRESS]; [EMAIL_ADDRESS];[EMAIL_ADDRESS]
[PERSON]
1 Department of Geography and Environmental Management, University of Waterloo, Waterloo, ON N2L 3G1, Canada - w78 [EMAIL_ADDRESS]; [EMAIL_ADDRESS];[EMAIL_ADDRESS]
[PERSON]
1 Department of Geography and Environmental Management, University of Waterloo, Waterloo, ON N2L 3G1, Canada - w78 [EMAIL_ADDRESS]; [EMAIL_ADDRESS];[EMAIL_ADDRESS]
[PERSON]
3 Department of Systems Design Engineering, University of Waterloo, Waterloo, ON N2L 3G1, Canada - [EMAIL_ADDRESS];
[PERSON]
4 Department of Civil Engineering, Ryerson University, Toronto, ON M5B 2K3, Canada - [EMAIL_ADDRESS].
[PERSON]
4 Department of Civil Engineering, Ryerson University, Toronto, ON M5B 2K3, Canada - [EMAIL_ADDRESS].
###### Abstract
Routine pavement inspection is crucial to keep roads safe and reduce traffic accidents. However, traditional practices in pavement inspection are labour-intensive and time-consuming. Mobile laser scanning (MLS) has proven a rapid way of collecting large numbers of highly dense point clouds covering roadway surfaces. Handling such a huge amount of unstructured point clouds is still a very challenging task. In this paper, we propose an effective approach for pavement crack detection using MLS point clouds. Road surface points are first converted into intensity images to improve processing efficiency. Then, a Capsule Neural Network (CapsNet) is developed to classify the road points for pavement crack detection. Quantitative evaluation showed that our method achieved a recall, precision, and F\({}_{1}\)-score of 95.3%, 81.1%, and 88.2% in the testing scene, respectively, demonstrating that the proposed CapsNet framework can accurately and robustly detect pavement cracks in complex urban road environments.
Footnote †: Corresponding Author
## 1 Introduction
### Motivation
Pavement cracks are common forms of damage on pavement surfaces and signal potential damage in the supporting structures ([PERSON], 1991). Road surface defects may cause severe traffic problems, such as congestion, delay, and even safety hazards. [PERSON] et al. (2018) indicated that there would be an increasing demand for Pavement Maintenance and Rehabilitation programs worldwide. Road cracks tend to deteriorate due to environmental factors ([PERSON] et al., 2004). If cracks are not sealed in time, water infiltration into the lower layers, especially during freeze-thaw cycles in the snowmelt period, will exacerbate the damage to the road and lead to the formation of network cracks. Therefore, it is important to prevent and repair early cracks in the pavement. However, traditional road crack detection usually relies on human inspection, limiting the accuracy and efficiency of the measurement ([PERSON] et al., 2019). Common practice on the road is usually time-consuming, dangerous, labour-intensive, and subjective. Thus, the trend is to replace traditional crack detection methods with automated or semi-automated ones. Semi-automated methods combine human intervention and machine processing, while automated methods require minimal human assistance. Automated and semi-automated technologies make it possible to develop real-time pavement distress detection.
Mobile laser scanning (MLS) provides high-density data by close-range acquisition, which ensures that even the smallest features are captured in the resulting point clouds. The densities of such point clouds vary significantly depending on several factors, including the driving speed during acquisition, the distances from the laser scanner to the surfaces reflecting the laser's energy, and the laser repetition rate. Densities measured in the hundreds or even thousands of points per square meter are common. Moreover, compared with traditional image data, MLS data provide highly accurate spatial information ([PERSON] et al., 2020). The laser scanners used in high-end MLS systems are typically accurate to a few millimeters. Positioning accuracy of 1 to 2 centimeters is possible with careful planning, quality hardware, favorable GPS conditions, and supplemental ground control. Additionally, MLS systems enable mobile data collection along roads and constructions and provide affordable 3D databases for GIS analysis ([PERSON] et al., 2019). Specifically for crack detection, instead of the color differences in RGB images, the intensity differences in the 2D images generated from MLS data present the cracks clearly.
### Objectives of the study
This paper aims to propose an efficient deep learning-based framework to provide pavement crack feature detection, which can be used to maintain and manage road construction. The data used in this paper is 3D point cloud data obtained from an MLS system. The main objectives of this study are as follows: (1) Applying Capsule Neural Network (CapsNet) to pavement inspection and (2) Analyzing the advantages and disadvantages of applying the CapsNet to pavement crack detection.
### Related Work
#### 1.3.1 Rule-based methods

Rule-based crack detection was one of the earliest approaches presented for semi-automated pavement crack detection ([PERSON] et al., 2020). [PERSON] et al. (1994) proposed a rule-based system containing facts and variable rules created from prominent features of different types of distress. The major processing steps were gathering information from the input image and then deciding which distress pattern it matched. The results achieved an accuracy of about 85% to 90%. Moreover, [PERSON] et al. (2016) proposed an approach based on minimal path selection algorithms with a refined artifact-filtering step to estimate the thickness of the crack pattern. [PERSON] et al. (2011) proposed an approach combining a series of image processing techniques: the image was first preprocessed to enhance linear features, and confusing areas, such as joints on pavements, were eliminated; then a seed-based approach combining multiple directional non-minimum suppression with a symmetry check was applied. In general, rule-based crack detection makes it easy to verify pavement cracks, as it does not require an annotation and training process ([PERSON] et al., 2020). However, in these methods most of the features were crafted manually on particular datasets, so not all the variation in real-life images could be accounted for, especially illumination changes or irregular crack shapes. Rule-based methods therefore do not work well in changing conditions.
#### 1.3.2 Learning-based methods
As stated above, the rule-based methods for pavement crack detection left room for more efficient and accurate approaches, and learning-based algorithms have been studied by many researchers in the last decade. Tensor voting includes two major steps: representing the data using tensor calculus, and combining the data by nonlinear voting, which comprises sparse tensor voting and dense tensor voting ([PERSON] et al., 2014). Iterative tensor voting (ITV) contains three steps, namely preprocessing of the MLS data, GRF image generation, and ITV-based crack detection, of which the third step contains the core algorithm of the whole process. The ITV method achieved much more accurate results; however, it requires intensive computation ([PERSON] et al., 2014).
CNNs have become state-of-the-art for various image analysis tasks ([PERSON] et al., 2020). A basic neural network normally consists of three layers: an input layer, hidden layers, and an output layer ([PERSON] et al., 1999). In a typical CNN model, the hidden layers comprise a group of convolutional layers, in which the input is convolved with learned kernels ([PERSON] et al., 1999). The activation function is commonly a ReLU layer, followed by additional layers such as pooling layers, fully connected layers, and normalization layers. These layers are referred to as hidden layers because the activation function and final convolution mask their inputs and outputs.
[PERSON] et al. (2019) established a geometric graph CNN based on MLS data. To learn the major features from the MLS point sets, they combined it with a Taylor Gaussian mixture model network. This algorithm reduces the computation cost with guaranteed segmentation performance; however, multi-object connected-area labeling was limited by the limited receptive field of TGConv. Furthermore, real-time road crack mapping based on MLS technology was introduced by [PERSON] et al. (2019), combining a CNN with a Bayesian optimization algorithm to improve precision and decrease processing time. As a result, they achieved over 90% accuracy on real-time images and videos.
In conclusion, there are two main types of data for crack detection, i.e., images and MLS data. Rule-based and deep learning-based methods can be used for image-based crack detection ([PERSON] et al., 2020). Moreover, for MLS data-based crack detection, there are two main families of methods: 2D georeferenced feature (GRF) image-driven detection and 3D point-driven detection ([PERSON] et al., 2018).
## 2 Method
This experiment used MLS point clouds and converted them into intensity images. The generated images were then divided into training, validation, and testing datasets. Finally, the CapsNet was proposed for pavement crack detection. The CapsNet model has three stages, which are the Rectified Linear Unit (ReLU) convolution, primary capsules, and convolution capsules. Additionally, the evaluation metrics used in the experiment are recall, precision, and F\({}_{1}\)-score.
### Data
The MLS data were collected in April 2012 by a RIEGL VMX-450 MLS system integrating two RIEGL VQ-450 scanners, whose laser pulse repetition rates reach up to 550 kHz ([PERSON] et al., 2014). The average speed of the traveling vehicle was 50 km/h. The two laser scanners were mounted symmetrically on the left and right sides, oriented towards the rear of the vehicle at a heading angle of approximately 145\({}^{\circ}\) in the "butterfly" (or "X") configuration pattern. With this pattern, the RIEGL VQ-450 can scan a full 360\({}^{\circ}\) circle owing to its motorized mirror scanning mechanism.
According to the RIEGL website (2013), the system provides a measurement rate of 1.1 million pts/sec and a scan frequency of 400 lines/sec. The point density drops sharply perpendicular to the travel lines, which means that points closer to the vehicle trajectory have a higher density. The average point density on the road is about 3,300 points/m\({}^{2}\). A dataset of 8.4 million points over a length of 105 m was selected from the whole survey; it covers a two-lane cement-paved road segment of about 3,105 m. All the MLS data used in this experiment were pre-processed with registration operations. Figure 1 shows the generated georeferenced intensity image of the road and Figure 2 shows the labeled pavement cracks highlighted in yellow. A comparison of part of the georeferenced intensity image and the classified image is shown in Figure 3.
### Pavement Crack Detection
#### 2.2.1 Overview of workflow
A basic neural network usually consists of three layers: an input layer, hidden layers, and an output layer ([PERSON], 2017). Specifically, in the CNN algorithms proposed for pavement cracks, the input data are tensors, while the hidden layers contain convolutional layers, ReLU layers, and pooling layers. The most important part of a CNN is the convolution, in which the input tensor is multiplied with the convolution kernel to obtain the feature maps. In this paper, the proposed CapsNet uses capsules to replace the scalar neurons of traditional layers. It mainly includes two stages, linear combination and dynamic routing ([PERSON] et al., 2018). Furthermore, the CapsNet applies a multi-dimensional squashing function and creates tensors by grouping multiple feature channels, and it includes a dynamic routing mechanism ([PERSON] et al., 2018).
Figure 4 shows the processing workflow. The proposed method can be divided into three steps: road segmentation with intensity image generation, data preprocessing, and the CapsNet model. The CapsNet contains four parts: the convolution layer, the primary capsule layer, the convolution capsule layers, and the full capsule layer.
#### 2.2.2 Road Segmentation and Intensity Image Generation
As stated above, the collected MLS data contain a large volume of 3D points with which the entire road scene is covered. In order to narrow the searching region and improve efficiency, we only focus on the processing of road surface points. The cracks are on the road surfaces, and these road surfaces can be regarded as planes. Therefore, we projected MLS point clouds onto the road surface to generate georeferenced intensity images without height information. The computational efficiency can be improved effectively.
The curb-line-based road segmentation method was adopted to separate road surface points from the entire point cloud data ([PERSON] et al., 2014). Instead of processing the discrete, unordered road surface points in 3D space, we rasterized them into a 2D georeferenced intensity image using the inverse distance weighted (IDW) interpolation method. In this experiment, road surface points were vertically partitioned into a series of grids with a specific spatial resolution. The spatial resolution was determined according to the point cloud density. Then, the grid points were interpolated into a single pixel. The gray value was determined by the distances to the grid center and intensities of these points. If a grid contains no points, the associated pixel value is set to be zero.
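The IDW rasterization described above can be sketched as follows. This is a minimal NumPy version; the grid resolution, the 1/d distance weighting, and the function name are illustrative assumptions rather than parameters taken from the paper:

```python
import numpy as np

def rasterize_intensity(points, intensities, resolution=0.02):
    """Rasterize road-surface points into a 2D georeferenced intensity
    image. Each grid cell's gray value is the inverse-distance-weighted
    (IDW) mean of the intensities of the points falling inside it;
    cells containing no points are set to zero."""
    xy = points[:, :2]                       # drop the height coordinate
    mins = xy.min(axis=0)
    cols = ((xy[:, 0] - mins[0]) / resolution).astype(int)
    rows = ((xy[:, 1] - mins[1]) / resolution).astype(int)
    image = np.zeros((rows.max() + 1, cols.max() + 1))
    for r, c in set(zip(rows.tolist(), cols.tolist())):
        mask = (rows == r) & (cols == c)
        # cell-centre coordinates for the distance weights
        centre = mins + (np.array([c, r]) + 0.5) * resolution
        d = np.linalg.norm(xy[mask] - centre, axis=1)
        w = 1.0 / (d + 1e-9)                 # inverse-distance weights
        image[r, c] = np.sum(w * intensities[mask]) / np.sum(w)
    return image
```

A production version would vectorize the per-cell loop, but the sketch makes the per-pixel IDW weighting explicit.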
To divide and classify the data into suitable sizes, the cracks were marked in yellow, as shown in Figure 2. The marked image was then divided into pieces using the same method as for Figure 1. With the labeled dataset, the unlabeled one can easily be classified: the indices of the clipped images containing a crack are recorded from the labeled dataset, and the corresponding pieces are selected from it. The labelled dataset in this study is called the Pavement Crack (PC) dataset.
The training, validation, and testing data were fed into the model in the IDX data format, the binary format also used by the MNIST handwritten digit classification dataset. The total size of the generated intensity image is \(377\times 3770\) pixels. The segmented pieces were labeled as pavement or crack. Of these, 1,000 pieces were selected as training data, 316 pieces as validation data, and 176 pieces as testing data, all randomly selected. The three subsets correspond to 67%, 21%, and 12% of the total dataset, respectively.
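As a minimal illustration of this random split (the function name, fixed seed, and permutation strategy are assumptions for the sketch, not details from the paper), the 1,492 tiles can be partitioned as follows:

```python
import numpy as np

def split_dataset(n_samples, n_train=1000, n_val=316, n_test=176, seed=0):
    """Randomly partition tile indices into training, validation and
    test subsets (roughly 67% / 21% / 12% of the 1,492 crack tiles)."""
    assert n_train + n_val + n_test <= n_samples
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_samples)         # shuffle once, then slice
    return (idx[:n_train],
            idx[n_train:n_train + n_val],
            idx[n_train + n_val:n_train + n_val + n_test])
```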
Figure 4: Flowchart of the proposed method.
#### 2.2.3 Proposed Model:
The first layer, called ReLU Conv1, is a convolution layer that detects simple features and feeds them to the primary capsules. It has 256 kernels, each of size \(9\times 9\) ([PERSON], [PERSON], 2017), so the parameters of this convolution are of size K\(_{2}\times\)K\(_{2}\times\)256 ([PERSON] et al., 2017). The second layer, called the primary capsule layer, combines the convolutional layer features ([PERSON] et al., 2019). Each primary capsule convolves the data with eight convolutional kernels of size K\(_{3}\times\)K\(_{3}\times\)8. Furthermore, through the convolution capsules and the full capsule, the 8-D vectors are converted into 16-D vectors.
In the capsule layer, the transformation is calculated by the following equation:

\[\text{S}_{\text{j}}=\sum_{\text{i}}\text{C}_{\text{ij}}\text{d}_{\text{ij}} \tag{1}\]
In Eq. (1), \(\text{S}_{\text{j}}\) denotes the total input to the _j-th_ capsule, \(\text{C}_{\text{ij}}\) is the coupling weight for the connection between the _i-th_ and the _j-th_ capsules, and \(\text{d}_{\text{ij}}\) is the transformed input from the _i-th_ capsule to the _j-th_ capsule. Moreover, \(\text{d}_{\text{ij}}\) is calculated by:
\[\text{d}_{\text{ij}}=\text{W}_{\text{ij}}\text{u}_{\text{i}} \tag{2}\]
As mentioned above, \(\text{d}_{\text{ij}}\) is the input to the _j-th_ capsule. In addition, \(\text{W}_{\text{ij}}\) is the weight matrix of the transformation from the _i-th_ to the _j-th_ capsule, and \(\text{u}_{\text{i}}\) is the output of the _i-th_ capsule. These transformations allow the network to learn part-whole relationships, instead of detecting independent features by filtering portions of the image at different scales ([PERSON] et al., 2011).
In routing by agreement, the outputs from one capsule are routed to capsules in the next layer according to the child capsule's ability to predict the parent capsule's outputs ([PERSON] et al., 2011). The squashing function, a non-linearity, scales the unit-direction vector by a factor between 0 and 1 ([PERSON] et al., 2020):
\[\text{u}_{\text{j}}=\frac{\left\lVert\text{S}_{\text{j}}\right\rVert^{2}}{1+\left\lVert\text{S}_{\text{j}}\right\rVert^{2}}\frac{\text{S}_{\text{j}}}{\left\lVert\text{S}_{\text{j}}\right\rVert} \tag{3}\]
where \(\text{u}_{\text{j}}\) is the vector output of _j-th_ capsule, and \(\text{S}_{\text{j}}\) is its input.
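Eqs. (1) through (3), together with the routing-by-agreement step, can be sketched in NumPy as follows. The array shapes, the softmax form of the coupling coefficients, and the three routing iterations are standard dynamic-routing choices assumed for illustration, not values taken from this paper:

```python
import numpy as np

def squash(s, axis=-1, eps=1e-9):
    """Eq. (3): scale a vector's direction by ||s||^2 / (1 + ||s||^2),
    shrinking short vectors towards 0 and long ones towards length 1."""
    sq_norm = np.sum(s ** 2, axis=axis, keepdims=True)
    return (sq_norm / (1.0 + sq_norm)) * s / np.sqrt(sq_norm + eps)

def dynamic_routing(u, W, n_iter=3):
    """Route lower-capsule outputs u (n_in, d_in) to upper capsules using
    weights W (n_in, n_out, d_out, d_in); returns (n_out, d_out)."""
    # Eq. (2): prediction vectors d_ij = W_ij u_i
    d = np.einsum('iokd,id->iok', W, u)
    b = np.zeros(W.shape[:2])                # routing logits b_ij
    for _ in range(n_iter):
        # coupling coefficients C_ij: softmax of b over the output capsules
        c = np.exp(b) / np.exp(b).sum(axis=1, keepdims=True)
        # Eq. (1): S_j = sum_i C_ij d_ij, then squash to obtain u_j
        s = np.einsum('io,iok->ok', c, d)
        v = squash(s)
        # agreement: raise b_ij where prediction d_ij matches output v_j
        b = b + np.einsum('iok,ok->io', d, v)
    return v
```

The dot product in the last step is what "routing by agreement" means in practice: predictions that align with the current parent output receive a larger coupling weight on the next iteration.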
#### 2.2.4 Performance Assessment:
In this experiment, the performance assessment on the training and validation data is based on the loss function. During training, the loss function is:
\[\begin{split}\text{L}_{\text{k}}=\text{T}_{\text{k}}\max\left(0, \text{m}^{+}-\left\lVert\text{v}_{\text{k}}\right\rVert\right)^{2}\\ +\alpha(1-\text{T}_{\text{k}})\max\left(0,\left\lVert\text{v}_{ \text{k}}\right\rVert-m^{-}\right)^{2}\end{split} \tag{4}\]
In the loss equation, \(k\) indexes the classes, and \(\alpha\) is set to 0.5 to balance the two loss terms. \(\text{T}_{\text{k}}=1\) when class \(k\) is present, with the margins set to \(\text{m}^{+}=0.9\) and \(\text{m}^{-}=0.1\) ([PERSON] et al., 2018).
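Eq. (4) can be written out directly; the following NumPy sketch (function name and one-hot target encoding are assumptions for illustration) sums the margin loss over the classes:

```python
import numpy as np

def margin_loss(v_norms, targets, m_pos=0.9, m_neg=0.1, alpha=0.5):
    """Eq. (4) summed over the K classes. v_norms: lengths ||v_k|| of the
    output capsules; targets: one-hot vector T_k."""
    # penalty when class k is present but its capsule is short
    present = targets * np.maximum(0.0, m_pos - v_norms) ** 2
    # down-weighted penalty when class k is absent but its capsule is long
    absent = alpha * (1.0 - targets) * np.maximum(0.0, v_norms - m_neg) ** 2
    return float(np.sum(present + absent))
```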
Moreover, evaluation on the testing data is also part of the performance assessment. The main difference between evaluation and training is the dataset processed: evaluation uses the testing data, while training uses the other two subsets. Both apply the capsule operations described above.
Furthermore, the accuracy is also assessed by comparing the extracted road cracks with the manually labeled ground truth. The results are quantitatively evaluated by three measures: recall, precision, and \(\text{F}_{1}\)-score. Recall describes whether the pavement crack markings are completely extracted, while precision indicates the percentage of valid markings. Recall and precision are defined as ([PERSON] et al., 2014):
\[\begin{cases}recall=Cp/Rf\\ precision=Cp/Ep\end{cases} \tag{5}\]
where \(Cp\) means the number of pixels belonging to the actual pavement cracks, \(Rf\) shows the amount of ground-truth collected by the manual interpretation, and \(Ep\) represents the number of pixels extracted by the proposed algorithm. \(\text{F}_{1}\)score shows an overall score, which is defined as ([PERSON] et al., 2014):
\[\text{F}_{1}\text{-score}=2\times\frac{recall\times precision}{recall+precision} \tag{6}\]
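A minimal pixel-wise sketch of Eqs. (5) and (6) in NumPy, using the standard harmonic-mean form of the F\({}_{1}\)-score; the function name and the boolean-mask inputs are assumptions for illustration:

```python
import numpy as np

def crack_metrics(predicted, ground_truth):
    """Pixel-wise Eqs. (5)-(6). predicted / ground_truth: boolean crack
    masks. Cp = correctly extracted crack pixels, Rf = ground-truth
    crack pixels, Ep = extracted crack pixels."""
    cp = np.logical_and(predicted, ground_truth).sum()
    rf = ground_truth.sum()
    ep = predicted.sum()
    recall = cp / rf
    precision = cp / ep
    f1 = 2 * recall * precision / (recall + precision)
    return recall, precision, f1
```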
## 3 Results and Discussion
### Experiment Results
Figure 5 shows the overall accuracy of the classification. To compare and visualize the results, several scatter diagrams with trend lines were created.
Figures 5 and 6 show the accuracy of the corrected experiment with the iteration count set to 3. To clearly present the accuracy trends, the first step is not shown in Figures 5 and 6, as its accuracy is close to 0. Both figures present upward trends during training. In Figure 5, the initial accuracy is relatively dispersed and unstable, but it gradually increases as training proceeds. In Figure 6, the validation accuracy increases stably between 0.980 and 0.995. With the epoch count set to 50 and the iteration count set to 3, training on the dataset takes about two hours.
### Loss Value
Figure 7 presents the best loss-value trend, obtained with the iteration count set to 3 rather than 1 or 2; it shows relatively concentrated data. Figures 7 and 8 show the loss values of the experiment and their trends during training. The loss function in this experiment is used to optimize the CapsNet. The loss value sums the errors made on each example in the training or validation set: if the model's predictions were perfect, the loss would be zero, while a greater value indicates worse predictions.
Figure 8 is a trend-line graph comparing the loss values for different iteration counts. All three lines show an overall decreasing trend. The orange line is the trend of the loss with three iterations, the blue line corresponds to two iterations, and the green line to one iteration. The loss values of these three experiments range from about 0.0005 down to 0.0003. Comparing the three lines, the orange line stays lower than the other two.
### Training, Validation and Test Accuracy
The training and validation accuracy measure the model on the data it was constructed from. Figures 5, 6, and 9 illustrate the training and validation accuracy. Figure 9, a sample training-accuracy curve of an overfitting model, shows a trend from 0 to 100%: after step 530, the accuracy is 100%. Since the training accuracy reaches 100%, the model is overfitting. Overfitting refers to a model that has learned the training dataset too well, including the statistical noise or random fluctuations in it. The problem with overfitting is that the more specialized the model becomes to the training data, the less well it generalizes to new data, resulting in an increasing generalization error, which can be measured by the model's performance on the validation dataset. The three main causes of overfitting are high model complexity, insufficient training data, and large data noise. In this experiment, the overfitting was caused by inaccuracies in the training data; after re-selecting the crack datasets, the overfitting problem was solved.
### Accuracy Assessment
The performance of the proposed method was evaluated by three measures: recall, precision, and F\({}_{1}\)-score. As shown in Table 1, the recall, precision, and F\({}_{1}\)-score values are all greater than 80.0%, and all of them increase as the number of iterations increases. The highest values in this experiment are 95.3%, 81.1%, and 88.2%, respectively.
## 4 Conclusions and Recommendations
In this paper, we proposed a CapsNet-based model for pavement crack segmentation, for applications in pavement management systems. The proposed method contains three main steps: road segmentation with intensity image generation, data preprocessing, and crack detection using the proposed CapsNet model. The accuracy assessment, based on recall, precision, and F\({}_{1}\)-score, achieved best scores of 95.3%, 81.1%, and 88.2%, respectively. Moreover, the comparison shows that a higher iteration count yields a better model, but consumes more computational resources and time. The experimental results show an overall high accuracy throughout the testing process.
In conclusion, the CapsNet efficiently encodes inherent features from MLS point clouds, contributing to effective and accurate pavement crack detection, especially in urban road scenarios. This experiment could be improved by optimizing the methods for inputting different data types, especially common remote sensing formats. For example, if BMP-format data could be extracted and tested directly, real-time crack detection would become possible.
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline
**Iteration Times** & **Recall (\%)** & **Precision (\%)** & **F\({}_{1}\) score (\%)** \\ \hline
1 & 88.4 & 80.3 & 81.4 \\ \hline
2 & 93.2 & 80.9 & 87.7 \\ \hline
3 & 95.3 & 81.1 & 88.2 \\ \hline \end{tabular}
\end{table}
Table 1: Results of performance assessment.
Figure 8: Comparison of loss.
Figure 7: Best loss value of the proposed model.
Figure 9: Train Accuracy of the overfitting model.
## References
* [PERSON] et al. (2016) [PERSON], [PERSON], [PERSON], & [PERSON], 2016. Continuous health monitoring of pavement systems using smart sensing technology. _Construction and Building Materials_, 114, 719-736.
* [PERSON] et al. (2016) [PERSON] [PERSON], [PERSON] [PERSON], [PERSON], & [PERSON], 2016. Automatic crack detection on two-dimensional pavement images: An algorithm based on minimal path selection. _IEEE Transactions on Intelligent Transportation Systems_, 17(10), 2718-2729.
* [PERSON] et al. (2011) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON],.. [PERSON] [PERSON] 2011. Adaptive road crack detection system by pavement classification. _Sensors_, 11(10), 9628-9657.
* [PERSON] et al. (2016) [PERSON] [PERSON], [PERSON], [PERSON], & [PERSON] 2016. Region-based convolutional networks for accurate object detection and segmentation. _IEEE Transactions on Pattern Analysis and Machine Intelligence_, 38(1), 142-158.
* [PERSON] et al. (2014) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], & [PERSON] 2014. Using mobile laser scanning data for automated extraction of road markings. _ISPRS Journal of Photogrammetry and Remote Sensing_, 87, 93-107.
* [PERSON] et al. (2015) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], & [PERSON] 2015. Iterative tensor voting for pavement crack extraction using mobile laser scanning data. _IEEE Transactions on Geoscience and Remote Sensing_, 53(3), 1527-1537.
* [PERSON] et al. (2011) [PERSON], [PERSON], [PERSON], & [PERSON] 2011. Transforming auto-encoders. _In International Conference on Artificial Neural Networks_, pp. 44-51.
* [PERSON] et al. (2018) [PERSON], [PERSON], [PERSON], & [PERSON] 2018. Matrix capsules with EM routing. _In 6 th International Conference on Learning Representations (ICLR)_, 1-15.
* [PERSON] et al. (2018) [PERSON], [PERSON], & [PERSON] 2018. Capsule networks against medical imaging data challenges. _In Intravascular Imaging and Computer Assisted Stenting and Large-Scale Annotation of Biomedical Data and Expert Label Synthesis_, pp. 150-160.
* [PERSON] et al. (2017) [PERSON] [PERSON], [PERSON], [PERSON] [PERSON], & [PERSON] [PERSON] 2017. Deep learning classification of land cover and crop types using remote sensing data. _IEEE Geoscience and Remote Sensing Letters_, 14(5), 778-782.
* [PERSON] et al. (2020) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], & [PERSON] 2020. Deep learning for LiDAR point clouds in autonomous driving: a review. _IEEE Transactions on Neural Networks and Learning Systems_. doi: 10.1109/TNNLS.2020.3015992.
* [PERSON] et al. (2020) [PERSON], [PERSON], [PERSON], L., [PERSON], [PERSON], [PERSON], & [PERSON] 2020. TGNet: Geometric graph CNN on 3-D point cloud segmentation. _IEEE Transactions on Geoscience and Remote Sensing_, 58(5), 3588-3600.
* [PERSON] (1991) [PERSON], 1991. Standardization of distress measurements for the network-level pavement management system. _Pavement Management Implementation_. ASTM International. doi: 10.1520/STIP17817S.
* [PERSON] et al. (2018) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], & [PERSON] 2018. Mobile laser scanned point-clouds for road object detection and extraction: A review. _Remote Sensing_, 10(10), 1531.
* [PERSON] et al. (2020) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], & [PERSON] 2020. Capsule-based networks for road marking extraction and classification from mobile LiDAR point clouds. _IEEE Transactions on Intelligent Transportation Systems_, 22(4), 1981-1995.
* [PERSON] (2004) [PERSON], 2004. _Automated pavement distress collection techniques_, vol. 334. Transportation Research Board. doi:10.1722623348.
* [PERSON] & [PERSON] (2020) [PERSON], & [PERSON] 2020. A cost-effective solution for pavement crack inspection using cameras and deep neural networks. _Construction and Building Materials_, 256, 119397.
* [PERSON] et al. (2019) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], & [PERSON] 2019. Real-time road crack mapping using an optimized convolutional neural network. _Complexity_, 1-17.
* [PERSON] et al. (2018) [PERSON] A., [PERSON], & [PERSON] [PERSON] 2018. Pavement distress detection methods: A review. _Infrastructures_, 3(4), 58-76.
* [PERSON] et al. (2017) [PERSON] [PERSON], [PERSON], & [PERSON] 2017. Dynamic routing between capsules. _arXiv preprint_, arXiv:1710.09829.
* system approach to distress detection on CRC pavement. _Journal of Transportation Engineering_, 120(1), 52-64.
* [PERSON] et al. (2019) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], & [PERSON] 2019. A hybrid capsule network for land cover classification using multispectral LiDAR data. _IEEE Geoscience and Remote Sensing Letters_, 17(7), 1263-1267.
|
isprs
|
A CAPSNETS APPROACH TO PAVEMENT CRACK DETECTION USING MOBILE LASER SCANNING POINT CLOUDS
|
W. Zhu, W. Tan, L. Ma, D. Zhang, J. Li, M. A. Chapman
|
https://doi.org/10.5194/isprs-archives-xliii-b1-2021-39-2021
| 2021
|
CC-BY
|
isprs/69389635_5fda_4dd8_90d9_73e9fd4e7242.md
|
Improving spherical photogrammetry using 360\({}^{\circ}\) omni-cameras: use cases and new applications
[PERSON]
1 UNIVPM, Dipartimento di Ingegneria Civile, Edilie e dell' Architettura, 60131 Ancona, Italy (g.fangi, r.piedicca, e.s.maliweymi)@staff.univpm.it
[PERSON]
1 UNIVPM, Dipartimento di Ingegneria dell' Informazione, 60131 Ancona, Italy (d.fangi, r.piedicca, e.s.maliweymi)@staff.univpm.it
[PERSON]
2 UNIVPM, Dipartimento di Ingegneria dell' Informazione, 60131 Ancona, Italy (d.fangi, r.piedicca, e.s.maliweymi)@staff.univpm.it
[PERSON]
1 UNIVPM, Dipartimento di Ingegneria dell' Informazione, 60131 Ancona, Italy ([EMAIL_ADDRESS]
###### Abstract
During the last few years, there has been growing exploitation of consumer-grade cameras capable of capturing 360\({}^{\circ}\) images. Each device has different features, and the choice should depend on the use and the expected final output. The interest in this technology within the research community relates to its versatility, enabling the user to capture the world with an omnidirectional view in just one shot. The potential is huge, and the literature presents many use cases in several research domains, spanning from retail to construction, from tourism to immersive virtual reality solutions. However, the domain that could benefit the most is Cultural Heritage (CH), since these sensors are particularly suitable for documenting a real scene with architectural detail. Following the previous research conducted by Fangi, who introduced his own methodology called Spherical Photogrammetry (SP), the aim of this paper is to present some tests conducted with the Panono 360\({}^{\circ}\) omni-camera, which reaches a final resolution comparable to that of a traditional camera, and to validate, almost ten years after the first experiment, its reliability for architectural surveying purposes. Tests were conducted choosing as study cases _Santa Maria della Piazza_ and _San Francesco alle Scale_ in Ancona, Italy, since they were previously surveyed and documented with the SP methodology. In this way, it has been possible to validate the accuracy of the new survey, performed by means of an omni-camera, against the previous one for both outdoor and indoor scenarios. The core idea behind this work is to verify whether this new sensor can replace the standard image-collection phase, speeding up the process while assuring the final accuracy of the survey. The experiments conducted demonstrate that, with respect to the SP methodology developed so far, the main advantage of using 360\({}^{\circ}\) omni-directional cameras lies in the increased speed of the acquisition and panorama-creation phases. Moreover, in order to foresee the implications that a wide adoption of fast and agile acquisition tools could bring to the CH domain, point clouds have been generated from the same panoramas and visualized in a web application, to allow the results to be disseminated among users.
The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Volume XLII-2, 2018, ISPRS TC II Mid-term Symposium "Towards Photogrammetry 2020", 4-7 June 2018, Riva del Garda, Italy
## 1 Introduction
Spherical Photogrammetry (SP), introduced by [PERSON] more than one decade ago ([PERSON], 2007), ([PERSON], 2010), ([PERSON], 2011), proved to be still a powerful photogrammetric technique for Architectural Heritage (AH) surveys and documentation. It allows one to collect perspective images that, once transformed into panoramas, enable the surveyor to rebuild the geometries of a building with a "point by point" plotting. This is not trivial since, whereas in case of complex shapes (like sculptures, decorations, mouldings and so on) point clouds are the winning solution, for the representation of simpler geometries (e.g. the facades of a Roman church) a "point by point" plotting is preferable. Besides, the collection of panoramic images of ancient sites might represent a valuable source of documentation in those cases where, due to war or natural hazards, architectures are seriously damaged or even lost ([PERSON] et al., 2011), ([PERSON], 2015). Of course, SP still has some drawbacks; for instance, the plotting phase, as well as the creation of panoramas, is very time consuming. However, even if the former is far from being solved and needs further investigation in order to automatise the detection of homologous points, nowadays technological progress opens up new possibilities to speed up the latter. During the last few years, in fact, there has been a growing exploitation of consumer-grade cameras allowing one to capture 360\({}^{\circ}\) images. Among the others, the most affordable are the Ricoh Theta S, LG 360\({}^{\circ}\) CAM, Samsung Gear 360\({}^{\circ}\) and Nikon KeyMission 360\({}^{\circ}\). Considering more professional devices instead, it is worth mentioning the GoPro OMNI with spherical head, the Sphericam V2 and the Panono 360\({}^{\circ}\) Camera. In this regard, in 2009 a survey test conducted with the Ladybug 3 was presented during the ASITA conference held in Bari, Italy ([PERSON] and [PERSON], 2009).
The tool was composed of 6 cameras, 5 of which with horizontal axis and the sixth with vertical axis. During this experience, it was demonstrated that, despite the low resolution and the offset between shooting and camera centres, it is possible to perform a survey, at the expense of accuracy. Thanks to the technological enhancement of new tools, the resolution of the final panorama has been strongly increased, reaching 104 MP for the Panono 360\({}^{\circ}\) camera\({}^{1}\). It is composed of 36 cameras, displaced over a sphere of about 15 cm in diameter, covering a complete 360\({}^{\circ}\) field of view. W.r.t. the SP methodology developed so far, the main advantage in using 360\({}^{\circ}\) omni-directional cameras lies in increasing the rapidity of the following three steps of the procedure:
Footnote 1: https://www.panono.com
* Acquisition: there is no need to shoot multiple images with a spherical head, which represents a great benefit, especially for indoor scenarios. Since the camera has a fixed focal length, it is better to avoid shots with a close foreground and a far background. Besides, since lighting conditions in indoor environments can be challenging, it is suggested to use bracketing or HDR techniques in order to compensate for the light exposure.
* Panorama creation: there is no need to perform the stitching between multiple images, which is a very time-consuming step. A simple vertical straightening is enough to make the panorama suitable for the orientation.
* Verticality: the Panono 360\({}^{\circ}\) embeds a verticality sensor which allows one to avoid the time-consuming process of forcing the verticality; in fact, SP works properly only if panoramas are quasi-horizontal.
The use of this kind of sensors is thus particularly appropriate in those situations where a fast acquisition is compulsory, i.e. in case of emergency documentation (earthquakes, war, accidents, forensic applications and so on).
This research paves the way towards new frontiers of SP. Mobile mapping applications can be easily obtained with a simple walk around the area where the Cultural Heritage stands. In this way, capturing 360\({}^{\circ}\) panoramic views of the surroundings, it is possible to achieve a complete metrical reconstruction of the objects. With SP, the surveyor can perform a "point by point" plotting, obtaining the wireframe drawing as output; moreover, exploiting the latest improvements of commercial suites of digital photogrammetry, a Structure from Motion Multi-View Stereo (SfM-MVS) workflow can be performed, creating a dense point cloud from the same panoramas. In this way, the two approaches can be combined and augment the results. A similar application can be found in [17]. Tests have been conducted choosing as study cases the _Santa Maria della Piazza_ and _San Francesco alle Scale_ Churches in Ancona, Italy, since they were previously surveyed and documented with the SP methodology. Hence, the Ground Control Points (GCPs) arising from SP have been used to scale the point clouds (created by means of the panoramas) and to assess the accuracy of the point cloud itself. To demonstrate the feasibility of this kind of application, the same panoramas have been used to generate the point clouds of the Churches, and the final result has been exported for a web-based visualization, exploiting the functionalities of the _Potree_ libraries and, last but not least, disseminating the knowledge of the output. The tool developed allows the user to navigate the model and to perform measurements, interacting directly with the coloured point cloud.
It can thus be summarized that the main contributions of the article are the following: i) improving the rapidity of the acquisition and panorama creation phases, which are fundamental steps of the SP methodology; ii) exploiting an omni-camera for documentation purposes, assessing the reliability of the device for emergency architectural documentation; iii) outlining a pipeline of work which leads from a simple acquisition of 360\({}^{\circ}\) scenes to the web visualization of point clouds.
## 2 Related Work
Exploiting 3D and geo-spatial information for CH-related applications is increasingly common among researchers. However, given the need to collect images in a quick and affordable way, it is becoming paramount for this field to adopt cheaper and more flexible means of both data collection and 3D reconstruction [21]. The answer to these issues is nowadays represented by 360\({}^{\circ}\) images, which enable one to capture the surrounding world. In addition, the derivation of metric results from spherical images for interactive exploration, accurate documentation and realistic 3D modeling is receiving great attention due to the high-resolution contents, large field of view, low cost, easiness, rapidity and completeness of the technique [1]. To date, the most common practice for creating cylindrical or spherical panoramic images generally relies on collecting linear arrays or on a rotating camera, with very high metric performance, mounted on a panoramic head [12]. In other words, a set of partly overlapped images is shot from a unique point of view with a camera rotating around its perspective centre. 3D reconstructions can then be achieved once the panoramas are oriented. In the literature, several examples of complete 3D modelling projects based on this mathematical formulation were described in [10], [14]. In these papers, equirectangular projections were generated from a set of images stitched with a software for panoramic photography (such as PTGui, Autopano, etc.). However, the above-mentioned scientific works highlight that the procedure is very time consuming; besides, the stitching phase leads to the creation of some misalignments that affect the orientation. The solution to such issues is nowadays represented by new sensors that, at the expense of resolution, enable one to collect spherical images with just one shot. In [20] and [21] the topic of exploiting 360\({}^{\circ}\) images for photogrammetric purposes is faced.
The articles attempt to describe a procedure for using images coming from video sequences, highlighting that the images have a relatively low resolution. Hence, this factor reduces the accuracy of photogrammetric orientations. Another negative factor is the way in which the immersive video frame is created. It is not a perfect spherical panorama, because the images that comprise it do not have a common projection centre. However, the advantage of stitched full-spherical images is their number (they make up the video) and the way in which they are recorded, densely along a trajectory. Central-perspective original images have a limited field of view, which causes problems in finding tie points between images, especially in rooms without much detail (e.g. walls painted in one colour). The research on the photogrammetric potential of the immersive video sequence, described in those papers, is focused on the following factors that affected the modelling quality:
* the way in which images are stitched into panoramas;
* the choice of the sphere radius which was declared when creating video frames;
* the density of video sequence images.
Figure 1: The Panono 360° with a schematic representation of the workflow used to create panoramas.
In ([PERSON], 2011), a review of optical 3D measurement sensors and 3D modeling techniques, with their limitations and potentialities, requirements and specifications, is discussed, together with examples of 3D surveying and modeling of heritage sites and objects. In the literature, the work that is closest to the research presented in these pages can be found in ([PERSON] et al., 2017). In this paper, the authors perform some experiments with the Samsung Gear 360\({}^{\circ}\). Even if the sensor is very cheap, its resolution does not allow one to achieve the desired accuracy. Moreover, this kind of camera shoots a panoramic image exploiting two fisheye lenses that have to be, in any case, stitched one to another. Considering the research conducted so far, 360\({}^{\circ}\) imaging sensors have huge potential for photogrammetric measurements, even if further studies need to be undertaken, especially on the optimisation of measurement accuracy and its economics. In the following, in line with the most recent research trends, our approach is proposed.
## 3 Materials and Methods
### Spherical Photogrammetry
The photogrammetric outputs based on SP are the result of a first CAD drawing which requires further steps in order to achieve the final 3D model. In fact, there are many variants of the workflow due to the possibility of integrating various techniques. The advantages are the high resolution, the FOV up to 360\({}^{\circ}\), the low cost, the completeness of the information and the high speed of taking photos. On the contrary, the creation of the panoramas, as well as the plotting and the orientation, are, up to now, fully manual. Nowadays more accurate and efficient tools and instruments are available for CH recording, such as laser scanning, SLAM and dense multi-view 3D reconstruction, but they are still very expensive. SP was mainly conceived and designed for cultural and architectural metric documentation with low-cost solutions and in hazard conditions. Following the classical photogrammetric procedure, performing a bundle block adjustment ([PERSON] and [PERSON], 2013), it is possible to start from the SP orientation and to use 3D modeling tools to create the 3D model based on the rules of projective geometry, with a method called panoramic image-based interactive modeling. This technique is suitable for architectural representation because it is a "point by point" survey (conversely to dense matching techniques, which produce point clouds) and it exploits the geometrical constraints of the architecture's geometry to simplify the 3D modeling process. In other words, this method is appropriate when dealing with well-defined objects that can be reconstructed from basic geometries on distinct planes; conversely, when dealing with complex shapes and surfaces that cannot be represented with well-defined objects (i.e. archaeological findings), the approximation by finite elements shall be replaced with more complex meshes. Therefore, the surveyor has to comprehend the geometry of the architecture before modelling it.
In this approach, the modeling methodology is based on the use of texture mapping techniques in a generic modelling software as a virtual projector of an image, which is then used to model an architectural object. If the projection centre and the orientation are fixed in the 3D virtual space, objects can be created, moved and modified to match the projections ([PERSON], 2016).
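The geometric idea underlying SP plotting can be illustrated in a few lines: each (azimuth, zenith) angle pair measured on an oriented panorama defines a direction ray from the panorama centre, and rays from two or more oriented panoramas are intersected to plot a point. The following is a minimal sketch of the ray construction (names are ours, not from the Sphera suite):

```python
import math

# Sketch (illustrative, not Fangi's implementation): direction ray from a
# panorama centre for a given azimuth/zenith pair, with z as the vertical axis.
def ray_from_angles(azimuth_rad, zenith_rad):
    """Unit direction vector for an (azimuth, zenith) pair in radians."""
    return (
        math.sin(zenith_rad) * math.cos(azimuth_rad),
        math.sin(zenith_rad) * math.sin(azimuth_rad),
        math.cos(zenith_rad),
    )

r = ray_from_angles(0.0, math.pi / 2)  # horizontal ray along +X
```

Two such rays, expressed in a common reference system after orientation, intersect (in a least-squares sense) at the plotted point.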
### Data collection and processing
As stated in the previous section (Section 2), image acquisition based on 360\({}^{\circ}\) cameras is not so widespread, hence the data acquisition phase deserves a brief explanation. The Panono 360\({}^{\circ}\) camera is a ball-shaped camera containing 36 camera modules, a processor, memory and an accelerometer. It can be used in the following ways: triggering it via a built-in shutter button when holding it by hand, or mounting it on a tripod or pole and triggering it remotely through a mobile phone application. Once the images have been captured, they are transferred via Wi-Fi or Bluetooth to the connected smartphone, where a first low-resolution preview image is stitched. However, to get the final stitched panorama, the original image data have to be sent to Panono's cloud servers, where the stitching of the final output panorama is performed. After the stitching process, the output panorama comes back, ready to be visualized with Google Photosphere or similar 360\({}^{\circ}\) visualization tools. Users also have access to the original images captured by Panono's 36 individual camera modules. By default the entire process is fully automated; however, from the application it is possible to control key parameters such as shutter speed, ISO and white balance. The most important feature, fundamental for indoor environments, is the HDR mode.
Without any doubt, the Panono 360\({}^{\circ}\) is the fastest way to collect 360\({}^{\circ}\) images existing so far, and this represents the main improvement for SP. Moreover, the fact that all individual frames of the sphere are captured at the same time also means that there are no problems with moving subjects. From our tests, there is no ghosting, no disappearing limbs, no multiple versions of the same person in the image, nor other artifacts that are quite usual in most stitched panorama images. The equirectangular panorama has a size of 16384x8192 px. Each pixel corresponds to 400/16384 \(\approx\) 0.024 gon.
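As a quick check of the figures above, the angular footprint of one pixel of the 16384x8192 px equirectangular panorama can be computed directly (a back-of-the-envelope sketch; names are illustrative):

```python
# Angular resolution of the equirectangular panorama in gon (400 gon = 360 deg):
# the image width spans a full turn, the height spans half a turn.
W, H = 16384, 8192

def pixel_to_angles_gon(u, v, width=W, height=H):
    """Map a pixel (u, v) to (azimuth, zenith) angles in gon."""
    azimuth = u / width * 400.0   # full turn across the image width
    zenith = v / height * 200.0   # half turn from top to bottom
    return azimuth, zenith

res_gon = 400.0 / W               # angular footprint of one pixel
print(f"{res_gon:.4f} gon/px")    # prints 0.0244 gon/px
```

The image centre maps to (200, 100) gon, i.e. the panorama's forward horizontal direction.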
Examples of panoramas, for both indoor (in HDR mode) and outdoor scenarios, are reported in Figure 2.
Figure 2: Two examples of Panoramas collected with the Panono 360. (a): San Francesco interior. (b): Santa Maria della Piazza exterior.
## 4 Evaluation of Metric Accuracy
Since no previous tests had been conducted for assessing the accuracy for 3D surveying purposes, we set up an accuracy evaluation test conducted inside the laboratories of Universita Politecnica delle Marche. A set of 5 panoramic images of a wall was acquired with the Panono 360\({}^{\circ}\) placed on a tripod. Besides, in order to evaluate whether the imposition of verticality might affect the final quality of the orientation, the bundle adjustment has been calculated using two sub-sets of images: one without the correction of verticality, and one with the correction of verticality, performed using the PTGui stitching software. The estimated calibration parameters were then assumed as constant values for a 3D reconstruction project of a straight wall, on which a set of targets was installed and previously measured with a total station. Images were oriented with Sphera (developed by Fangi), a photogrammetric suite specifically designed for SP, using 55 target points of known coordinates. At a second stage, the orientation was then performed using only 4 fixed points as GCPs, and using the remaining ones as check points. In Figure 3 the testing environment (shot with the Panono 360\({}^{\circ}\)), together with the photogrammetric orientation, are reported.
Statistics are shown in Table 1 and reveal a _sigma-zero_ of about 3 px and 2 px respectively for the two tests previously described. Residuals of observation can thus be considered equivalent, even if the error is reduced when performing the bundle adjustment with only 4 points of known coordinates. This aspect can be explained with the degrees of freedom of the calculation when reducing the number of constraints. In both cases, the results confirm a good metric accuracy for the project.
An F-Fisher test was then calculated to validate the quality of the statistical sample. The F-Fisher test with a significance level \(\alpha=0.01\) shows that the alternative hypothesis holds, i.e. the two sigma-zeros are significantly different. In any case, the typical error is 2 pixels at best. Results of the LSBF (Least Squares Best Fitting) are depicted in Figure 4, representing the differences between the fixed GCPs and the free block. The mean difference, in absolute value, is 0.015 m.
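The variance-ratio statistic behind the F-Fisher test can be sketched as follows, using the sigma-zero values of Table 1 (the critical value against which F is compared would come from F tables for the given degrees of freedom; variable names are ours):

```python
# Variance-ratio (F) test statistic comparing the two bundle-adjustment
# sigma-zeros: the ratio of the larger variance to the smaller one.
sigma0_full = 2.5e-3   # rad, adjustment using all GCPs (~3 px)
sigma0_4gcp = 1.7e-3   # rad, adjustment using 4 GCPs (~2 px)

F = (sigma0_full / sigma0_4gcp) ** 2   # larger variance over the smaller
print(f"F = {F:.2f}")                  # prints F = 2.16
```

The null hypothesis of equal variances is rejected when F exceeds the tabulated critical value for the chosen significance level and degrees of freedom.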
For the sake of completeness, it is worth noting that the capturing scenario of the validation test was not performed in its optimal configuration. In fact, the network of GCPs constrained the acquisition to only one direction, even if a full 360\({}^{\circ}\) plotting might assure a better quality of the test. The reason we used this configuration is that the points used for the test were set up for camera calibration. Besides, it is also worth saying that, in order to avoid parallax errors, panoramas have to be taken at a safe distance from one another; in our case, the distance is acceptable (3 m at worst).
## 5 Results: 3D Modelling and Web Visualization Tool
Results obtained with the described calibration procedure are more than satisfactory and discrepancies between the different tests are negligible. In fact, the residuals of observation between
\begin{table}
\begin{tabular}{|l|l|l|} \hline test & Full GCP & 4 GCP \\ \hline Set 1 & 2.5\(\times\)10\({}^{-3}\) rad (3 px) & 1.7\(\times\)10\({}^{-3}\) rad (2 px) \\ \hline Set 2 (vert. corr.) & 2.5\(\times\)10\({}^{-3}\) rad (3 px) & 1.7\(\times\)10\({}^{-3}\) rad (2 px) \\ \hline \end{tabular}
\end{table}
Table 1: Results of the test field
Figure 4: Graphical representation of the results of the Least Squares Best Fitting
Figure 3: Testing environment used for the evaluation of metric accuracy. (a): One of the 5 stations, where are visible the markers of known coordinates. (b): Orientation network of the photogrammetric model.
the rectified/non-rectified configurations are similar, and performing the bundle adjustment using 4 control points versus using all points of known coordinates gives almost the same results. At this stage, the surveyor has two possibilities, namely performing a "point by point" plotting or creating a point cloud of the architecture by using an MVS photogrammetric tool. As stated in the introduction section, the two churches had already been plotted using SP; in Figure 5 the wireframe drawing of the two Churches is reported.
Some points, coming from the previous SP survey, are used to perform the bundle adjustment of the dense point clouds, as described in the following. To avoid redundancy, the procedure of the experiment will be described using as example only one of the two churches (specifically _San Francesco_); indeed, the data collection phase for the outdoor and indoor scenarios (apart from the activation of the HDR mode) is the same and, in terms of accuracy, no discrepancies have been noticed.
### Orientation and surface extraction from panoramas obtained with the Panono 360\({}^{\circ}\)
Image acquisition with the Panono 360\({}^{\circ}\) was carried out by placing the camera on a pole and taking the pictures with the mobile phone application. This way the acquisition is very fast and does not need further post-processing steps. The camera allows even non-expert users to collect a sufficient amount of information to retrieve a geometrical reconstruction of a building. It is worth noting that, for indoor scenarios, illumination conditions are extremely important, due to the difficulty of having uniform lighting conditions in the whole scene. The collected dataset is composed of 14 panoramas, well distributed over the plan of the church (see Figure 6).
The final goal of this phase was to test the performance of MVS tools dealing with spherical images. In fact, nowadays it is possible to achieve acceptable results in terms of accuracy from the point clouds extracted with the spherical camera model. Given the good resolution of the panoramas, textures allowed the use of dense matching techniques for surface modelling. Image processing was carried out in Agisoft Photoscan(r), enabling the panorama mode. Image orientation with the SfM procedure took about 10 minutes, whereas dense matching for point cloud extraction took more than 3 hours, given the high quality of the images. Before proceeding with the creation of the mesh and the subsequent texturing phase, we performed an accuracy assessment of the point cloud by using 30 control points of known coordinates, well distributed over the church and computed with SP. This step also allowed scaling the model to its real scale. Some images of the meshes generated from a dense point cloud of about 1.5 billion points are reported in Figure 7. Statistics about the point cloud computation are reported in Table 2, whilst residuals and total error are reported in Table 3.
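The scaling step can be illustrated as follows: given control points of known coordinates, a scale factor for the arbitrarily-scaled SfM cloud can be estimated, for instance as the mean ratio of real-world to model pairwise distances (an illustrative sketch with toy data, not the Photoscan implementation):

```python
import math

# Illustrative sketch: estimate the scale of an arbitrarily-scaled SfM model
# from control points whose real-world coordinates are known (here, the
# SP-derived control points would play that role).
def scale_factor(model_pts, world_pts):
    """Mean ratio of world to model distances over all point pairs."""
    def dist(a, b):
        return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))
    ratios = []
    n = len(model_pts)
    for i in range(n):
        for j in range(i + 1, n):
            d_model = dist(model_pts[i], model_pts[j])
            if d_model > 0:
                ratios.append(dist(world_pts[i], world_pts[j]) / d_model)
    return sum(ratios) / len(ratios)

# Toy data: the real world is exactly 2.5x the model scale.
model = [(0, 0, 0), (1, 0, 0), (0, 2, 0)]
world = [(0, 0, 0), (2.5, 0, 0), (0, 5, 0)]
print(scale_factor(model, world))  # ~2.5
```

A full similarity transform (scale, rotation, translation) would be solved jointly in practice; distance ratios isolate the scale component for illustration.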
The results achieved with this procedure deserve some comments. First of all, the residuals are quite high (about 8 cm); the reason should be attributed to the resolution of the panoramas. In fact, even if the image quality is sufficient to perform the MVS procedure, the recognition of homologous points in some parts of the church was difficult (especially for those points farther than 20 m from the shooting centre). Moreover, the quality of the mesh is not convincing at all, being a bit rough. Hence, post-processing steps like cleaning and decimation are necessary.
\begin{table}
\begin{tabular}{|l|l|} \hline Number of images & 14 \\ \hline Average flying altitude & 25.3 m \\ \hline Tie points & 18,789 \\ \hline Reprojection error & 9 px \\ \hline GSD & 1 cm/px \\ \hline \end{tabular}
\end{table}
Table 2: Survey data and processing data.
Figure 5: Wireframe drawing of San Francesco (a) and Santa Maria della Piazza (b), plotted by using Spherical Photogrammetry.
Figure 6: The bottom view of _San Francesco_ church with the positions of the panoramic images used to create the dense point clouds.
\begin{table}
\begin{tabular}{|c|c|c|c|c|} \hline Points n\({}^{\circ}\) & X-sqm (m) & Y-sqm (m) & Z-sqm (m) & Total (m) \\ \hline
30 & 0.0432 & 0.0584 & 0.0398 & **0.0829** \\ \hline \end{tabular}
\end{table}
Table 3: This table reports the overall estimated error for the GCP with ellipse shape in X, Y and Z.
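The total error reported in Table 3 is simply the 3D root-sum-of-squares of the per-axis RMS values, which can be verified directly:

```python
import math

# Check of Table 3: total GCP error as the 3D root-sum-of-squares
# of the per-axis RMS values.
sx, sy, sz = 0.0432, 0.0584, 0.0398          # metres
total = math.sqrt(sx**2 + sy**2 + sz**2)
print(f"total = {total:.3f} m")              # prints total = 0.083 m
```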
### WEB tool for the visualization of point clouds
As the methodology described above has demonstrated to be valuable for CH purposes, it is worth wondering which is the most reliable way to share the information arising from 3D metrical restitution. In fact, while for insiders it is important to use stand-alone tools in order to perform in-depth analyses, it is equally important to convey the results of 3D reconstruction through those channels which allow one to reach the vast majority of the population, since CH belongs to the whole of mankind. In this regard, the use of web solutions should become the paramount medium for disseminating CH knowledge. In the last few years, the WebGL framework has allowed one to overcome the proprietary plug-in issue, and three-dimensional content is slowly becoming a usual type of content for web pages. Take, for example, the experience from mobile mapping applications: documentation with spherical or panoramic photography is becoming a very common practice for many kinds of visual applications (e.g. Google Street View, 1001 Wonders, etc.). With the new potential of web visualization tools, it is possible to foresee that in the upcoming years we might witness a similar experience even for point clouds. The developed tool moves in this direction, paving the way for a thorough exploitation of 3D metrical information to be spread via the web.
From Photoscan, the computed dense cloud can be directly exported in the _Potree_\({}^{2}\) binary format. This data structure can be easily visualised on a web platform based on _Potree_, a free open-source WebGL-based point cloud renderer for large point clouds. The visualization is optimized to be accessed by web browsers, with mobile-responsive interfaces, thanks to the progressive loading and rendering, to two-dimensional height profiles, and to point-wise adaptive point sizes, which adjust the size of each point to the level of detail as additional points are streamed in over time. The main visualization features are:
Footnote 2: http://potree.org
* Data navigation in the space with pan, tilt and zooming functions;
* Layer annotation and selection on point cloud data;
* Metric dimensional measurements of distances, surfaces and volumes.
Images of the web tool developed are depicted in Figure 8. For the purposes of this paper, we report only some pictures of the _Santa Maria della Piazza_ church, referring the interested reader to the links at the bottom of the page for the visualization of both Churches\({}^{3,4}\).
Footnote 3: http://goeserver.dii.univp.in/progetto
Footnote 4: http://goeserver.dii.univp.in/progetto
## 6 Discussion and Conclusion
The research experiments described in this article are promising. The Panono 360\({}^{\circ}\) proved to be suitable for metric reconstruction; the accuracy values and the error tests described in Section 4 can be considered comparable with those of a traditional panorama obtained with a standard camera mounted upon a panoramic head. It can thus be stated that, for upcoming surveys, this sensor
Figure 8: Web visualization of the point clouds. The tool allows one to interact with the points in terms of features and appearance, while a dedicated tool allows the user to infer metric information about the building and, in the outdoor scenario, the surrounding environment. (a): San Francesco alle Scale. (b): Santa Maria della Piazza.
Figure 7: The interior of the Church after the creation of the dense point cloud and 3D mesh, with the application of textures.
can be a valuable alternative to speed up the process of data collection, at no expense to accuracy. The camera is very simple to use, and acquisition can be performed even by non-expert users. Since illumination is fundamental for indoor scenarios, the camera allows a proper acquisition through its HDR mode. As a conclusion, we can state that the pipeline of work is promising, and it is possible to list some points in its favor:
* the acquisition phase is dramatically sped up and is very easy, since it can be performed via mobile phone;
* there is no need to perform the stitching procedure. This leads to a couple of advantages: first of all, stitching is very time consuming; second, panoramas are ready to use, being geometrically more correct than a panorama created with dedicated software;
* there is no need to impose the verticality, so panoramas can be used immediately once received back from the Panono cloud service;
* it is the optimal solution for architectural heritage, especially for indoor environments;
* the number of Ground Control Points needed is very low, as proved by the laboratory test conducted.
The methodology, however, presents some drawbacks. The accuracy values of the point cloud are not fully acceptable: even if the residuals on the GCPs are suitable for architectural scales of representation, the reprojection error is quite high, meaning that the point cloud is a bit coarse; hence, it cannot be used from scratch for modelling purposes but needs further post-processing operations. This aspect will be investigated in the future. Moreover, it would be interesting to evaluate whether increasing the number of panoramas would increase the quality of the point cloud. This aspect, too, will be further investigated. Finally, the web visualization deserves a brief comment. Regardless of the quality of the point cloud, this research paves the way for foreseeing that this kind of fast and agile data collection tool will be used to extensively document entire historical centres, with the twofold purpose of documenting the CH and of sharing its knowledge with mankind, not only through images but also with tools to interrogate the models.
## References
* [PERSON] et al. (2010) [PERSON], [PERSON], [PERSON] and [PERSON], 2010. Automation in multi-image spherical photogrammetry for 3d architectural reconstructions. In: _VAST2010_, pp. 1-6.
* [PERSON] et al. (2017) [PERSON], [PERSON] and [PERSON], 2017. 3D modelling with the Samsung Gear 360. _ISPRS International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences_, pp. 85-90.
* [PERSON] et al. (2011) [PERSON], [PERSON] and [PERSON], 2011. Spherical photogrammetry as emergency photogrammetry. In: _INTERNATIONAL CIPA SYMPOSIUM_, Vol. 23.
* [PERSON] (2007) [PERSON], 2007. The multi-image spherical panoramas as a tool for architectural survey. In: _XXI International CIPA Symposium_, 1-6 October 2007, Athens. ISPRS International Archives, CIPA Archives Vol. XXI-2007, ISSN 0256-1840.
* [PERSON] (2010) [PERSON], 2010. Multiscale multiresolution spherical photogrammetry with long focal lenses for architectural surveys. _International Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences_ 38(Part 5), pp. 1-6.
* [PERSON] (2011) [PERSON], 2011. The multi-image spherical panoramas as a tool for architectural survey. _CIPA HERITAGE DOCUMENTATION_.
* [PERSON] (2015) [PERSON], 2015. Documentation of some cultural heritage emergencies in Syria in August 2010 by spherical photogrammetry. _ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences_ 2(5), pp. 401.
* [PERSON] and [PERSON] (2013) [PERSON] and [PERSON] [PERSON], 2013. Photogrammetric processing of spherical panoramas. _The photogrammetric record_ 28(143), pp. 293-311.
* [PERSON] and [PERSON] (2012) [PERSON] and [PERSON], 2012. Notre dame du haut by spherical photogrammetry integrated by point clouds generated by multi-view software. _International Journal of Heritage in the Digital Era_ 1(3), pp. 461-479.
* [PERSON] and [PERSON] (2009) [PERSON] and [PERSON], 2009. Una esperienza di mobile mapping con la fotogrammetria sferica. In: _Atti 13a Conferenza Nazionale ASITA-Bari_, pp. 1035-1040.
* [PERSON] and [PERSON] (2014) [PERSON] and [PERSON], 2014. Photogrammetric applications of immersive video cameras. _ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences_ 2(5), pp. 211.
* [PERSON] and [PERSON] (2015) [PERSON] and [PERSON] [PERSON], 2015. Immersive photogrammetry in 3d modelling. _Geomatics and Environmental Engineering_.
* [PERSON] and [PERSON] (2004) [PERSON] and [PERSON], 2004. 3-d object reconstruction from multiple-station panorama imagery. _International Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences_ 34(5/W16), pp. 8.
* [PERSON] et al. (2016) [PERSON] [PERSON], [PERSON], [PERSON], [PERSON] and [PERSON], 2016. Virtual reconstruction of archaeological heritage using a combination of photogrammetric techniques: Huaca arco iris, chan chan, peru. _Digital Applications in Archaeology and Cultural Heritage_ 3(3), pp. 80-90.
* [PERSON] (2011) [PERSON], 2011. Heritage recording and 3d modeling with photogrammetry and 3d scanning. _Remote Sensing_ 3(6), pp. 1104-1138.
* [PERSON] and [PERSON] (2006) [PERSON] and [PERSON], 2006. Web-based 3d reconstruction service. _Machine vision and applications_ 17(6), pp. 411-426.
* [PERSON] (2016) [PERSON], 2016. From spherical photogrammetry to 3d modeling. In: _Handbook of Research on Visual Computing and Emerging Geometrical Design Tools_, IGI Global, pp. 96-115.
* (2018) Any additional supporting data may be appended, provided the paper does not exceed the limits given above.
G. Fangi, R. Pierdicca, M. Sturari, E. S. Malinverni, 2018. Improving spherical photogrammetry using 360° omni-cameras: use cases and new applications. https://doi.org/10.5194/isprs-archives-xlii-2-331-2018 (CC-BY).
# Digital Transition Strategies and Training Programs for Digital Curation of Museums
[PERSON]
1 Dept. of Civil and Building Engineering and Architecture, Polytechnic University of Marche, Ancona, Italy - (r.quattrini, r.nespeca)@univpm.it
[PERSON]
1 Dept. of Civil and Building Engineering and Architecture, Polytechnic University of Marche, Ancona, Italy - (r.quattrini, r.nespeca)@univpm.it
[PERSON]
2 Dept. of Science of Antiquities, Sapienza University of Rome, Rome, Italy
[PERSON]
3 Dept. of Management Science and Technology, University of Patras, Patras, Greece - (kgiotopo.igian)@upatras.gr
[PERSON]
3 Dept. of Management Science and Technology, University of Patras, Patras, Greece - (kgiotopo.igian)@upatras.gr
###### Abstract
Small and medium-sized museums have been particularly impacted by the COVID-19 pandemic, as they often have limited resources and staff to manage the challenges posed by the pandemic. In order to survive the pandemic, but also to embrace the extensive use of technology in our everyday lives, museums have to adapt to this new reality. The aim of the Museum-Next project is to provide small and medium-sized museums with a new generation of specialised EU professionals working in the Cultural Heritage sector, equipped with a recognised, cross-cutting and high-level digital skillset: the Digital Curators. In the digital age, museum digital curators play a critical role in preserving, organising, and presenting museum collections online. As part of the project, our research performed a desk analysis of the state of the art on museum digital transition strategies and museum digital curator training programs already implemented at EU scale, in order to map existing good practices and tools, highlight the current situation, and identify the gaps in the field.
The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Volume XLVIII-M-2-2023
29th CIPA Symposium "Documenting, Understanding, Preserving Cultural Heritage: Humanities and Digital Technologies for Shaping the Future", 25-30 June 2023, Florence, Italy
## 1 Introduction
Cultural and Creative Sectors (CCS) have been among the most negatively affected since the start of the Covid-19 pandemic. Even before, CCS institutions already struggled to respond to the vulnerabilities of this sector; now it has become almost impossible for many of them to continue with their activities. Cultural institutions, especially museums, have indeed faced big challenges over the last years, being closed for prolonged periods or with very reduced access in order to follow social distancing measures ([PERSON] and [PERSON], 2020). This impossibility of fully experiencing the physical dimension of Cultural Heritage has therefore led to increasing efforts to transfer cultural experiences, products and services to an online format (OSSERVATORI Observatory for Digital Innovation in Cultural Heritage and Activities, 2021) (NEMO Network of European Museum Organisations, 2020). Museums have to rebuild themselves in order to face the new challenges of a post-Covid era, but they often lack the resources and concrete possibilities to update themselves and the skills of their staff.
With the return of physical museum audiences, museums might consider this to mean that digital tools are now less relevant, rather than identifying opportunities to strike a measure of equilibrium between the digital and physical going forward ([PERSON], 2021). Instead, the digital revolution has brought about a new era of museum practice, with digital curation being one of the most innovative areas. This phenomenon has led to the emergence of a new range of cultural experiences, as people interact with physical and virtual reality, through social media platforms or "phygital" museum exhibitions ([PERSON] et al., 2023).
Digital may not be the core skill that museums are run on, but the analysis of how digital skills are deployed shows digital as an established part of what museums do, and will do in the future, both museologically and as businesses or organisations ([PERSON] et al., 2018).
The work presented here is part of the ERASMUS+ project "MUSEUM-NEXT: Stimulating digitization at small and medium-sized museums through the enhancement of the Digital Curator" in the field of Vocational Education and Training. The project partnership consists of six institutions, three museums and three educational institutions, from the Mediterranean area. Specifically, the project partners are Marche Culture Foundation (IT), lead partner, National Museum of Zadar (CR), Museum of Fine Arts of Alicante (ES), Inercia Digital (ES), University of Patras (GR), and Polytechnic University of Marche (IT). The implementation plan of the project first includes the study of the state of the art regarding: a) the digital transformation strategies of museums and b) Digital Curator training programs. This first phase is intended to establish a scenario and raise awareness of potentials and weak points, to be used as a guideline for the design and development of a Pilot Case within the partner museums. At the conclusion of the project, it is expected that a Digital Operation Plan will be formulated to guide museums through the long-term transformation process. Given that the project is currently ongoing and about halfway through, the present paper takes advantage of the desk research carried out for the first Result of the project and tries to depict the current situation in terms of digital transformation strategies as well as training programs for the digital upskilling of museum professionals.
The concept of digital transformation of museums is strictly linked to the term "digitalization"; this is not a process of putting museums online but refers to the process of using digital technologies and strategies to enhance the visitor experience, engage audiences, increase accessibility to collections, and support back-office management. To clarify this concept, it is useful to distinguish two related but different processes: "digitization" and "digitalization". The first is the transition from analog to digital form, while the second involves the use of digital technologies to change a business model and provide new opportunities for revenue and value production. By contrast, digitization takes an analog process and changes it to a digital form without any different-in-kind changes to the process itself ([PERSON] glossary). This pair of terms and the differences between them were originally debated in economics ([PERSON], 2018) and in analyses of technological trends, while in the cultural heritage domain a literature is currently emerging that examines a variety of samples in order to identify different approaches to digitization processes. Moreover, research provided new insights in order to understand how held or accessible digital skills drive decision-making processes regarding the deployment of technologies ([PERSON] et al., 2019) ([PERSON] and [PERSON], 2021). Such digital transformation involves the integration of technology into all aspects of museum operations, including curation, exhibition design, visitor engagement, and marketing ([PERSON] et al., 2021).
A recent report highlights some interesting aspects and provides some useful definitions: digital transformation is more than just problem solving. It is a profound change to an organisation's framework and permits it to survive and thrive in the internet era. In order to define strategies, a fundamental ability is to analyse and understand the level of digital maturity of GLAM institutions. Digital maturity is defined as an individual's or an organisation's ability to use, manage, create and understand digital, in a way that is contextual, holistic and purposeful ([PERSON] et al., 2020). Self-assessment has thus become an essential methodology for examining, through the analysis of internal processes, the state of digital maturity of an organisation and its ability to implement technological and organisational innovations within the adopted management model. As an example, the inDICEs project developed the self-assessment tool "Enumerate" for institutions that would like to better understand their digital transformation, digitisation, organisational capacity, user engagement, audience development and collections reuse. This assessment should thus lead to the choice of digital tools to tackle the challenges of open access, social media presence, widening outreach, reaching diverse audiences and stimulating user participation ([PERSON] et al., 2022).
In fact, maturing digital institutions should be focused on integrating digital technologies, such as social, mobile, analytics and cloud, in the service of transforming how their businesses work. In other words, strategy, more than technology itself, drives digital transformation ([PERSON] et al., 2015). In the light of that, it is proven that one of the main drivers of digital transformation in museums is the increasing demand for online and on-site digital experiences. As more people rely on digital technologies for information and entertainment, museums are recognizing the need to adapt to this changing landscape to remain relevant and accessible to audiences ([PERSON], [PERSON], 2022).
As mentioned, one goal of the MUSEUM-NEXT project is therefore to contribute to the training of a new generation of professionals, the Digital Curators ([PERSON] and [PERSON], 2020). This aim aligns perfectly with the declaration that the year 2023 will be the European Year of Skills, during which the EU Commission will promote upskilling and reskilling opportunities.
The Digital Curator is essential for the future of museums in the age of Digital Cultural Heritage ([PERSON], [PERSON], and [PERSON], 2019). The Digital Curator can apply the most modern digital tools and resources to a variety of aspects essential for museum management, such as: facility management, cataloguing, management and updating of archives, online dissemination and promotion, creation of new formats and visitor experiences, and user support in both physical and digital spaces, among many others ([PERSON], 2009). The role of a digital curator is to manage and curate digital content for online platforms and social media ([PERSON], 2006). Digital curators are responsible for managing and organising digital content, such as images, videos, and other digital media, in a way that is accessible and engaging for online and onsite audiences ([PERSON], 2013). In addition, digital curators are often tasked with developing digital strategies to engage audiences, such as creating online exhibitions, developing interactive experiences, and utilising social media to promote museum collections and events ([PERSON] et al., 2023). They may also work with other museum staff to develop digital preservation policies and ensure that digital content is accessible to audiences ([PERSON], 2017). Overall, the role of a digital curator is crucial in helping museums adapt to the digital age and connect with their audiences, not only online. In ([PERSON], 2020), the tasks of the Digital Curator are:
* interacting with the digital in the museum space;
* curating the museum in the world wide web;
* curating new media art and new media curating art.
Nevertheless, to date there is no single, unanimously adopted reference framework for empowering learners with relevant skills, competences and expertise for this career ([PERSON] and [PERSON], 2019).
## 2 Methodology: Towards a state of the art on museums digitalization
For this purpose, a survey and desk analysis of the state of the art on museum digital transition strategies and museum digital curator training programs already implemented at EU scale have been carried out. In this way, the starting level from which to plan an increase in digital assets (strategies, specialised personnel, tools) is clarified.
### Museum digital transition strategies
This section aims to offer a comprehensive overview of the best-known museum digital transition strategies already implemented at European scale. The mapping exercise is part of the R1.1a action of the MUSEUM-NEXT project. The criteria adopted aim to gather the most publicised cases where museums offer innovative visiting experiences leveraging digital strategies. The research focused on the "big museums" that in the last decade, especially after the COVID-19 pandemic, made significant investments in collections' digitalization and dissemination using social media and virtual platforms. In most cases, these action lines succeeded in producing a significant increase in visitor streams.
A specific framework has been followed for recording the data of the museum digital transition strategies. The data collected related to the following format:
* Title/Institution: the name of the best practice (exhibition or developed app) and the institution involved.
* Location/Period: the city and country where the practice took place, together with the year.
* Identification: this section is divided into four entries:
* "Description": a brief description of the main features of the digital practice;
* "Background": a brief overview of the state of the art and previous works of the institution;
* "Objectives": the main goals behind the practice;
* "Target group": whether the practice is accessible online or on-site.
* Technical Specification: this section is divided into three entries:
* "Contents": the contents of the practice, which may include images, videos, sound-videos, and 3D models;
* "Equipment": whether the equipment is provided by the institution (owned-by-museum equipment) or not (Bring Your Own Device);
* "Technology": Immersive Virtual Reality (Head-Mounted Display - HMD), Immersive Virtual Reality (Cave Automatic Virtual Environment - CAVE), Non-immersive Virtual Reality, Mixed Reality (see-through smart glasses), Augmented Reality (Vision-based), 3D printing, Web/social media, User Guidance.
* SWOT (Case-Related): The analysis is specific for the practice.
* SWOT (Technology based): The analysis refers to the implemented technology.
* References: The links to the website or videos concerning the practice are inserted. The bibliography that we used for conducting the SWOT analysis is included as well.
The adopted approach for conducting the SWOT analysis is strictly based on peer-reviewed articles. Since some of the best practices reported in the mapping exercise do not have references in the scientific literature, we decided to provide two typologies of SWOT analysis: \"case-related\" or \"technology-based\". In every case study both entries are included. Whenever there is a lack of proper references, at least one scientific-based analysis for each case is provided.
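For illustration, the recording framework described above can be represented as a small data structure. The following Python sketch uses field names that are our own illustrative choices (they are not part of the project deliverables), populated with the EGO-TRAP case mentioned in the results:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class DigitalPracticeRecord:
    """One entry of the R1.1a mapping framework (field names are illustrative)."""
    title: str                 # name of the best practice (exhibition or app)
    institution: str
    location: str              # city, country
    year: int
    description: str
    background: str
    objectives: str
    target_group: str          # "online" or "on-site"
    contents: List[str] = field(default_factory=list)    # e.g. ["Images", "3D models"]
    equipment: str = "BYOD"    # "BYOD" or "Owned by museum"
    technology: List[str] = field(default_factory=list)  # e.g. ["Augmented Reality (Vision-based)"]
    swot_case: str = ""        # case-related SWOT analysis
    swot_technology: str = ""  # technology-based SWOT analysis
    references: List[str] = field(default_factory=list)

# Example entry (details as reported in the text; empty fields left blank)
record = DigitalPracticeRecord(
    title="EGO-TRAP", institution="Experimentarium",
    location="Copenhagen, Denmark", year=2006,
    description="Game-like installation driven by visitors' cell phones",
    background="", objectives="", target_group="on-site",
    contents=["Images"], equipment="BYOD",
    technology=["Augmented Reality (Vision-based)"])
```

A structure like this makes the later aggregations (by technology, target group, duration, equipment) straightforward to compute.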
### Museum digital curator training
This section aims to offer a comprehensive overview of the museum digital curator training programmes already implemented at European scale and online. The mapping exercise is part of the R1.1b action of the MUSEUM-NEXT project. The relevant criterion of this mapping was to gather the best-known and most widely promoted training programmes offered to museum digital curators at European scale and online. The research focused on training programs, seminars, workshops, master degrees or postgraduate programs organised and implemented (or still being implemented) during the last decade. The research took place through Google, and the keywords on which the search was based are: "digital curator", "museums", "digital cultural heritage training", and many combinations of these basic concepts.
A specific framework was followed for recording the data of the museum digital curator training programs. The data collected related to the following format:
* Identification of training program: Title, Location, Duration, Cost (if any) and Language
* Description (free text from online source)
* Background: Why was the practice started? What problems, needs or issues prompted the action?
* Objectives: What precisely did the initiative set out to do in both the short-and long-term? What were the overall and specific objectives?
* Target groups: The actors are described as well as specific target-group(s) and the direct and indirect beneficiaries of the initiative.
* Results and impact: Direct and indirect results of the practice were described as well as positive and tangible impacts.
* Sources: Images, Videos, Sites, References. Report on the various sources, relevant studies and other references used for the search and development of strategy/program.
* Certification (if applicable)
The results of this research have been collected and evaluated, and we produced the following results that picture the museum digital curator training programs. At this point, we should mention that there is no existing study collecting all museum digital curator training programs in Europe and online, so our work is innovative and important information can be gained from its results.
Figure 1: The graphical abstract of the Cultural Heritage Ecosystems and the new professional skills for Digital Curator.
## 3 Results and Discussion
The results of the survey and desk analysis on the state of the art on museum digital transition strategies and museum digital curator training programs are presented below. For the present article, the most significant cases are presented in order to provide a suitable and comprehensive overview. The complete list of best practices and training programmes, along with other useful data, can be examined in more depth by consulting the R1 document "_Guidelines for the adoption and proper use of digital technologies_", which will soon be available on the MUSEUM-NEXT official website.
### Museum digital transition strategies
According to the framework designed for the data collection, a total of fifty-eight museum digital experiences were recorded. Subdividing the collected data according to the technology developed, the digital practices can be classified into ten types, as shown in Figure 2.
As shown, the most frequently used digital tools were 'cataloguing' (25%), 'web' (such as a new website or YouTube channel) (22%), and 'virtual tour' (20%). We can hypothesise that the greater prevalence of these three digital practices is due to their ease of implementation and use. In other words, they do not require a high level of digital maturity on the part of either museums or users.
In order to simplify the reading of the data, the target groups can be grouped according to three categories, as shown in the following figure.
Most of the experiences are designed to be enjoyed online, and two of them are dedicated specifically to children. Certainly, this result is due to the increase in digitization during the pandemic period as a response to the forced closure of museums. In fact, looking at the distribution of digital practices over time, despite the effort to collect previous data, the most significant increase detected is after 2019.
Regarding the duration, most of the digital experiences collected are permanent, while five lasted less than a year and four more than a year.
Another interesting result concerns the ownership of the tool used. In the majority of cases (78 %), the digital experience is done through one's own device, a choice that is certainly more cost-effective than having the devices provided by museums and shared during the visit. This is consequent to the greater presence of online experiences among those catalogued.
When it comes to on-site experiences, one advantage is that the usability of one's own device is greater; the user does not struggle to use their own smartphone or tablet. On the other hand, this finding indicates that, for example, fully-immersive VR HMD experiences, which are not possible with one's own devices, are still not widespread. Moreover, in order to use one's own device, the app to download or the content to browse must be very light, and the museum must have a stable Internet connection.
Figure 2: Graph of research findings on digital transformation strategies of museums, according to the technologies implemented.
Figure 3: Histogram of research findings on digital transformation strategies of museums, according to the Target Group.
Figure 4: Histogram of research findings on digital transformation strategies of museums, according to the exhibition duration.
Figure 5: Graph of research findings on digital transformation strategies of museums, by type of equipment in use.
The evaluations in the SWOT analysis carried out on the specific experiences provide a very interesting critical reading. For example, the "EGO-TRAP" installation at the Experimentarium in Copenhagen is one of the first game-like installations (2006) facilitated through interaction with visitors' cell phones, which guide them through the exhibits and provide interactive tests at each one. It is an exhibition which uses mobile technologies as the technical platform for creating Augmented Reality. The SWOT analysis shows that this exhibit enables a new line of action where museums embrace the physical/digital narrative very explicitly, although mobile phones may steal all of the attention from existing interactive exhibits and prevent the visitor from interacting with them ([PERSON] and [PERSON], 2019) ([PERSON], 2007). In another case, the experience of using the Mona Lisa VR app of the Louvre Museum has not only increased the desire to see it applied to other works, but has also increased users' intention to reuse it and to recommend its use to others. At the same time, this extraordinarily successful experience showed that the role of technology in integrating with museums is difficult to measure and evaluate ([PERSON], 2019). The British Museum's I am Ashurbanipal was extraordinary in many of its display strategies, providing a blueprint for developing an attractive setting for curatorially challenging artefacts, particularly cuneiform tablets and large-scale reliefs. Using imposing set design and simple digital overlays, the curatorial team gave these relatively plain items a greater holding power, increasing their effect and the length of time that visitors spent engaging with them ([PERSON], 2019). In the Casa Batlló Museum, an augmented reality video guide allows visitors to see the rooms on the device change to how they appeared during the period in which the [PERSON] family resided in the building. All the furnishings are reproduced with extremely high coherence, as they are recreated from photos and historical material provided by the [PERSON] family. The AR technology used combines the direct and indirect typologies: in some rooms a traditional AR technology is used, while in the more restricted ones the application opts for indirect AR.
Although apps enable visitors to have an enriched experience during their visit, this case study highlights a low level of usability as regards the direct AR application, also due to the simplified contents compared to the indirect AR ([PERSON] et al., 2017).
### Museum digital curator training
Based on the research methodology, a total of thirty-three training programs for museum digital curators were recorded. These training programs came from three main categories:
* results from Erasmus+ projects;
* results from other kind of EU funded projects;
* results from seminars.
Specifically, we collected information from: Twenty-five (25) results from Erasmus+ projects; Four (4) results from other kinds of EU funded projects (Leonardo Da Vinci, Interreg, etc); Four (4) results from seminars.
According to the projects'/seminars' description, the content of training programs could be classified into the following topics: Digital marketing, Digital content & publishing; Data protection & open licences; Digital safety, security, and ethics; Digital audience & analytics; Social media; Augmented & Virtual Reality; Mobile apps & mobile user experience; Exhibitions guides, User guides, Pedagogical use of exhibitions guides; Digital Storytelling; Accessible Museums.
The most important reasons that led to the implementation of digital museum educational programs are:
* Increasing disconnection between formal education/training and the world of labour because of the emergence of new job roles and associated skill needs, due to the quickening pace of the adoption of ICT in the museum sector.
* Low emphasis in CCI education on the use of digital technologies, with recent graduates and existing employees lacking important skills.
* Among the over 20,000 museums in Europe, very few have the necessary digital skills (e.g. to create a virtual tour or an e-shop) that could sustain them through troubled periods (e.g. a health crisis) and accelerate their development in times of growth.
* The recent pandemic proved to be especially challenging for creative industries (CCI) stakeholders.
* Digital content and experiences often designed from the perspective of the museum curator, not based on the visitor's preferences.
* Many museums in Central Europe, encounter difficulties to be accessible to all due to a lack of organisational knowledge as well as due to limited financial resources, both for investments and adequate promotion (especially small- and medium-sized museums).
The target groups these training programmes are offered to are: current and prospective museum professionals (such as administrative, managerial, back office, and front desk employees, and curators); cultural organisations' staff (cultural heritage sites, tourist attractions, etc.); people with limited digital skills (especially in the CCI sector); trainers and their organisations; and stakeholders in creative industries.
The most common delivery mode for these educational programs is online. Besides this, we also recorded five (5) blended courses (consisting of a combination of face-to-face learning, online learning, and self-study), six (6) mobilities (knowledge transfer, pilot actions), and two (2) online training programs that include an internship.
Most training programs last for 3 to 8 weeks (which are mainly online training programs). Also, the \"Short Training Period\" (less than 5 days) contains programmes which are related to mobility actions. Of course, there are training programs with a training period of six or more months. These are mainly seminars which lead to the acquisition of a certification.
Another important factor is the cost of these training programs. Since most training programs are actions related to a European project, there is no cost for the participants. This, however, does not apply to seminars, which cost from $1,860 to $25,752.
Figure 6: Histogram of research findings on digital curator training, according to the classification of results
Figure 7: Histogram of research findings on digital curator training, according to the mode of study
Regarding the objectives of the training programs, we notice that there are several common elements like:
* Support museum staff in improving their digital competences, in order to become more productive in the new digital era, efficient in collaborating with other professionals and organisations inside and outside of their sector, and successful in managing emerging challenges.
* Training museum professionals in the use of ICT.
* Support museum operations and preserve cultural heritage and local development in a sustainable way.
* Creation (or adoption) of good practices opportunities which can basically influence a museum's new strategic development.
* Increasing the capacities of small and medium size museums, by making them accessible to a wider public of people.
English is the main language for all training activities (for all entries). However, at EU projects some of the Open Educational Resources have been translated in every partner's language. Most open courses/modules have been implemented in MOOC Platforms and contain certification procedures through e-assessment leading to Open Badges and Certificates for digital skills. Each language version was integrated into the universities' own MOOC platforms (in Austria in imoox.at, in Ireland for Academy, Denmark in VBN, in Lithuania in open.ktu) for DigiCulture trainings programmes. Some other options are Virtual Learning Environments (VLE) and Canvas.
There is an increased interest in participating in training programmes: 5,291 people from all over the world enrolled and 1,371 finished successfully, receiving their certificate (Mu.Sa courses); 1,587 learners were involved to access knowledge, gain new digital skills and intercultural competences, and improve their chances of finding employment or performing better in their current employment, obtaining 1,201 open badges and certificates (DigiCulture).
The seminars and training programmes in Digital Curation appear to have common modules with MAs in Museum Studies and Cultural Heritage Management (e.g. Johns Hopkins University, 2022). Many of the recorded training programmes/courses were part of EU projects identified as good practices in the field. The online training programmes/courses are not available after the completion of the EU projects of which they were part. However, a lot of OER remains accessible, such as guides for exhibitions, e-books, and videos with subtitles.
## 4 Conclusions
As shown in the present paper, the European scenario currently offers many opportunities oriented towards learning, education and training of new professional figures. Many similar projects are currently providing intense agendas through which museum personnel are able to acquire or improve digital skillsets (Charter-Alliance, 2022; NEMO, 2020). In particular, the European Commission declared 2023 the European Year of Skills, aimed at 'helping people get the right skills for quality jobs and helping companies, in particular small and medium enterprises' ([PERSON], 2023). The present research is in line with this prolific international scenario, aiming to guarantee that Europe has the necessary digital cultural heritage skills to support sustainable societies and economies. Although only the first steps of the MUSEUM-NEXT project are exposed here, the research has permitted the implementation of a sound base for the methodological and technical work packages.
The first part of the research shows a wide range of digital transition experiences. However, they must always be related to the specific local context; in other words, they are not to be considered a model to be scaled uncritically across different territories, but a predictive scenario to be consciously aimed at. Following the implementation of the research on museum digital curator training, we produced some conclusions and proposals. First of all, the educational activities appear to have a significant positive impact on end users, on mobility participants and on the institutions. Additionally, it is important to establish a curriculum framework for vocational training in digital curation so that more professional and well-structured initiatives on museum digital curation training can be offered.
In conclusion, the guidelines, which include the digital transformation strategy and the skills and competencies required of museum staff, provide a useful planning tool, even if they must be continuously updated and improved as museums adapt their activities to the new digital era. Furthermore, the results of this research are of great benefit to VET providers: they give information on what is offered in the "educational market" in this specific sector, and also reveal the lack of specific programmes that the market needs, so that VET providers can design VET qualifications and programmes, plan and execute learning processes, and offer additional services.
## Acknowledgements
The project presented here is funded under the ERASMUS + project \"MUSEUM-NEXT: Stimulating digitization at small and medium-sized museums through the enhancement of the Digital Curation\", KA220-VET - Cooperation partnerships in vocational education and training. The content of this document reflects the views only of the author, and the programme authorities are not liable for any use that may be made of the information contained therein.
## References
* American Alliance of Museums (2017) American Alliance of Museums, 2017. The digital museum: A think guide. https://www.aam-us.org/programs/online-resources/digital-museum-resources/
* [PERSON] et al. (2023) [PERSON]; [PERSON]; [PERSON]; [PERSON]; [PERSON], 2023. Interactive and Immersive Digital Representation for Virtual Museum: VR and AR for Semantic Enrichment of Museo Nazionale Romano, Antiquarium di Lucrezia Romana and Antiquarium di Villa dei Quintili. _ISPRS Int. J. Geo-Inf._, 12, 28. https://doi.org/10.3390/ijgi12002028

Figure 8: Graph of research findings on digital curator training, by training period
* [PERSON] et al. (2018) [PERSON], [PERSON], [PERSON], & [PERSON], 2018. Mapping the museum digital skills ecosystem phase one report. Leicester: University of Leicester.
* [PERSON] (2006) [PERSON], 2006. Digital curation for science, digital libraries, and individuals. _International Journal of Digital Curation_, 1(1), 3-16. https://doi.org/10.2218/ijdc.v1i1.4
* Charter-Alliance (2022) Charter-Alliance, 2022. https://charter-alliance.eu/. Accessed April 27, 2023.
* Clini and Quattrini (2020) [PERSON], [PERSON], 2020. Umanesimo digitale e bene comune? Linee guida e riflessioni per una salvezza possibile / Digital humanities and Commons: guidelines and reflections for a possible salvation. _Il Capitale Culturale. Studies on the Value of Cultural Heritage_, 151-175. https://doi.org/10.13138/2039-2362/2529
* AR, VR, multisensorial and multi-user experiences at Urbino's Ducal Palace, _Virtual and Augmented Reality in Education, Art, and Museums_. https://doi.org/10.4018/978-1-7998-1796-3.ch011
* [PERSON] (2021) [PERSON], & [PERSON], 2021. Aligning market strategies, digital technologies, and skills: Evidence from Italian museums. _Cultural Initiatives for Sustainable Development: Management, Participation and Entrepreneurship in the Cultural and Creative Sector_, 23-44.
* [PERSON] et al. (2019) [PERSON], [PERSON] and [PERSON], 2019. The MUSETECH Model: A Comprehensive Evaluation Framework for Museum Technology. ACM J. Comput. Cult. Herit. 12, 1, Article 7 (February 2019)
* [PERSON] et al. (2023) [PERSON], [PERSON], & [PERSON], 2023. The Role of Digital Curation in Science Teacher Professional Development. In [PERSON], [PERSON], [PERSON], & [PERSON] (Eds.), Theoretical and Practical Teaching Strategies for K-12 Science Education in the Digital Age (pp. 172-193). IGI Global. https://doi.org/10.4018/978-1-6684-5585-2.ch010
* [PERSON] (2021) [PERSON], 2021. Thinking Phygital: A Museological Framework of Predictive Futures. _Museum International_, 73(3-4), 156-167.
* [PERSON] and Kennedy (2020) [PERSON], & [PERSON], 2020. The digital transformation agenda and GLAMs: A quick scan report for Europeana. Culture24, July. https://pro.europeana.eu/files/Europeana_Professional/Publications/Digital, 20.
* Gartner Glossary (2023) Gartner Glossary, 2023. https://www.gartner.com/en/glossary/all-terms. Accessed April 27, 2023.
* [PERSON] and [PERSON] (2019) [PERSON], & [PERSON], 2019. _Museums and Digital Culture: New Perspectives and Research_. [PERSON]. https://doi.org/10.1007/978-3-319-97457-6
* [PERSON] et al. (2017) [PERSON], [PERSON], [PERSON], [PERSON], & [PERSON], 2017. Combining traditional and indirect augmented reality for indoor crowded environments: A case study on the Casa Batlló Museum. _Computers & Graphics_, 69, 92-103.
* [PERSON] (2018) [PERSON], 2018. Digital strategy and digital transformation. _Research-Technology Management_, 61(5), 66-71.
* [PERSON] et al. (2022) [PERSON], [PERSON], [PERSON], 2022. The ENUMERATE Self-Assessment Tool: gain insight into your institution's digital transformation. Europeana. Accessed April 28, 2023. https://pro.europeana.eu/post/the-enumerate-self-assessment-tool-gain-insight-into-your-institution-s-digital-transformation
* Johns Hopkins University (2022) Johns Hopkins University, "MA in Museum Studies -- Johns Hopkins Advanced Academic Programs". Johns Hopkins Advanced Academic Programs. Accessed December 04, 2022. https://advanced.jhu.edu/academics/graduate/ma-museum-studies/
* [PERSON] (2007) [PERSON], 2007. Brave new world: Mobile phones, museums and learning-how and why to use Augmented Reality within museums. Nordisk Museologi, (1), 3-3.
* [PERSON] et al. (2015) [PERSON], [PERSON], [PERSON], [PERSON], & [PERSON], 2015. Strategy, not technology, drives digital transformation. _MIT Sloan Management Review_.
* [PERSON] (2020) [PERSON], 2020. Museum curation in the digital age. In The future of creative work (pp. 123-139). Edward Elgar Publishing.
* Madrid (2013) Madrid, M. M., 2013. A study of digital curator competences: A survey of experts. The International Information & Library Review, 45(3-4), 149-156.
* NEMO Network of European Museum Organisations (2020) NEMO Network of European Museum Organisations, 2020. Final report: Digitisation and IPR in European Museums. https://www.ne-mo.org/fileadmin/Dateien/public/Publications/NEMO_Final_Report_Digitisation_and_IPR_in_European_Museums_WG_07.2020.pdf
* OSSERVATORI Observatory for Digital Innovation in Cultural Heritage and Activities (2021) OSSERVATORI Observatory for Digital Innovation in Cultural Heritage and Activities, 2021. Digital innovation of Italian museums in 2021 (L'innovazione digitale nei musei italiani nel 2021). https://www.osservatori.net/it/prodotti/formato/report/innovazione-digitale-musei-italiani-2021-report
* [PERSON] et al. (2021) [PERSON], [PERSON], [PERSON], 2021. Before You Visit: New Opportunities for the Digital Transformation of Museums. In: [PERSON] (eds) Culture and Computing. Interactive Cultural Heritage and Arts. HCII 2021. Lecture Notes in Computer Science, vol. 12794. [PERSON]. https://doi.org/10.1007/978-3-030-77411-0_29
* [PERSON] (2019) [PERSON], 2019. The 'Mona Lisa' Experience: How the Louvre's First-Ever VR Project, a 7-Minute Immersive da Vinci Odyssey, Works. Viewed 17 Jun 2021. https://news.artnet.com/art-world/louvre-embraced-virtual-reality-leonardo-blockbuster-1686169
* [PERSON] et al. (2019) from the ground up. University of Leicester. Journal contribution.
* [PERSON] and [PERSON] (2020) [PERSON] and [PERSON], 2020. Virtual Reality in museums: exploring the experiences of museum professionals. _Applied Sciences_, Basel, Vol. 10, Iss. 11.
* [PERSON] and Baraldi (2022) [PERSON] and [PERSON], 2022. Museums and digital technology: a literature review on organisational issues, _European Planning Studies_, 30:9, 1676-1694, DOI: 10.1080/09654313.2021.2023110
* [PERSON] (2009) [PERSON], 2009. Emerging convergence? Thoughts on museums, archives, libraries, and professional training. _Museum Management and Curatorship_, 24:4, 369-387.
* [PERSON] (2019) [PERSON], 2019. Of Oil and Antiquities, Cuneiform and Kings: A Review of the British Museum's 'I am Ashurbanipal' Exhibition (8 Nov. 2018-24 Feb. 2019). _Ancient Near Eastern Studies_, 56, 367-387.
* [PERSON] (2023) [PERSON], 2023. EU lack of labour won't be solved by skills alone: Improving job quality is key.
---

*Source: R. Nespeca, R. Quattrini, U. Ferretti, K. Giotopoulos, I. Giannoukou, 2023. Digital Transition Strategies and Training Programs for Digital Curation of Museum. ISPRS Archives. https://doi.org/10.5194/isprs-archives-xlviii-m-2-2023-1127-2023 (CC-BY).*

---
# Accuracy evaluation of ICESat-2 ATL08 in Finland
[PERSON]
1 State Key Laboratory of Information Engineering in Surveying, Mapping and Remote Sensing, Wuhan University, Wuhan, China- (wangxu0920, xinlian.liang) @whu.edu.cn
[PERSON]
1 State Key Laboratory of Information Engineering in Surveying, Mapping and Remote Sensing, Wuhan University, Wuhan, China- (wangxu0920, xinlian.liang) @whu.edu.cn
###### Abstract
ATL08 is the level-3A land and vegetation height product of ICESat-2, which records terrain elevation and surface height parameters in fixed-length 100 m segments along the ground track. It has been used widely not only in vegetation monitoring at large scale, but also to improve the accuracy of satellite stereo mapping; the accuracy of the ATL08 product is therefore a precondition of its application. Although some studies have evaluated the terrain and canopy height retrieval accuracy of ATL08, the picture is still not comprehensive, as most studies focus on accuracy in vegetated areas. In this study, the terrain and surface height retrieval performance of ATL08 was evaluated in both forested and non-forested areas based on DEM and ALS data in Finland. A total of 6,682,846 and 3,980,235 segments were used to evaluate the accuracy of terrain and surface height retrieval, respectively. The results showed, firstly, that terrain elevation retrieval from ICESat-2/ATLAS is accurate, i.e., at sub-meter level in both forested and non-forested areas, and that slope is an important factor affecting terrain retrieval accuracy, while surface height retrieval accuracy is low, e.g., with RMSEs above 5 m. Secondly, the accuracy of terrain elevation retrieval in non-forested areas was slightly higher than in forested areas as measured by MAE and RMSE. Finally, the accuracy of surface height retrieval in forested areas was higher than in non-forested areas, and it should be emphasized that there is a marked difference in RMSE% between the two. The study indicates that terrain elevation retrieval from ATL08 is relatively reliable and can be recommended when accuracy requirements are not strict, while surface heights should be used with care, avoiding low-accuracy observations.
Footnote 1: Corresponding author
## 1 Introduction
Ice, Cloud and land Elevation Satellite-2 (ICESat-2) was launched in September 2018. At present, the onboard Advanced Topographic Laser Altimeter System (ATLAS) has collected nearly 5 years of Earth Observation (EO) data at a global scale. The data collected by ATLAS are processed into different levels of products, among which the Land and Vegetation Height (ATL08) product estimates the height of terrain and canopy in fixed-length 100 m segments along the ground track ([PERSON] et al., 2021). The ATL08 product can be used to estimate the state of vegetation at regional or national scales, independently or fused with other remote sensing data ([PERSON] and [PERSON], 2022; [PERSON] et al., 2022; [PERSON] et al., 2023; [PERSON] et al., 2021; [PERSON] et al., 2019; [PERSON] et al., 2021; [PERSON] and [PERSON], 2022), or to improve the accuracy of satellite stereo mapping with no ground elevation control points ([PERSON] et al., 2021; [PERSON] et al., 2022; [PERSON] et al., 2022).
The accuracy of ATL08 data is the foundation of its application. Therefore, a few previous studies have evaluated the accuracy of terrain and canopy heights and analysed their impact factors. ([PERSON] et al., 2020) quantified the accuracy of terrain and canopy heights collected by ICESat-2/ATLAS in southern Finland. ([PERSON] et al., 2021) validated the accuracy of terrain and canopy retrievals for the Global Ecosystem Dynamics Investigation (GEDI) and the ICESat-2 ATL08 product at 40 sites distributed across the U.S. ([PERSON] and [PERSON], 2021) evaluated ATL08 terrain and canopy height agreement with reference data in the U.S. across 12 sites covering 6 major biomes. These studies reported the capabilities and limitations of the ATL08 product in forested areas at large scale and in different kinds of ecosystems.
The ATL08 product has also been demonstrated to be usable as a height reference for global digital elevation models ([PERSON] et al., 2022). The accuracy of terrain retrieval was evaluated separately in some studies. ([PERSON] et al., 2019) evaluated terrain retrieval accuracy from ATL03 data in the U.S., and the factors affecting terrain retrieval accuracy, including signal-to-noise ratio (SNR), slope, vegetation height and vegetation cover, were analysed under different kinds of land cover. ([PERSON] et al., 2022) evaluated the accuracy of terrain retrievals derived from the ATL08 product in Spain using 24 months of observations. ([PERSON] et al., 2022) pointed out that ICESat-2 data have the potential to be used in urban change monitoring and three-dimensional morphology. These studies validated the terrain retrieval accuracy derived from the ATL08 product and provided recommendations for the use and application of ICESat-2 data.
Although the above studies evaluated the accuracy of terrain and canopy heights derived from ATL08 data in different regions, the performance of the ATLAS sensor has not been adequately evaluated; for example, the accuracy of terrain and surface heights in non-forested areas has rarely been considered.
This study evaluated the accuracy of the terrain and surface height from ATL08 data in the forest and non-forest covered areas in Finland, in order to reveal the reliability of the satellite Lidar observations and its different properties over different land-cover conditions.
In this study, the term "surface" is used instead of "canopy" to describe the relative height above the ground under all types of land cover and land use, although in the ATL08 product this relative height has conventionally been labelled canopy regardless of the land cover type.
## 2 Material and Methodology
### Study area
In this study, ATL08 data in Finland were used to evaluate the accuracy of terrain and surface height. Finland was selected as the study area for three reasons. Firstly, ICESat-2 has relatively dense footprint coverage in Finland in comparison with low-latitude regions, since ICESat-2 is in a near-polar orbit and Finland is located at high latitude. Secondly, Finland has good reference data for the evaluation of satellite Lidar data: airborne laser scanning (ALS) data and digital elevation model (DEM) data cover the whole country. Finally, a previous study ([PERSON] et al., 2020) was conducted in southern Finland, which allows the results of our study to be compared and the conclusions verified.
### ATL08 product and pre-processing
The ICESat-2 ATL08 product (Version 4) records terrain elevation, surface (canopy) height and other parameters such as ancillary data, satellite orbit and quality assessment, etc.
ATL08 data requires a pre-processing step for the accuracy evaluation, including the selection of cloud-free observations, coordinate reference and elevation datum transformation, and segment boundary calculation.
Firstly, the same terrain and surface (canopy) height indicators derived from ATL08, i.e., _h_te_median_ and _h_canopy_, were selected for comparison with a previous study ([PERSON] et al., 2020). Within a segment, _h_te_median_ is the median height of the terrain photons, and _h_canopy_ is the 98th-percentile height of the surface (canopy) photons. Due to the influence of observation conditions, possible invalid values had to be eliminated. The _layer_flag_ in the ATL08 product, which indicates the presence of clouds or blowing snow, was used to filter out cloudy observations.
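As a minimal sketch of this filtering step (plain Python over per-segment lists already extracted from the granules; the assumption that a `layer_flag` value of 0 denotes a cloud-free segment is ours, so the flag semantics should be checked against the ATL08 documentation):

```python
def drop_cloudy_segments(h_te_median, h_canopy, layer_flag):
    """Keep only segments whose layer_flag marks a cloud-free observation.

    Assumes layer_flag == 0 means no clouds or blowing snow were detected
    (an illustrative convention); any other value drops the segment and
    both height indicators together, so the lists stay aligned.
    """
    kept_terrain, kept_surface = [], []
    for h_te, h_c, flag in zip(h_te_median, h_canopy, layer_flag):
        if flag == 0:
            kept_terrain.append(h_te)
            kept_surface.append(h_c)
    return kept_terrain, kept_surface
```

Filtering both indicators in one pass keeps the terrain and surface series segment-aligned for the later per-segment comparisons.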
Secondly, since the datums of the ATL08 product and the reference data differed, the ATL08 data were transformed from WGS84 to the EUREF-FIN system based on the Nordic Geodetic Commission 2020 (NKG2020) transformation. The officially recommended FIN2005N00 geoid model ([PERSON], 2010) was used to transform ellipsoidal EUREF-FIN heights to N2000.
Finally, although the segment width varies between studies, for comparison with previous work ([PERSON] et al., 2020) this study used an 11 m segment width. The current and next segments on the same ground track were used to calculate the slope of a segment. Based on the size, slope and position of the segment, the coordinates of the segment boundary could be calculated.
After the pre-processing described above, the ATL08 data were divided into forest and non-forest cover classes in order to evaluate the accuracy of the ATL08 product in forested and non-forested areas. The _segment_landcover_ parameter, a flag indicating the land cover of a segment in the ATL08 product, was used to classify the segments into the two classes. The value of _segment_landcover_ ranges from 0 to 16, denoting 17 kinds of land cover. In this study, the forest classes indicated by _segment_landcover_ (evergreen needleleaf forest, evergreen broadleaf forest, deciduous needleleaf forest, deciduous broadleaf forest and woody savannas) were considered the forest cover class, and all other land cover classes were considered the non-forest cover class.
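This classification step can be sketched as below; the integer codes in `FOREST_CODES` are placeholders for illustration only, since the mapping from `segment_landcover` values 0-16 to class names must be taken from the ATL08 documentation:

```python
# Hypothetical forest-class codes; the real values depend on the
# land-cover legend behind the segment_landcover flag (0-16).
FOREST_CODES = {0, 1, 2, 3, 4}

def split_by_landcover(segment_landcover):
    """Return (forest_indices, nonforest_indices) for a list of
    segment_landcover codes, one code per segment."""
    forest, nonforest = [], []
    for i, code in enumerate(segment_landcover):
        (forest if code in FOREST_CODES else nonforest).append(i)
    return forest, nonforest
```

Returning indices rather than values lets the same split be applied to every per-segment array (heights, errors, slopes) without re-deriving the masks.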
### Reference dataset and pre-processing
DEM data and ALS data distributed by National Land Survey (NLS 2018) in Finland were selected as reference data to evaluate the accuracy of terrain and surface height derived from ATL08, respectively. DEM product from NLS with 2m spatial resolution was used to evaluate the accuracy of terrain retrieval from ICESat-2 data, since ALS data in some regions were missing.
Outliers in the ALS data were first removed based on a point-distribution-analysis method ([PERSON] et al., 2011) along the elevation distribution. A digital surface model (DSM) and a DEM were then generated from the ALS data. The normalized digital surface model (nDSM), generated by subtracting the DEM from the DSM, was used to evaluate the accuracy of surface height retrieval at 2 m spatial resolution.
The reference data were sampled with the same method as the ATL08 product, in order to compare them with the terrain and surface heights derived from ATL08. For terrain height, the median elevation extracted from the DEM within a segment was regarded as the reference value for the ATL08 median terrain elevation (_h_te_median_). For surface height, the 98th-percentile height extracted from the nDSM within a segment was regarded as the reference value for the ATL08 surface height (_h_canopy_).
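Assuming the DEM and nDSM cells falling inside one segment have already been extracted as flat lists, the per-segment reference values described above can be computed as follows (the nearest-rank percentile is one possible convention; the exact percentile method is not specified in the text):

```python
import math
import statistics

def percentile_nearest_rank(values, p):
    """Nearest-rank p-th percentile of a non-empty list of numbers."""
    ordered = sorted(values)
    k = max(0, math.ceil(p / 100.0 * len(ordered)) - 1)
    return ordered[k]

def segment_reference(dem_cells, ndsm_cells):
    """Reference terrain (median DEM elevation) and surface height
    (98th-percentile nDSM height) for one segment, mirroring the
    definitions of h_te_median and h_canopy."""
    ref_terrain = statistics.median(dem_cells)
    ref_surface = percentile_nearest_rank(ndsm_cells, 98)
    return ref_terrain, ref_surface
```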
### Method of accuracy evaluation
The errors of the terrain and surface height defined in this study was the observation value minus the reference value, as shown in (1) and (2).
\[e_{terrain}=h\_te\_median_{ATL08}-h\_te\_median_{DEM} \tag{1}\] \[e_{surface}=h\_canopy_{ATL08}-h\_canopy_{nDSM} \tag{2}\]
Three indicators including bias, mean absolute error (MAE), root mean square error (RMSE) were selected in this study to evaluate the accuracy of terrain and surface height. For the evaluation of the surface height accuracy, the relative RMSE (RMSE%) was also selected. The formulas of accuracy indicators were shown in (3) to (6).
\[bias=\frac{1}{n}\sum_{i=1}^{n}e_{i} \tag{3}\] \[MAE=\frac{1}{n}\sum_{i=1}^{n}\left|e_{i}\right| \tag{4}\] \[RMSE=\sqrt{\frac{1}{n}\sum_{i=1}^{n}e_{i}^{2}} \tag{5}\] \[RMSE\%=\frac{RMSE}{\bar{h}_{ALS}}\times 100\% \tag{6}\]
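The indicators in Eqs. (3)-(6) translate directly into code; a small self-contained version (the dictionary keys and the optional mean-reference-height argument are our own naming):

```python
import math

def accuracy_indicators(observed, reference, mean_ref_height=None):
    """bias, MAE and RMSE of observed-minus-reference errors;
    RMSE% is added when a mean reference height is supplied."""
    errors = [o - r for o, r in zip(observed, reference)]
    n = len(errors)
    bias = sum(errors) / n
    mae = sum(abs(e) for e in errors) / n
    rmse = math.sqrt(sum(e * e for e in errors) / n)
    result = {"bias": bias, "MAE": mae, "RMSE": rmse}
    if mean_ref_height is not None:
        # Eq. (6): RMSE as a percentage of the mean reference height.
        result["RMSE%"] = 100.0 * rmse / mean_ref_height
    return result
```

Run once per stratum (forest, non-forest, whole area) on the corresponding subset of segments to reproduce the layout of Tables 1 and 2.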
## 3 Result
### Terrain accuracy
Gross errors had to be eliminated due to the time difference between the ATL08 and reference data collections and some unknown factors. In this study, observations with errors outside three times the standard deviation were excluded. Finally, the range of terrain errors was from -3.91 m to 3.78 m, and 6,682,846 segments were selected to evaluate the accuracy of ATL08 terrain retrieval, of which 81.5% and 18.5% were from the forest and non-forest classes, respectively. The error distribution is shown in Figure 1.
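The three-sigma screening used here can be sketched as a single pass (whether the original work iterated the rejection is not stated, so this version applies the rule once):

```python
import math

def reject_gross_errors(errors, k=3.0):
    """Drop observations whose error lies more than k standard
    deviations from the mean error (k = 3 gives the 3-sigma rule)."""
    n = len(errors)
    mean = sum(errors) / n
    sigma = math.sqrt(sum((e - mean) ** 2 for e in errors) / n)
    return [e for e in errors if abs(e - mean) <= k * sigma]
```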
To further evaluate the accuracy of terrain retrieval in the forested area, the non-forested area and the whole study area, three accuracy indicators (bias, MAE and RMSE) were calculated and are listed in Table 1. The bias, MAE and RMSE of the terrain height over the whole study area were 0.00 m, 0.46 m and 0.72 m, respectively.
In the forested area, the bias, MAE and RMSE of terrain retrieval were -0.02 m, 0.47 m and 0.73 m, respectively, with about 5.45 million observations. Table 1 also lists the results from the reference study ([PERSON] et al., 2020), as it had a similar study area and forested-area definition. It should be noted that the sign convention of the errors in the reference study is opposite to that in this study: there, errors were defined as reference value minus observation value, while this study follows the surveying convention of observation value minus reference value. This needs to be kept in mind especially when comparing biases between the two evaluations. In the non-forested area, the bias, MAE and RMSE of terrain retrieval were 0.07 m, 0.44 m and 0.67 m, respectively, with about 1.24 million observations.
### Surface accuracy
A total of 3,980,235 segments were selected to evaluate the accuracy of ATL08 surface height retrieval. The distribution of the surface retrieval errors is shown in Figure 2. For the evaluation of surface height, RMSE% was added as an accuracy indicator, since it expresses the deviation between the observed and true values as a percentage of the true surface height.
The number of segments in the surface accuracy evaluation was notably smaller than in the terrain evaluation, because the number of valid values differs between the surface height and terrain elevation retrievals.
Four accuracy indicators (bias, MAE, RMSE and RMSE%) were calculated to further evaluate the accuracy of surface height retrieval in the forested area, the non-forested area and the whole area, as listed in Table 2.
The bias, MAE, RMSE and RMSE% of the surface height over the whole study area were -0.36 m, 3.51 m, 5.12 m and 36.82%, respectively, and the observation number was about 3.98 million. In forested area, the bias, MAE, RMSE and RMSE% were -0.53 m, 3.46 m, 5.07 m and 34.91%, respectively, and the observation number was about 3.59 million. In the non-forested area, the bias, MAE, RMSE and RMSE% were 1.21 m, 3.98 m, 5.57 m and 67.47%, respectively, and the observation number was about 0.39 million.
## 4 Discussion
### Terrain accuracy
The accuracy of terrain retrieval in the forested area in this study was compared with a reference study carried out in southern Finland ([PERSON] et al., 2020). Although the number of observations in this study was significantly larger (about six times that of the reference study) and the duration of data collection three times as long, the accuracy of the terrain height in forested areas was similar to the reference study: the differences in bias and MAE were 9 cm and 6 cm, respectively, and the RMSE of the forest class was the same in both studies, i.e., 0.73 m.
| Indicators | Total | Forest | Non-Forest | Reference |
|---|---|---|---|---|
| Bias (m) | 0.00 | -0.02 | 0.07 | -0.07 |
| MAE (m) | 0.46 | 0.47 | 0.44 | 0.53 |
| RMSE (m) | 0.72 | 0.73 | 0.67 | 0.73 |
| Observations (million) | 6.68 | 5.45 | 1.24 | 0.91 |

Table 1: Accuracy of ATL08 terrain height
Figure 1: The distribution of the terrain errors stratified by forested and non-forested area.
| Indicators | Total | Forest | Non-Forest |
|---|---|---|---|
| Bias (m) | -0.36 | -0.53 | 1.21 |
| MAE (m) | 3.51 | 3.46 | 3.98 |
| RMSE (m) | 5.12 | 5.07 | 5.57 |
| RMSE% | 36.82 | 34.91 | 67.47 |
| Observations (million) | 3.98 | 3.59 | 0.39 |

Table 2: Accuracy of ATL08 surface height

Since forest covers more than 75% of the land area of Finland (Finland, 2022), the number of observations in the forested area was about four times that in the non-forested area in the terrain accuracy evaluation. Although terrain retrieval in the ATL08 product was accurate overall, the accuracy in the non-forested area was slightly higher than in the forested area. The likely reason is that the forest cover obscures the ground, reducing the accuracy of terrain retrieval in forested areas.
Slope is another important factor for terrain retrieval accuracy. To reveal the impact of slope on terrain retrieval, observations were divided into two levels based on the slope from the ATL08 product: segments with a slope less than 15\({}^{\circ}\) were considered gentle terrain, and segments with a slope greater than 15\({}^{\circ}\) were considered steep terrain.
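The stratification can be expressed as a one-pass split (the 15° threshold follows the text; segments exactly at the threshold are assigned to the steep class here, a choice the paper does not specify):

```python
def stratify_by_slope(errors, slopes_deg, threshold_deg=15.0):
    """Split per-segment errors into gentle (< threshold) and steep
    (>= threshold) terrain, mirroring the two slope levels above."""
    gentle, steep = [], []
    for e, s in zip(errors, slopes_deg):
        (gentle if s < threshold_deg else steep).append(e)
    return gentle, steep
```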
The three accuracy indicators (bias, MAE and RMSE) were calculated for the two slope levels separately over the whole study area, as shown in Figure 3. The results show that the accuracy of terrain retrieval on gentle ground was clearly higher than on steep ground. Due to the dominant number of observations on gentle ground, the evaluation result for the whole study area was similar to that for flat areas.
### Surface accuracy
The accuracy of surface height retrieval was clearly lower than that of terrain retrieval. As can be seen in the error distributions in Figure 1 and Figure 2, the terrain retrieval error was roughly within the range of -4 m to 4 m, while the surface height retrieval error was roughly within -25 m to 25 m. In addition, the error distribution of terrain retrieval was more concentrated than that of surface height retrieval.
The accuracy in the forested area was higher than in the non-forested area. Surface heights in the forested and non-forested areas were underestimated and overestimated, respectively. The canopy was underestimated by 0.53 m in the forested area, which may be caused by the topmost canopy photons being missed due to the ATLAS sampling method ([PERSON], 2016), while the surface height was overestimated by 1.21 m in the non-forested area; the reasons for this phenomenon require further exploration.
Another point worth discussing is the RMSE and RMSE% of surface height retrieval in the forested versus the non-forested area. Although the differences in RMSE between the whole, forested and non-forested areas were not large, the RMSE% in the non-forested area exceeded that in the forested area by approximately 33 percentage points. This indicates that the average surface height in the non-forested area is much lower than in the forested area.
Due to the high forest coverage in Finland, the number of observations in the forested area was about nine times that in the non-forested area. Although the difference in RMSE% between the forested and non-forested areas was large, the RMSE% of the whole study area was similar to that of the forested area, as the dominant share of observations came from forested areas.
## 5 Conclusion
This work evaluated the performance of the ATL08 product in terrain and surface height retrieval under different land cover and land use in Finland. 33 months of ATL08 data were used to evaluate the accuracy of terrain and surface height retrieval, stratified by forested and non-forested areas. The evaluation results of a reference study were also compared with those calculated in this study.
The results of this work indicate that terrain height retrieval from the ICESat-2 lidar is accurate, i.e., at the sub-meter level, in both forested and non-forested areas. This suggests that terrain retrieval from ATL08 is relatively reliable and can be recommended when the accuracy requirements are not strict. Although terrain retrieval from ATL08 is accurate overall, the accuracy in the non-forested area was slightly higher than that in the forested area. This may be caused by forest cover reducing the accuracy of terrain retrieval in forested areas. This study also showed that slope is an important factor affecting the accuracy of terrain retrieval: the terrain retrieval error was significantly higher in areas with a slope greater than 15\({}^{\circ}\) than in areas with a slope less than 15\({}^{\circ}\).
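As a sketch of the practical recommendation implied by the slope analysis, the following hypothetical filter keeps only observations on gentle terrain; the record layout and the field name `slope_deg` are assumptions for illustration, not actual ATL08 variables:

```python
def select_reliable(observations, max_slope_deg=15.0):
    """Keep terrain observations on slopes below the threshold, where
    the evaluation above reports markedly lower retrieval error.

    Each observation is assumed to be a dict with a 'slope_deg' field
    (illustrative name, not an ATL08 attribute).
    """
    return [o for o in observations if o["slope_deg"] < max_slope_deg]
```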
Figure 2: The distribution of surface error stratified by forested and non-forested area.
In addition to evaluating the accuracy of the terrain height measurements, this work also quantitatively evaluated the accuracy of surface height retrieval and revealed that the surface height measurements are less accurate than the terrain measurements: the RMSE of the surface height in the forested, non-forested, and whole study areas was similar, at 5.07 m, 5.57 m, and 5.12 m, respectively.
For terrain retrieval, observations collected in flat areas are recommended because of their high accuracy. For surface height retrieval, it is worth noting that the accuracy was strongly affected by the observation conditions; low-accuracy observations should be avoided when using surface heights retrieved from ATL08.
## Acknowledgements
This study was substantially supported by the Project of Background Resources Survey in Shennongjia National Park (SNINP2023015); the Open Project Fund of the Hubei Provincial Key Laboratory for Conservation Biology of Shennongjia Snub-nosed Monkeys (SNIGKL2023015); and Wuhan University (grant WHUZZJJ202220).
## References
* [PERSON] (2010) [PERSON], 2010: Development of the Finnish Height Conversion Surface FIN2005N00. _Nordic Journal of Surveying and Real Estate Research_, 2010, 76-88.
* Finland (2022) Finland, N.R.I.o., 2022: Finnish Forest Statistics. In: Natural Resources Institute of Finland.
* [PERSON] et al. (2021) [PERSON], [PERSON], [PERSON], H., [PERSON], S., [PERSON], X., [PERSON], H., [PERSON], X., 2021: A method of extracting high-accuracy elevation control points from ICESat-2 altimetry data. _Photogrammetric Engineering & Remote Sensing_, 87, 821-830.
* [PERSON] et al. (2011) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], 2011: Automatic stem mapping using single-scan terrestrial laser scanning. _IEEE Transactions on Geoscience and Remote Sensing_, 50, 661-670.
* [PERSON] et al. (2021) [PERSON], [PERSON], [PERSON], X., [PERSON], [PERSON], 2021: Performance evaluation of GEDI and ICESat-2 laser altimeter data for terrain and canopy height retrievals. _Remote Sensing of Environment_, 264.
* [PERSON] and [PERSON] (2022) [PERSON], [PERSON], [PERSON], [PERSON], 2022: Estimation of biomass burning emissions by integrating ICESat-2, Landsat 8, and Sentinel-1 data. _Remote Sensing of Environment_, 280, 113172.
* [PERSON] et al. (2022) [PERSON], [PERSON], [PERSON], Y., [PERSON], T., [PERSON], Q., [PERSON], B., [PERSON], Y., [PERSON], H., [PERSON], Z., [PERSON], J., [PERSON], Q., 2022: Neural network guided interpolation for mapping canopy height of China's forests by integrating GEDI and ICESat-2 data. _Remote Sensing of Environment_, 269, 112844.
* [PERSON] et al. (2023) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], B., [PERSON], [PERSON], 2023: Mapping the Forest Height by Fusion of ICESat-2 and Multi-Source Remote Sensing Imagery and Topographic Information: A Case Study in Jiangxi Province, China. _Forests_, 14, 454.
* [PERSON] and [PERSON] (2021) [PERSON], [PERSON], [PERSON], 2021: Assessing the agreement of ICESat-2 terrain and canopy height with airborne lidar over US ecozones. _Remote Sensing of Environment_, 266.
* [PERSON] et al. (2021) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], 2021: Mapping forest height and aboveground biomass by integrating ICESat-2, Sentinel-1 and Sentinel-2 data using Random Forest algorithm in northwest Himalayan foothills of India. _Geophysical Research Letters_, 48, e2021GL093799.
* [PERSON] et al. (2019) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], 2019: Estimating aboveground biomass and forest canopy cover with simulated ICESat-2 data. _Remote Sensing of Environment_, 224, 1-11.
* [PERSON] et al. (2020) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], 2020: Validation of ICESat-2 terrain and canopy heights in boreal forests. _Remote Sensing of Environment_, 251.
* [PERSON] and [PERSON] (2016) [PERSON], [PERSON], [PERSON], [PERSON], 2016: The Potential Impact of Vertical Sampling Uncertainty on ICESat-2/ATLAS Terrain and Canopy Height Retrievals for Multiple Ecosystems. _Remote Sensing_, 8.
* [PERSON] et al. (2021) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], 2021: ATLAS/ICESat-2 L3A Land and Vegetation Height, Version 4, In. Boulder, Colorado USA.: NASA National Snow and Ice Data Center Distributed Active Archive Center.
* National Land Survey (2018) National Land Survey, 2018: Open data file download service. https://www.maanmittauslaitos.fi/en/e-services/open-data-file-download-service.
* [PERSON] et al. (2022) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], 2022: The ATL08 as a height reference for the global digital elevation models. _Geo-spatial Information Science_, 1-20.
* [PERSON] et al. (2022) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], 2022: Extraction Strategy for ICESat-2 Elevation Control Points Based on ATL08 Product. _IEEE Transactions on Geoscience and Remote Sensing_, 60, 1-12.
* [PERSON] et al. (2021) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], 2021: Fusing simulated GEDI, ICESat-2 and NISAR data for regional aboveground biomass mapping. _Remote Sensing of Environment_, 253.
Figure 3: Terrain retrieval accuracy stratified by slope
* [PERSON] et al. (2019) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], 2019: Ground elevation accuracy verification of ICESat-2 data: a case study in Alaska, USA. _Optics Express_, 27, 38168-38179.
* [PERSON] and [PERSON] (2022) [PERSON], [PERSON], [PERSON], [PERSON], 2022: Mapping Forest Canopy Height at Large Scales using ICESat-2 and Landsat: An Ecological Zoning Random Forest Approach. _IEEE Transactions on Geoscience and Remote Sensing_.
* [PERSON] et al. (2022) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], L., [PERSON], J., [PERSON], [PERSON], 2022: Evaluation of ICESat-2 ATL03/08 Surface Heights in Urban Environments Using Airborne LiDAR Point Cloud Data. _IEEE Geoscience and Remote Sensing Letters_, 19.
* [PERSON] et al. (2022) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], 2022: Accuracy assessment of ICESat-2 ATL08 terrain estimates: A case study in Spain. _Journal of Central South University_, 29, 226-238.
isprs | ACCURACY EVALUATION OF ICESAT-2 ATL08 IN FINLAND | X. Wang, X. Liang | https://doi.org/10.5194/isprs-archives-xlviii-1-w2-2023-1817-2023 | 2023 | CC-BY | isprs/3f5a7d99_cfae_4180_8731_5c4d6294a7af.md
|
# Terrestrial Laser Scanning and Non Parametric Methods in Masonry Arches Inspection
[PERSON] *
[PERSON]
Dep. Natural Resources Engineering and Environmental Engineering, University of Vigo, Campus Universitario As Lagoa -Marcosende s/n 36200 Vigo Spain - (parias, belenriveiro, julia, merchioslla)@uvigo.es
###### Abstract
Historical bridges are not only elements of our cultural heritage but also civil engineering structures. They are usually investigated with destructive techniques, but their geometry is increasingly being used to perform structural analysis, which makes it possible to diagnose their state of conservation. Laser scanners collect a great amount of data that allows building accurate 3D models, which can then be used for dimensional and structural analysis of these civil structures.
This paper presents a geometrical research carried out in the Roman Bridge of Segura (Spain). A 3D model of the bridge was built by means of a terrestrial laser scanner, and then its geometry was analyzed by two different methods. Firstly, by means of a direct way, a graphical analysis in CAD systems was performed and the main geometrical parameters were obtained and evaluated; secondly, using statistical nonparametric methods, developed for this kind of structures, it was possible to identify pathologies on the structure thanks to the measurement of deformations in vaults by means of a symmetrical study. The results of both methods are presented in this work, and then they are compared and discussed.
Keywords: Arch Bridges, Laser Scanning, Mesh Processing, Non Parametric Methods.
## 1 Introduction
Historical bridges are consolidated as key elements for facilitating population movements and the economic and cultural development of countries. This has been true since the Roman period, with the construction of Roman roads in which bridges played an important role as connecting elements. They are the artefacts designed by engineering science to overcome the obstacles existing in nature, with an important role as communication and transportation infrastructures. The constructive typology in early times was the masonry arch bridge, which became consolidated as the heritage legacy of the origin of the engineering discipline, with special value in the ancient world and in particular in Europe and Spain.
Historical bridges normally have vulnerabilities that require special attention. The presence of heavy traffic, direct exposure to floods, seismic movements, and possible defects in the construction of the bridge mean that civil engineering must pay attention to this kind of infrastructure. Thanks to the involvement of several world organizations in the preservation and conservation of cultural and historical heritage (UNESCO; ICOMOS, 2001), non-destructive methodologies have been promoted for the documentation of historical monuments and for the evaluation of their state of conservation.
Masonry bridges are civil engineering constructions that usually have a complex geometry. This complexity makes the measurement devices traditionally used in heritage documentation unfeasible to apply. Building peculiarities, location, structural behavior, etc., are factors that favor the employment of new image-based techniques, which also allow the documentation of this kind of construction without direct contact.
Unfortunately, many technicians currently involved in heritage conservation still document monuments in a rather traditional way. However, in recent years close-range photogrammetry and laser scanning techniques have been applied to bridge inspection works, as well as to other architectural and archaeological tasks. Some examples can be found in [PERSON] et al. (2007); [PERSON] et al. (2005) and [PERSON] et al. (2008).
The collection of 3D coordinates of millions of points over an object's surface in a few minutes represents a powerful tool for surveying civil engineering structures, where geometric precision and photorealistic detail are also essential. In recent years this technology has been proven on engineering structures, where measurement and monitoring of deformations are usually required ([PERSON] and [PERSON], 2008; [PERSON] et al., 2008, and [PERSON] and [PERSON], 2008). All the results reinforce the validity of laser scanning technology as a useful tool for civil engineering structures. Furthermore, the "time of flight" (TOF) laser scanner offers the possibility of obtaining the radiometric information of the point clouds, making them optimal for surveying heritage elements. In addition, the integration of 3D models created from laser data with Ground Penetrating Radar offers interesting tools for bridge analysis ([PERSON] et al., 2009).
Definitely, it is a fact that historical bridges should be investigated by means of reliable methods, taking into account that in arch bridges the structural stability is, in essence, a function of their geometrical shape ([PERSON], 2006). Consequently, changes in the geometry with respect to the original design can compromise their stability, such that an excessive deformation inevitably results in collapse.
This article presents a geometrical investigation carried out on the Roman Bridge of Segura (Spain). A 3D model of the bridge was built by means of a terrestrial laser scanner, and its geometry was then analyzed by two methods. Firstly, in a direct way, a graphical analysis in CAD systems was performed and the main geometrical parameters were obtained and evaluated; secondly, using statistical nonparametric methods developed for this kind of structure, it was possible to identify pathologies in the structure thanks to the measurement of deformations in the vaults by means of a symmetry study.
## 2 The Segura Roman Bridge.
The survey presented in this paper was performed in the Roman Bridge of Segura, which is located in the frontier between Spain and Portugal, on the West of the Iberian Peninsula.
It crosses the river Eljas, a frontier river which is an affluent of the Tajo River, the longest Spanish watercourse. Over this bridge, a national road communicates the councils Piedras Albas (Province of Caceres, Spain) and Segura (Province of Castelo Branco, Portugal).
Many authors place the origin of this bridge in the Roman Period. Recent studies ([PERSON], 1996) reveal that medieval ashlars and several marks typical during this period are present in the building. This fact indicates that Segura Bridge only conserves the original Roman construction in the final arches, abutments, and pillars. The fabric of these structural elements is the typical ashlars with hewn rims.
According to [PERSON] (2005), the central arches of the bridge were rebuilt in the second half of the XVI century, after a collapse caused by a major flood of the river in 1565. In the bridge reconstruction, the Roman fabric and new masonry, imitating the Roman hewn stones, were used.
In the reconstruction works the drainage area was increased by making the central arches higher. Consequently, the original horizontal grade line was converted into a steeply sloped platform. A later reform of the bridge is documented by [PERSON] (2005), when the spandrel walls were raised and the grade line was made horizontal again.
Segura Bridge is one of the best-conserved Roman bridges in the Iberian Peninsula. It is also considered a replica of the nearby famous Alcantara Bridge, recognized as one of the finest bridges built by the Romans anywhere in the world. It is made of five arches and four pillars with triangular cutwaters on the upstream side.
## 3 3D Modeling of Segura Bridge
### Instrumentation
The point cloud acquisition was performed with a long-range time-of-flight TLS, a Riegl LMS-Z390i. This scanner measures distances in a range of 1.5 to 400 meters. The nominal accuracy is 6 mm at 50 m under normal illumination and reflectivity conditions. It offers great resolution, with a minimum angle stepwidth of 0.002\({}^{\circ}\) and a maximum of 0.2\({}^{\circ}\). The instrument has a field of view of 360 sexagesimal degrees in the horizontal plane and 80 degrees in the vertical plane, and a measurement rate between 8,000 and 11,000 points per second. The operations of recording point clouds with this Riegl scanner are controlled with the RiScan PRO software (Riegl).
A Nikon D200 digital camera with a CCD (Charge-Coupled Device) sensor, DX format, with a resolution of 10.2 million pixels, was mounted on the laser scanner in order to obtain the RGB information. The CCD sensor surpasses CMOS sensors in dynamic range (the relation between the saturation level and the threshold for signal reception) and in terms of noise. The Nikon D200 camera is equipped with a 3D Color Matrix Metering II (AE) system with an RGB exposure/color sensor of 1,005 pixels. The lens used for the measurement was a wide-angle Nikkor 20 mm f/2.8D, with a view angle of 70 degrees.
A Leica TCR 1102 total station was employed in the topographic survey. This instrument is equipped with a laser distance meter, with a measurement range of 80 meters under standard illumination conditions.
### Data acquisition
In order to capture the geometry of the whole bridge, several scan positions around the structure were required. Seven positions were finally used, and the point clouds acquired from each station were then aligned thanks to flat reflecting targets located on different planes in the surroundings.
Once the scanner was positioned, the first task consisted of calibrating the camera mounting, that is, calibrating the camera sensor position with respect to the laser scanner coordinate system. This was done through the identification of seven common reflecting targets between the point clouds and the photographs taken with the camera, distributed around the whole field of view of both instruments.
Figure 1: Downstream view of Segura Bridge.
Figure 2: Different scan positions and field of view from each station around the bridge.
A transformation matrix was then calculated for the camera coordinate system, with a mean residual distance of less than 1 pixel.
The scanning procedure consisted of measuring two kinds of point clouds at each scan position. Firstly, an overview with low resolution (0.2\({}^{\circ}\)) was recorded; this information was used to create a digital terrain model of the bridge surroundings. Then, a detailed scan was captured in order to obtain an accurate geometry of the bridge. The scanner resolution was fixed at 0.02\({}^{\circ}\) (which implies around 1 cm separation between consecutive points on the bridge surface at a distance of 30 meters). The mean time consumption was around 15 minutes per detailed scan.
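The ~1 cm spacing quoted above follows from the small-angle approximation s ≈ d·θ; a minimal check in Python (illustrative names, not from any survey software):

```python
import math

def point_spacing(distance_m, angle_step_deg):
    """Approximate spacing between consecutive scan points on a surface
    at a given range, using the small-angle approximation s = d * theta."""
    return distance_m * math.radians(angle_step_deg)

# A 0.02-degree step at 30 m range gives roughly 1 cm point spacing.
spacing = point_spacing(30.0, 0.02)
```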
Having the bridge leveled with high precision is essential in order to analyze the vaults' symmetry. For this reason, a topographic survey was performed, and the coordinates of the center of each target were measured with a total station. This task allowed us not only to level the bridge, but also to improve the registration of the different scans.
### Point clouds processing
After the field work finished, all the acquired information was processed in the RiScan Pro software. The first task was the registration (alignment) of the different point clouds, with a final error of 8 mm. Then all the points that did not belong to the bridge surface were manually deleted.
Finally, the global point cloud of the bridge was composed of 8,296,667 points. In order to obtain a regular density of points defining the surfaces, an octree filter was applied to the global point cloud. This process reduced the point cloud to 1,259,148 points (Figure 3).
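An octree filter of this kind can be approximated, at a single depth level, by voxel-grid downsampling. The sketch below (illustrative, not the RiScan Pro implementation) keeps one centroid per occupied cell to regularise point density:

```python
def voxel_downsample(points, voxel_size):
    """Simplified octree-style filter: keep one representative point
    (the centroid) per occupied voxel, yielding a regular point density.

    points: iterable of (x, y, z) tuples; voxel_size: cell edge in metres.
    """
    cells = {}
    for x, y, z in points:
        # Integer cell index along each axis (floor division).
        key = (int(x // voxel_size), int(y // voxel_size), int(z // voxel_size))
        cells.setdefault(key, []).append((x, y, z))
    # One centroid per occupied cell.
    return [tuple(sum(c) / len(c) for c in zip(*pts)) for pts in cells.values()]
```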
The point cloud of the second vault was isolated in order to apply the non parametric algorithm to this structural element. A total of 32282 points defined the surface of the second vault.
### 3D Polygonal model
Once a regular density of points was obtained, the next step consisted of converting the point cloud into a surface-based 3D model. This operation was performed by means of a planar triangulation, based on the Delaunay triangulation, and was executed in the RiScan Pro software.
The next task in this modelling process consisted of adding the realistic texture to the model surface. For this task, the photographs acquired with the Nikon D200 camera were used. Then, orthophotos of the main planes of the structure were created to be used as a complement in the metrical analysis.
In order to analyze the 3D model of the bridge by means of CAD systems, it was necessary to create sections of the structure. In this sense, a total of 115 equidistant sections were created from the 3D model and exported to a CAD system.
## 4 Results
### CAD Analysis
The sections created in RiScan Pro were exported as DXF files to a CAD system. There the maps of the structure were outlined with the help of orthophotos. Figure 4 shows the orthophoto and the plane of the upstream wall of the bridge, where the arches' voussoirs were also drawn.
After the modelling process, the metrical exploitation consisted of the following sections:
- Arch ring thickness/span quotient
A summary of the main geometric parameters of bridge's elements is shown in Table 1. It is important to mention that in the nomenclature for bridges, arches are consecutively numbered from left to right from the upstream side.
The Segura Bridge supports a road, so it is difficult to know exactly where the bridge begins and ends, and consequently its total length. However, if we consider that the total length of the bridge is defined by the parapet length, the bridge measures 87.5 meters.
Figure 4: Orthophoto and plane of the upstream wall.
Figure 3: Global point cloud of the Segura Bridge.
#### 4.1.1 Arches analysis
#### Typologies
Knowing the typology of an arch, we can identify when a bridge was constructed or restored ([PERSON], 2003). In this sense, semi-circular arches are typical of the Roman period, gothic arches were the predominant arch typology during the medieval period, and elliptic or segmental arches date from later times. Based on these considerations, the arches of the Segura Bridge should be round arches, and consequently the quotient between the arch rise (R) and the theoretical radius of the circumference (Rc) should be 1. Table 2 shows this quotient for each arch in the upstream wall.
Results confirm that the restoration was executed in the XVI century ([PERSON], 2005). The first and fifth arches conserve the Roman metrics; the three central arches were elevated, but the rise of the arches was reduced relative to the semi-span.
#### Symmetries
In order to analyze the symmetry of each arch, and consequently the possible geometric (structural) changes with respect to the original construction, a simple mathematical criterion was set out: in a symmetric arch, the quotient between the left semi-span and the right semi-span is 1. If this proportion does not hold, the asymmetry percentage is calculated according to Equation 1.
\[\%_{\text{asymmetry}}=\left|1-\frac{S_{\text{left}}}{S_{\text{right}}}\right|\times 100\tag{1}\]
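A direct transcription of this asymmetry criterion into Python (an illustrative helper, not the authors' code); arch 2, whose asymmetry is reported below as close to 10%, would correspond to semi-spans in roughly a 0.9 : 1 ratio:

```python
def asymmetry_percent(left_semispan, right_semispan):
    """Asymmetry of an arch as a percentage: a perfectly symmetric arch,
    with equal semi-spans, scores 0 %."""
    return abs(1.0 - left_semispan / right_semispan) * 100.0
```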
#### 4.1.3 Walls
Internal irregularities in structures can cause an alteration in the equilibrium of thrusts. Deformations on the wall surface normally appear associated with this fact. Thus, after examining all the transversal sections along the 3D model of the bridge, a deformation in the downstream right abutment was detected. This bend was quantified in the CAD software with respect to the vertical line of the wall in that section. Figure 6 illustrates the deformation measurement, which shows an inclination of almost one degree.
### Analysis with nonparametric algorithm
The geometry of the second vault, where the percentage of asymmetry is close to 10%, is analyzed in detail by obtaining cross sections of the vault through a non parametric estimation based on local bivariate kernel smoothers. An important advantage of this approach is that it allows the estimation of the cross sections without preestablishing any parametric shape.
Binning is used as a computational acceleration technique. Equidistant grid points in the 3D space are defined along the X, Y, and Z axes. Simple binning consists in assigning to each grid point a weight equal to the number of observations in its bin. In the so-called linear binning, this weight is the sum of the inverse relative distances between the observations and the eight closest grid points. In this way, the original point cloud sample is transformed into a binned cloud where the number of points is significantly lower.
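The binning step is easiest to see in one dimension, where each observation splits a unit weight between its two nearest grid points in inverse proportion to distance. The following sketch (illustrative names; 1-D instead of the paper's 3-D grid with eight neighbours) mirrors the linear-binning rule described above:

```python
def linear_binning_1d(samples, grid):
    """1-D linear binning: each observation distributes a unit weight
    between its two neighbouring grid points, in inverse proportion to
    distance, so the reduced 'binned cloud' preserves local mass.

    grid must be equidistant and sorted ascending.
    """
    weights = [0.0] * len(grid)
    step = grid[1] - grid[0]
    for x in samples:
        i = int((x - grid[0]) / step)
        i = max(0, min(i, len(grid) - 2))   # clamp to a valid interval
        frac = (x - grid[i]) / step          # 0 at grid[i], 1 at grid[i+1]
        weights[i] += 1.0 - frac
        weights[i + 1] += frac
    return weights
```

Total weight equals the sample count, so the binned cloud can stand in for the full point cloud in the kernel estimation.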
Once the binned sample is built, local constant kernel estimators are used to obtain cross sections along the vault. The estimator assumes that the real surface is continuous and smooth enough that it can be approximated locally by a constant. The non parametric estimator contains a smoothing parameter that determines the adjustment to the real shape. The mathematics of the estimator, as well as the methodology for choosing the most appropriate smoothing bandwidth, are described in [PERSON] et al. (2008) and [PERSON] (1994).
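The local constant estimator referred to here is the Nadaraya-Watson smoother. A 1-D sketch with a Gaussian kernel (illustrative only; the paper works on 3-D binned data and selects the bandwidth as cited above):

```python
import math

def local_constant_estimate(x0, xs, zs, bandwidth):
    """Nadaraya-Watson (local constant) kernel estimate of the surface
    height at x0 from noisy samples (xs, zs), using a Gaussian kernel.

    The bandwidth is the smoothing parameter: larger values average over
    a wider neighbourhood, giving a smoother (flatter) estimate.
    """
    ws = [math.exp(-0.5 * ((x - x0) / bandwidth) ** 2) for x in xs]
    return sum(w * z for w, z in zip(ws, zs)) / sum(ws)
```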
Once the surface was obtained (using the optimal bandwidth), XZ cross-sections of the vault were extracted along the Y axis. The proposed algorithm allows the extraction of cross-sections at as many points of the reconstructed 3D model of the vault as are needed to precisely define the shape. Figure 7 shows four cross-sections obtained at y = 1, 2.5, 4, and 4.6 m, with the two halves overlapped in order to compare them. Each section also includes the asymmetry curve obtained from the differences between the two semi-cross-sections, together with its 95% confidence intervals. Bootstrap methods were used to obtain these confidence intervals.
Figure 7 reveals significant differences between semi-cross-sections along the vault. The asymmetry is of about a few centimeters close to the keystone, but at the springings of the vault the differences reach levels close to one meter. The magnitude of the asymmetry seems to indicate that the reconstruction of that arch must have been performed on just one half of the vault instead of being a total reconstruction.
Figure 6: Deformation in the right downstream wall in a transversal section of the bridge.
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|} \hline
**Pillar** & **Width** & **Arch** & **Span** & **Quotient** & \\ \hline
 & & 1 & 7.423 & 2.389 & S1/P1 \\ \hline
1 & 3.107 & & & 2.464 & S2/P1 \\ \hline
 & & 2 & 7.656 & 2.437 & S2/P2 \\ \hline
2 & 3.142 & & & 3.316 & S3/P2 \\ \hline
 & & 3 & 10.419 & 3.352 & S3/P3 \\ \hline
3 & 3.108 & & & 2.647 & S4/P3 \\ \hline
 & & 4 & 8.227 & 2.669 & S4/P4 \\ \hline
4 & 3.083 & & & 2.927 & S5/P4 \\ \hline
 & & 5 & 9.023 & & \\ \hline
\end{tabular}
\end{table}
Table 5: Slenderness (span/pillar-width quotients, in metres) in the Segura Bridge.
Figure 7: Cross-sections. For each case, a XZ graph with both halves overlapping each other is shown (left column), together with the asymmetry graph plotted with 95% confidence intervals (right column).
The analysis of the asymmetry curves shown in Figure 7 reveals the presence of peaks that could correspond to displaced ashlars, in the curves for Y = 1 m and Y = 4.6 m. In general, it is useful to include the confidence intervals together with the difference graphs because they allow the easy detection of statistically significant asymmetries.
## 5 Conclusions
Based on the results shown in the present work, terrestrial laser scanning represents a powerful tool for bridge inspection based on geometric analysis. High accuracy combined with fast data acquisition constitutes the main advantage of this technology.
Thanks to the 3D model of the bridge and the dimensional analysis carried out in CAD systems, an accurate detection of pathologies was performed. In this sense, the asymmetry in arch 2 (almost 10%), combined with the skew of the first pillar, indicates a possible settlement of the support at pillar 1.
The algorithm used for the geometric analysis in vaults, together with the graphical representation of overlapping semi-cross-sections allows visual inspection and quantification of asymmetries and distortions, facilitating diagnosis based on the arch geometry. Furthermore, by working with (X,Y,Z) coordinates, the damage is always located and measured in the structure.
## Acknowledgments
This study was supported by research grants from the Spanish Ministry of Science and Innovation (Grants N\({}^{\text{e}}\) BIA2006-10259 and BIA2009-08012) and Xunta de Galicia (PGIDIT07 PXIB300191 PR).
## References
* [PERSON] et al. (2007) [PERSON], [PERSON], [PERSON], [PERSON] [PERSON], [PERSON], [PERSON]. 2007. Digital Photogrammetry, GPR and computational analysis of structural damages in a mediaeval bridge. _Engineering Failure Analysis_ (14), p.1444-1457.
* [PERSON] et al. (2005) [PERSON], [PERSON], [PERSON], [PERSON] 2005 Documentation of Bridge Inspection Projects Using Virtual Reality Approach. _Journal of Infrastructures Systems_ 11(3), 172-179.
* [PERSON] (1994) [PERSON]. 1994. Fast Computation of Multivariate Kernel Estimators. _Journal of Computer Graphics Stat_ 3, p. 433-45.
* [PERSON] et al. (2008) [PERSON], [PERSON] [PERSON], [PERSON] [PERSON], [PERSON] [PERSON]. 2008. From laser point clouds to surfaces: Statistical nonparametric methods for three-dimensional reconstruction. _Journal of Computer Aided Design_, 40, p. 646-652.
* [PERSON] and [PERSON] (1989) [PERSON]; [PERSON] y [PERSON], (1989) Puentes historicos de Galicia. Galicia, Colegio Oficial de Ingenieros de Caminos, Canales y Puertos, Xunta de Galicia.
* [PERSON] (2005) [PERSON], (2005) _La Construccion de Puentes Romanos en Hispania_. 2\({}^{\text{a}}\) Edicion. Santiago de Compostela, Xunta de Galicia.
* [PERSON] (2006) [PERSON] 2006. Escritos sobre la construccion cohesiva y su funcion en la arquitectura. Madrid: Instituto Juan de Herrera.
## References
* [PERSON] (1996) [PERSON] 1996. Puentes Romanos Peninsulaes: Tipologia y Construccion. In: _Actas del I Congreso Nacional de Historia de la Construccion_. Madrid, Spain.
* [PERSON] (2003) [PERSON] 2003. An Endeavour to Identify Roman Bridges Built in Former Hispania. In: _Actosbook of the first International Congress on Construction History_. Madrid, Spain. 20 pp.
* [PERSON] et al. (2007) [PERSON]; [PERSON]; [PERSON]; [PERSON]; [PERSON] [PERSON]; [PERSON]; [PERSON] and [PERSON], (2007) \"Documentation and evaluation of historic masonry arch bridges by means of geomatic techniques\". _Proceedings of the 5\({}^{th}\) International Conference on Arch Bridges_. Madeira, Portugal. pp. 373-380.
* [PERSON] et al. (2008) [PERSON]; [PERSON], [PERSON]; [PERSON]; [PERSON]; [PERSON]; [PERSON]; [PERSON]; [PERSON] 2008. Terrestrial Laser Scanning in Measurement of Structures. _The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences_. Beijing, China, Vol. XXXVII. Part B5. pp. 527-532.
* [PERSON] et al. (2008) [PERSON]; [PERSON] 2008. Terrestrial Laser Scanning for Deformation Monitoring of the Thermal Pipeline Traversed Subway Tunnel Engineering. _The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences_. Beijing, China, Vol. XXXVII. Part B5. pp. 491-494.
* [PERSON] et al. (2008) [PERSON]; [PERSON]; [PERSON]; [PERSON] 2008. Multidisciplinar Approach to Historic Arch Bridges Documentation. _The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences_. Beijing, China, Vol. XXXVII. Part B5. pp. 247-252.
* [PERSON] et al. (2009) [PERSON]; [PERSON]; [PERSON]; [PERSON], [PERSON] 2009. GPR Surveying of Historical Bridges in Spain. _Proceedings of the 13\({}^{th}\) International Conference on Ground Penetrating Radar_. Granada, Spain. 5 pp.
* [PERSON] and [PERSON] (2004) [PERSON] and [PERSON]. 2004. Lidar Data Segmentation and Classification Based on Octree Structure. _The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences_. Istanbul, Turkey. Vol. XXXV. Part B3. 6 pp.
* [PERSON] and [PERSON] (2008) [PERSON]; [PERSON] 2008. Terrestrial Laser Scanning for Monitoring Load Tests on the Felsenau Vaiduct (CH). _The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences_. Beijing, China, Vol. XXXVII. Part B5. pp. 555-562.
|
isprs
|
Modelling masonry arches shape using terrestrial laser scanning data and nonparametric methods
|
J. Armesto, Javier Roca-Pardiñas, H. Lorenzo, P. Arias
|
https://doi.org/10.1016/j.engstruct.2009.11.007
| 2010
|
CC-BY
|
isprs/dc9a4c02_fd5a_4317_a84c_ad191bcd0f92.md
|
# Cultural Heritage Digital Preservation through AI-Driven Robotics
[PERSON]
1 Istituto Italiano di Tecnologia, Center for Cultural Heritage Technology, 30172 Venice, Italy - (arianna.traviglia, riccardo.giovanelli)@iit.it
[PERSON]
1 Istituto Italiano di Tecnologia, Center for Cultural Heritage Technology, 30172 Venice, Italy - (arianna.traviglia, riccardo.giovanelli)@iit.it
[PERSON]
1 Istituto Italiano di Tecnologia, Center for Cultural Heritage Technology, 30172 Venice, Italy - (arianna.traviglia, riccardo.giovanelli)@iit.it
[PERSON]
1 Istituto Italiano di Tecnologia, Center for Cultural Heritage Technology, 30172 Venice, Italy - (arianna.traviglia, riccardo.giovanelli)@iit.it
[PERSON]
1 Istituto Italiano di Tecnologia, Center for Cultural Heritage Technology, 30172 Venice, Italy - (arianna.traviglia, riccardo.giovanelli)@iit.it
###### Abstract
This paper introduces a novel methodology developed for creating 3D models of archaeological artifacts that reduces the time and effort required by operators. The approach uses a simple vision system mounted on a robotic arm that follows a predetermined path around the object to be reconstructed. The robotic system captures different viewing angles of the object and assigns 3D coordinates corresponding to the robot's pose, allowing it to adjust the trajectory to accommodate objects of various shapes and sizes. The angular displacement between consecutive acquisitions can also be fine-tuned based on the desired final resolution. This flexible approach is suitable for different object sizes, textures, and levels of detail, making it ideal for both large volumes with low detail and small volumes with high detail. The recorded images and assigned coordinates are fed into a constrained implementation of the structure-from-motion (SfM) algorithm, which uses the scale-invariant feature transform (SIFT) method to detect keypoints in each image. By combining a priori knowledge of the coordinates with the SIFT algorithm, low processing time can be ensured while maintaining high accuracy in the final reconstruction.
The use of a robotic system to acquire images at a pre-defined pace ensures high repeatability and consistency across different 3D reconstructions, eliminating operator errors in the workflow. This approach not only allows for comparisons between similar objects but also provides the ability to track structural changes of the same object over time.
Overall, the proposed methodology provides a significant improvement over current photogrammetry techniques by reducing the time and effort required to create 3D models while maintaining a high level of accuracy and repeatability.
## 1 Introduction
The use of 3D measurements and digital reconstruction in the domain of cultural heritage has become increasingly important in recent years due to technological advancements and greater accessibility to technologies that produce satisfactory results [16]. Digitally rendered artifacts play a significant role in safeguarding material culture, including small objects, grand architecture, and entire cultural heritage sites. This is evident in the increasing efforts to systematically digitise global cultural heritage. For instance, the European Commission has recently introduced the common European data space for cultural heritage, a "new flagship initiative" funded under the Digital Europe Programme (DIGITAL) aimed at accelerating the digital transformation of Europe's cultural sector and promoting the creation and reuse of content in the cultural and creative industries [15]. This initiative highlights the significance of generating and sharing high-quality digital data from cultural heritage entities in a collaborative and accessible way.
Producing digital twins of cultural heritage entities, which are virtual counterparts of physical products, assets, or systems that reflect the elements and dynamics of how these complex systems run and evolve over time [16, 17], is regarded as pressing and inevitable for purposes such as preservation, documentation, research, and public engagement.
With a vast amount of cultural heritage around the world, there is an urgent need to thoroughly digitise these entities, which necessitates developing methods to reduce the time required to record, process, reconstruct, and deliver accurate 3D reproductions. This, in turn, demands automation in every phase of the 3D modelling pipeline [14]. The ultimate goal of 3D digitisation in cultural heritage is to create accurate, detailed, and accessible digital twins of physical objects, assets, and systems.
One of the significant benefits of 3D digitisation in cultural heritage is the ability to preserve and protect physical objects and sites. Digital twins can function as a backup in case of damage or destruction of the original object, and they can also be used to study and understand the object without risking damage. Additionally, 3D digitisation can provide valuable information for conservation and restoration efforts, enabling experts to analyse the structure and condition of the object and identify potential issues that may arise in the future.
Another essential benefit of 3D digitisation in cultural heritage is the ability to make these objects and sites accessible to a broader audience [18]. Digital twins can be shared online or through virtual reality experiences, enabling people from all over the world to experience and learn about these objects and sites. This can be particularly valuable for objects and sites that are difficult to visit or are located in remote areas.
Furthermore, digital twins can play a crucial role in predictive maintenance of physical assets. By continuously monitoring the data from sensors installed on the asset, the digital twin can detect any anomalies and alert maintenance personnel before significant damage occurs ([PERSON] et al., 2023). This can help prevent costly downtime and repairs.
Overall, digital twins have the potential to revolutionise cultural heritage management and become real \"knowledge models\" ([PERSON], 2022). As technology continues to evolve and become more sophisticated, we can expect to see even more innovative applications in the future.
In this article, after providing a comprehensive summary of current developments in both the acquisition and resolution of 3D models, we present a novel approach to produce precise and high-fidelity 3D models of archaeological artifacts by utilising 3D data acquisition techniques in combination with a computer vision system mounted on a robotic arm that follows a predetermined path. This ground-breaking technique not only minimises the need for manual labour but also ensures exceptional levels of accuracy and precision in the resulting models.
## 2 3D data acquisition
### Acquisition methods
Various methods are used in the field of Cultural Heritage to acquire, elaborate, and store 3D models, including Structure-from-Motion (SfM), Structured Light Scanning, Laser Scanning, LiDAR, and others.
Structure-from-Motion (SfM) is a popular 3D reconstruction technique that recovers the 3D volume of an object from a series of images showing different views, recorded by a single camera ([PERSON] and [PERSON], 1992). This methodology involves several steps, such as camera movement around the object, acquisition of multiple images, identification of features, matching, and assignment to a position in three-dimensional space. The processing time and the resolution of the reconstructed volume are proportional to the number of different views captured. The higher the number of captured profiles, the higher the level of detail and the processing time required to reconstruct the object in 3D. However, capturing a higher number of profiles also leads to longer digitisation time, which can be a constraint when reconstructing a large number of entities with the proper quality.
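The assignment of keypoints to positions in three-dimensional space reduces to triangulation once the camera poses are known, which is the situation the paper's robot-constrained variant exploits. The following is a minimal sketch of two-view triangulation by the direct linear transform; the function name, intrinsics, and toy camera setup are our illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def triangulate_dlt(P1, P2, x1, x2):
    """Triangulate one 3D point from two pixel observations.

    P1, P2: 3x4 camera projection matrices (known a priori, e.g. from
    the robot's pose, rather than estimated by unconstrained SfM).
    x1, x2: (u, v) pixel coordinates of the matched keypoint.
    """
    # Each observation contributes two linear constraints: x × (P X) = 0.
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # Homogeneous solution: right singular vector of the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

# Toy check: two cameras with identity rotation, 1 m apart along x.
K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.2, -0.1, 4.0])
x1 = P1 @ np.append(X_true, 1.0)
x1 = x1[:2] / x1[2]
x2 = P2 @ np.append(X_true, 1.0)
x2 = x2[:2] / x2[2]
X_est = triangulate_dlt(P1, P2, x1, x2)
```

With noise-free projections the DLT recovers the point exactly; with real matches, many such points form the (sparse or dense) cloud.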
Structured Light Scanning is another 3D reconstruction technique that uses a projector and a camera to capture a series of patterns projected onto an object from different angles. The projected patterns deform over the object's surface; these deformations are captured by the camera and used to reconstruct a 3D model. Structured Light Scanning has several advantages, including high accuracy, the ability to capture color information, and fast scanning speed. However, this method also has some drawbacks, including sensitivity to ambient light, limited range, and difficulty in capturing fine details ([PERSON] et al., 2019). Additionally, the equipment required for this technique can be expensive, and the setup process can be time-consuming.
Laser Scanning is a popular technique for 3D digital documentation and preservation of cultural heritage assets. A laser beam is directed onto the object, and the scanner records the geometry and texture of the surface by measuring the time it takes for the laser to reflect back. The resulting point cloud data can be used to create high-resolution 3D models of objects, buildings, and entire heritage sites, providing valuable information for research, conservation, and public engagement. Laser scanning can capture complex shapes and details that may be difficult to obtain with other techniques. However, laser scanning can be expensive, requires technical expertise to operate, and may not be suitable for objects that are sensitive to light or heat ([PERSON] et al., 2017).
LiDAR (Light Detection and Ranging) is another remote sensing technology widely used in cultural heritage applications. LiDAR sends pulses of light to the surface of an object and measures the time taken for the reflected signal to return, allowing the creation of high-resolution 3D point clouds of the object. LiDAR can quickly capture large areas and is particularly useful for outdoor archaeological sites and large structures. It is also capable of capturing details of inaccessible areas, such as the interior of caves or the tops of buildings. However, LiDAR can be expensive and requires specialised equipment, making it less accessible than other 3D scanning methods. Additionally, its accuracy can be affected by atmospheric conditions and vegetation, which can result in incomplete or distorted data.
Apart from the aforementioned methods, other techniques used for 3D digitisation in cultural heritage include photogrammetry, multi-view stereo (MVS), and time-of-flight (ToF) cameras.
Photogrammetry involves capturing multiple photographs of an object from different angles and using software to reconstruct a 3D model.
MVS is a technique similar to SfM that involves capturing images of an object from multiple viewpoints and using computer algorithms to reconstruct a 3D model.
ToF cameras are sensors that emit infrared light and measure the time it takes for the light to reflect back, providing depth information that can be used to create 3D models.
Each of these techniques has its advantages and disadvantages, and the choice of technique will depend on factors such as the size and complexity of the object, the desired level of detail, and the available resources.
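As a concrete instance of the depth-based methods above, the per-pixel depth a ToF camera delivers converts to a point cloud by inverting the pinhole projection. A minimal sketch follows; the intrinsic values are arbitrary illustrations, not parameters of any specific sensor:

```python
import numpy as np

def depth_to_pointcloud(depth, fx, fy, cx, cy):
    """Back-project a depth image (metres) into an Nx3 point cloud using
    the pinhole model: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]  # drop invalid (zero-depth) pixels

depth = np.full((4, 4), 2.0)  # flat surface 2 m in front of the camera
pts = depth_to_pointcloud(depth, fx=500.0, fy=500.0, cx=2.0, cy=2.0)
```

A flat depth map yields a planar cloud at constant Z, which is a handy sanity check before feeding real frames through the same function.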
In heritage studies and archaeological practice, SfM is still considered the standard due to the increasing quality levels reached by photo cameras and the lower costs ([PERSON] and [PERSON], 2016). Additionally, the superior quality of textures obtained through SfM technique, which is of utmost importance to archaeologists and heritage experts, make SfM an attractive option ([PERSON] et al., 2022). However, SfM has its limitations, such as the slow processing when compared to other methodologies, and the potential for operator inaccuracy in the process of capturing the needed pictures.
The use of advanced technologies like AI-powered robotics can automate the implementation process of some of these 3D digitisation techniques. Robotics has become increasingly necessary in various industries due to the demand for efficiency, accuracy, and cost-effectiveness. The field of Cultural Heritage has similar needs. Automation can significantly enhance the implementation process of the data acquisition technique by reducing the manual labour involved in capturing and processing data, which can be time-consuming and prone to human error. Moreover, automation can enable the processing of large amounts of data in a fraction of the time it would take manually, allowing for a quicker and more comprehensive analysis of cultural heritage assets.
Therefore, given the current state of the art in 3D modelling techniques and the need for faster and more accurate methods, the use of automation is an attractive option. By automating the acquisition techniques, we can minimise the limitations of manual effort and improve the quality and efficiency of 3D modelling processes.
To achieve this objective, we propose a workflow that integrates AI-powered robotics into the SfM technique. This workflow aims to automate the entire process of capturing images, identifying features, matching them, and assigning them to a position in three-dimensional space. By automating this process, we can achieve faster and more accurate results with minimal human intervention, thus increasing the productivity and efficiency of the 3D modelling process.
### Resolution
Resolution is a crucial factor for 3D reconstruction techniques. The quality of the 3D model improves as the resolution value decreases (smaller values denote finer detail), allowing for more precise and accurate reconstructions. The resolution is primarily determined by the algorithm used for the reconstruction and the input images. Previously, the resolution of a 3D model was determined by the object's diameter and the camera's spatial acquisition rate. However, advancements in computer vision and technology have made this relationship unreliable, producing only qualitative results. Despite this, the notion persists that a more refined observation angle leads to higher-quality images and a finer final resolution. Therefore, an accurate measurement of a reconstructed 3D model's resolution may indicate the quality of the reconstruction itself ([PERSON] _et al._, 2020).
The algorithms to measure the resolution of 3D models can be grouped in two major categories ([PERSON] 2002):
* techniques based on the comparison of averaged subsets of the data, such as the Fourier Shell Correlation (FSC) ([PERSON] _et al._ 2005) or the Differential Phase Residual (DPR) ([PERSON] _et al._ 1981).
* algorithms based on the Fourier transform of individual images, such as the Q-factor ([PERSON] _et al._ 1985) and the spectral signal-to-noise ratio (SSNR) ([PERSON] _et al._ 1987).
The first group of algorithms has a significant advantage over the second, as it can measure the resolution in both 2D and 3D ([PERSON], 2002). The FSC is the dominant method used to measure resolution and has become the standard in recent years. FSC was proposed in 1987 by [PERSON] and [PERSON] and is the 3D extension of the Fourier Ring Correlation (FRC). FSC measures the cross-correlation between two 3D models in the Fourier space created from two subsets of the same dataset. This method compares equivalent regions of the two models based on frequency and determines the resolution as the frequency at which the FSC drops below a specific threshold. The threshold is conventionally kept at 0.143, derived from the correlation between a reconstructed density map and a perfect reference map.
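The FSC idea can be sketched compactly: Fourier-transform two reconstructions of the same object, bin the coefficients into concentric frequency shells, and compute the normalized cross-correlation within each shell; the resolution is read off where the curve drops below the conventional 0.143 threshold. The following numpy sketch simplifies the shell binning and is not a reference implementation:

```python
import numpy as np

def fourier_shell_correlation(vol1, vol2, n_shells=16):
    """FSC between two cubic volumes of equal shape: correlate their
    Fourier coefficients within concentric frequency shells."""
    F1 = np.fft.fftshift(np.fft.fftn(vol1))
    F2 = np.fft.fftshift(np.fft.fftn(vol2))
    n = vol1.shape[0]
    grid = np.indices(vol1.shape) - n // 2
    r = np.sqrt((grid ** 2).sum(axis=0))            # radial frequency index
    shells = np.minimum((r / (n // 2) * n_shells).astype(int), n_shells - 1)
    fsc = np.empty(n_shells)
    for s in range(n_shells):
        m = shells == s
        num = np.abs((F1[m] * np.conj(F2[m])).sum())
        den = np.sqrt((np.abs(F1[m]) ** 2).sum() * (np.abs(F2[m]) ** 2).sum())
        fsc[s] = num / den if den > 0 else 0.0
    return fsc

rng = np.random.default_rng(0)
vol = rng.standard_normal((32, 32, 32))
fsc_same = fourier_shell_correlation(vol, vol)               # identical maps
fsc_noise = fourier_shell_correlation(vol, rng.standard_normal((32, 32, 32)))
```

Identical volumes give FSC = 1 at every shell, while two independent noise volumes decorrelate across the non-trivial shells, which mirrors how the curve is used to locate the resolution cut-off.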
However, the efficacy of FSC has been widely debated due to the structural limitations introduced by the FSC itself. The rationale behind splitting the dataset to create two different models inevitably biases the final resolution, and FSC produces only a global value that does not account for the local peculiarities of the reconstructed model. To overcome these problems, the ResMap algorithm was proposed ([PERSON] _et al._, 2013). ResMap detects the features of a model by fitting a 3D sinusoidal function at different points of the volume and saves the wavelength of the smallest sinusoid detectable above noise.
### A novel semi-automatised methodology
The 3D reconstruction methodology proposed in this paper is based on the Structure-from-Motion (SfM) technique, which reconstructs the 3D volume of an object from a series of 2D images captured from different views by a camera. Typically, SfM-based reconstruction methodologies identify feature points (also called keypoints) in the acquired images ([PERSON] _et al._, 2010) and match them across the entire series ([PERSON] _et al._, 2019). The matched keypoints are further refined to remove any outliers, a process commonly known as bundle adjustment. Thus, the position of the keypoints in different images is utilised to simultaneously compute the camera's pose and assign a set of 3D coordinates to each keypoint, resulting in a 3D sparse point cloud of the scene.
However, the quality of the reconstruction and the resolution of the reconstructed scene in SfM-based techniques are significantly affected by the number of views and the accuracy of the pose estimation. To overcome these limitations, an automated routine was designed to acquire a high number of images at a constant pace. A robotic arm UR3 equipped with an RGB camera mounted on its wrist was programmed to perform circular trajectories centred around the scene to reconstruct in 3D at different height values. The radius of these circumferences decreases with the height, creating a hemisphere around the object to be reconstructed. The number of circular trajectories and the acquisition rate define the Z resolution and the angular resolution, respectively. The robotic arm travels along circular trajectories, stopping to acquire images of the scene at a pace determined by the acquisition rate (Figure 1). The angular and Z resolution have a significant impact on the quality of the reconstructed 3D model. A higher acquisition rate results in a more resolved reconstruction. However, processing a large number of images can be computationally demanding, resulting in longer processing times and requiring more powerful workstations. Therefore, both resolution values need to be carefully selected, considering the details of the object to reconstruct, the desired final resolution, and the processing time.
Figure 1: Schematic representation of the 2D images acquisition system. A robotic arm UR3, equipped with a stereo camera, moves along circular trajectories of variable radius, drawing a hemisphere around the object to be reconstructed. The arm stops at a pre-defined pace, enabling the camera to acquire images. The acquisition points are labeled with black dots, corresponding to intersections between the circular trajectories (in violet), representing the Z resolution, and the acquisition rate (in cornflower blue), which sets the angular resolution.
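The hemispherical acquisition pattern described above can be sketched as follows; the ring spacing and function name are our illustrative assumptions, not the authors' trajectory planner:

```python
import numpy as np

def hemisphere_viewpoints(radius, n_rings, n_views_per_ring):
    """Camera positions on a hemisphere of given radius around the origin:
    circular rings whose radius decreases with height. n_rings sets the
    Z resolution; n_views_per_ring sets the angular resolution."""
    poses = []
    for i in range(n_rings):
        elev = (i + 0.5) * (np.pi / 2) / n_rings   # elevation above the horizon
        ring_r = radius * np.cos(elev)             # ring shrinks as height grows
        z = radius * np.sin(elev)
        for j in range(n_views_per_ring):
            az = 2 * np.pi * j / n_views_per_ring  # constant angular pace
            poses.append((ring_r * np.cos(az), ring_r * np.sin(az), z))
    return np.array(poses)

views = hemisphere_viewpoints(radius=0.5, n_rings=4, n_views_per_ring=12)
```

Every generated viewpoint lies at the same distance from the object's centre, so the camera-to-object range stays constant while the two resolution parameters independently densify the hemisphere.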
To determine the best values, an AI-based algorithm has been developed. The robot rotates around the center of the scene at a fixed distance, acquiring four images 90 degrees apart at a 45-degree angle from the horizontal plane. These images are combined to obtain a coarse 3D model to estimate the object's dimensions and center of mass, which are then used as input for the AI-based technique. This technique outputs the radius of the circumferences, and the angular and Z resolutions.
The ability to finely adjust both the Z and angular resolution values allows for a precise and dense acquisition of views, resulting in high-accuracy reconstruction of the scene. Additionally, these values can be easily modified to optimise the reconstruction for objects of different sizes. The pose of each view is determined by the pose of the tool center point, which is expressed as two sets of 3 coordinates to model its position and orientation, respectively (refer to Figure 4).
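A pose given as two 3-vectors — position plus orientation — converts to a homogeneous transform for the reconstruction pipeline. Assuming the orientation vector follows the axis-angle convention common on UR-style controllers (an assumption on our part), Rodrigues' formula gives:

```python
import numpy as np

def pose_to_matrix(position, rotvec):
    """Convert a (position, axis-angle rotation vector) pose into a
    4x4 homogeneous transform via Rodrigues' formula."""
    theta = np.linalg.norm(rotvec)
    if theta < 1e-12:
        R = np.eye(3)                      # zero rotation
    else:
        k = rotvec / theta                 # unit rotation axis
        K = np.array([[0.0, -k[2], k[1]],
                      [k[2], 0.0, -k[0]],
                      [-k[1], k[0], 0.0]]) # skew-symmetric cross-product matrix
        R = np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = position
    return T

# 90 degrees about z: the x-axis maps onto the y-axis.
T = pose_to_matrix([0.1, 0.2, 0.3], [0.0, 0.0, np.pi / 2])
```

Chaining such transforms for every stop along the trajectory yields the well-constrained view poses that the reconstruction algorithm consumes.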
Another advantage provided by the robotic arm is its ability to move with millimetre precision, thus overcoming one of the main limitations of the SfM methodology. By providing highly accurate pose values and eliminating operator errors in the workflow, the poses of all views are well constrained, simplifying and improving the performance of the reconstruction algorithm. The poses obtained are first used to correct any camera distortion, and then to assign a set of 3D coordinates to the different images, generating a dense and refined point cloud output.
Figure 3: Schematic representation of the reconstruction process. The robotic arm equipped with the vision system rotates around the center of the scene (the red pedestal) following pre-defined circular trajectories and acquires a series of images.
Figure 4: Schematic representation of the comparison between the conventional 3D reconstruction technique (**a**) and the one proposed in this paper (**b**). The proposed method uses a robot to rotate around the scene, which constrains and simplifies the methodology.
To evaluate the quality of the reconstruction, it is essential to measure the resolution of the 3D model. Therefore, the resulting 3D model obtained by the proposed methodology is processed using ResMap algorithm. This algorithm produces a local resolution map and associates a distribution of values to the resolution of the density map. A precise local estimation of the resolution enables structural analysis, setting a limit to the significant elements in the 3D reconstruction. Furthermore, the robustness of this methodology to noise helps avoid confusing high-quality results with high-frequency noise, as the latter may be visually appealing but can deteriorate the information ([PERSON] _et al._, 2015).
Figure 4 illustrates a comparison between the traditional methodology (a) and the one proposed in this paper (b), which eliminates the need for bundle adjustment and directly generates a dense point cloud. This approach not only improves efficiency but also reduces computational overhead, resulting in faster reconstruction times during the computation stage.
## 3 Conclusions
Acquiring images and transforming them into 3D models is a complex process that requires careful consideration of various factors. One of the most critical factors is the acquisition rate of the images, which can significantly impact the accuracy and consistency of the final reconstruction. Conventional methodologies often involve capturing images in an unordered sequence, which can lead to variations in the reconstructed 3D models, even when using the same image sequence.
To address this issue, our proposed method utilises a robotic arm to standardise the image acquisition process. The motion of the robotic arm is programmed to move in an optimised manner, capturing images at specific intervals, avoiding redundant information and high computational times while ensuring high-quality results. This approach offers several advantages over traditional methodologies. For one, it increases the repeatability and consistency of the reconstructions, as well as improving their accuracy and reliability.
A key advantage of using a robotic arm for image acquisition is that it enables us to constrain the acquisition rate. By capturing images at a consistent pace, we can obtain a more accurate representation of the scene, even when there are small variations due to degradation or damage. Because the images are captured at a consistent interval, the impact of any changes in the scene is minimised, resulting in a more accurate representation of the scene.
Another advantage of our proposed method is that it provides a more reliable basis for comparing different 3D models. With highly consistent acquisition rates, we can be confident that any differences between the models are due to actual changes in the scene, rather than variations in the image sequence or reconstruction process. This increases the accuracy of the final model and makes it easier to identify any changes or differences between the models.
Our proposed method offers promising opportunities for the future development of 4D applications in cultural heritage preservation. By adding time as the fourth dimension, we can track changes over time, which is crucial for the conservation and preservation of material culture, and enables the creation of more advanced Digital Twins beyond simple 3D scans. Monitoring structural variations in the reconstructed volumes through time makes it possible to track these transformations with precision. This has become an essential component of conservational maintenance for archaeological artifacts, enabling better-informed decisions about how to protect and preserve these valuable pieces of our collective cultural heritage.
Using our method, mistakes made during the evaluation of modifications are minimised, providing more reliable distinctions in conditions of damage or aging. This is particularly important in the field of cultural heritage preservation, where accurate and reliable data is crucial for making informed decisions about how to conserve and protect these valuable artifacts.
## References
* [PERSON] and [PERSON] (2016) [PERSON], [PERSON], 2016. Structure from motion (SFM) photogrammetry vs terrestrial laser scanning. In: [PERSON], [PERSON] (eds), _Geoscience Handbook 2016: AGI Data Sheets_, 5 th ed. Alexandria, VA: American Geosciences Institute, Section 20.1.
* [PERSON] and [PERSON] (2022) [PERSON], [PERSON], 2022. Application of the digital twin concept in cultural heritage. In: [PERSON], [PERSON], [PERSON] (eds.) _Proceedings of the 1st International Virtual Conference on Visual Pattern Extraction and Recognition for Cultural Heritage Understanding, 12 September 2022_.
* European Foundation (2023) European Foundation 2023. Common European data space for cultural heritage, accessed 28 April 2023, https://pro.europeana.eu/page/common-european-data-space-for-cultural-heritage
* [PERSON], [PERSON], and [PERSON] (1981) [PERSON], [PERSON], [PERSON] [PERSON], 1981. Computer averaging of electron micrographs of 40s ribosomal subunits. _Science_, 214(4527), 1353-1355.
* [PERSON] (2022) [PERSON] 2022. Digital Twin: a new perspective for cultural heritage management and fruition. _Acta IMEKO_ 11(1).
* [PERSON] _et al._ (2019) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], 2019. Structure from Motion Photogrammetry in Forestry: a Review. _Current Forestry Reports_ 5, 155-168. doi.org/10.1007/s40725-019-00094-3
* [PERSON] _et al._ (2022) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], 2022. A proposal of a new automated method for SfM/MVS 3D reconstruction through comparison of 3D data by SfM/MVS and handheld laser scanners. _PLoS One_, 17(7): e0270660. doi.org/10.1371/journal.pone.0270660
* [PERSON], [PERSON], and [PERSON] (1985) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], 1985. The structure of the stalk surface layer of a brine pond microorganism: correlation averaging applied to a double layered lattice structure. _Journal of Microscopy_, 139(1):63-74.
* _International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences_, XLII/2W3, 377-384. doi.org/10.5194/isprs-archives-XLII-2W3-377-2017
* [PERSON], [PERSON], and [PERSON] (2013) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON] 2013. Quantifying the local resolution of cryo-EM density maps. _Nature Methods_, 11(1):63-65.
* [PERSON] et al. (2023) [PERSON], [PERSON], [PERSON], [PERSON] [PERSON], 2023. Digital Twins and Enabling Technologies in Museums and Cultural Heritage: An Overview. _Sensors_ 23(3): 1583. doi.org/10.3390/s23031583
* [PERSON] (2002) [PERSON], 2002. Three-dimensional spectral signal-to-noise ratio for a class of reconstruction algorithms. _Journal of Structural Biology_, 138(1-2):34-46.
* [PERSON] et al. (2022) [PERSON], [PERSON], [PERSON], 2022. Accurate 3D models in both geometry and texture: An archaeological application. _Digital Applications in Archaeology and Cultural Heritage_, 27. doi.org/10.1016/j.daach.2022.e00248
* [PERSON] et al. (2019) [PERSON], [PERSON], [PERSON], [PERSON], 2019. Sources of errors in structured light 3D scanners. _Proceedings SPIE 10991, Dimensional Optical Metrology and Inspection for Practical Applications VIII_. doi.org/10.1117/12.2518126
* [PERSON] (2011) [PERSON], 2011. Heritage Recording and 3D Modeling with Photogrammetry and 3D Scanning. _Remote Sens._, 3, 1104-1138. doi.org/10.3390/rs3061104
* [PERSON] et al. (2015) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], 2015. How does image noise affect actual and predicted human gaze allocation in assessing image quality? _Vision research_ 112: 11-25.
* [PERSON] and [PERSON] (1992) [PERSON], [PERSON], 1992. Shape and motion from image streams under orthography: a factorization method. _International Journal of Computer Vision_, 9:2, 137-154.
* [PERSON] (2022) [PERSON], 2022. Museums and Digital Technologies: To what extent can digitization of museums collections help access, promote and preserve cultural heritage? Case studies on Mauritshuis and Kunstmuseum, The Hague.
* [PERSON] et al. (1987) [PERSON], [PERSON], [PERSON], [PERSON], 1987. A new resolution criterion based on spectral signal-to-noise ratios. _Ultramicroscopy_, 23(1):39-51.
* [PERSON] et al. (2000) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], 2000. Single-particle electron cryo-microscopy: towards atomic resolution. _Quarterly Reviews of Biophysics_, 33(4):307-369.
* [PERSON] and [PERSON] (2005) [PERSON], [PERSON], [PERSON], [PERSON], 2005. Fourier shell correlation threshold criteria. _Journal of Structural Biology_, 151(3):250-262.
* [PERSON] et al. (2010) [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON] 2010. Detailed analysis and evaluation of keypoint extraction methods. _2010 International Conference on Computer Application and System Modeling (ICCASM 2010)_. Vol. 2. IEEE.
* [PERSON] and [PERSON] (2009) [PERSON], [PERSON], [PERSON], [PERSON], 2009. Fast normalized cross-correlation. _Circuits, Systems and Signal Processing_, 28(6):819-843.
|
isprs
|
CULTURAL HERITAGE DIGITAL PRESERVATION THROUGH AI-DRIVEN ROBOTICS
|
G. Marchello, R. Giovanelli, E. Fontana, F. Cannella, A. Traviglia
|
https://doi.org/10.5194/isprs-archives-xlviii-m-2-2023-995-2023
| 2,023
|
CC-BY
|
isprs/f418b19b_a32e_4bee_9189_98eee3ff6b2e.md
|
Comparing Independent Component Analysis with Principle Component Analysis in Detecting Alterations of Porphyry Copper Deposit (Case Study: Ardestan Area, Central Iran)
[PERSON]*, [PERSON], [PERSON]
Dept. of Geomatics Engineering, Shahid Rajaee Teacher Training University
([PERSON], [PERSON], [PERSON])@strttu.edu
###### Abstract
Image processing techniques in the transform domain are employed as analysis tools for enhancing the detection of mineral deposits. Decomposing the image into its important components increases the probability of mineral extraction. In this study, the performance of Principal Component Analysis (PCA) and Independent Component Analysis (ICA) has been evaluated for the visible and near-infrared (VNIR) and shortwave infrared (SWIR) subsystems of ASTER data. Ardestan is located in a part of the Central Iranian Volcanic Belt that hosts many well-known porphyry copper deposits. This research investigated the propylitic and argillic alteration zones and the outer mineralogy zone in part of the Ardestan region. The two approaches were applied to discriminate alteration zones from igneous bedrock using the major absorption features of indicator minerals from the alteration and mineralogy zones in the spectral range of the ASTER bands. Selected PC components (PC2, PC3 and PC6) were used to identify the pyrite, argillic and propylitic zones that stand out from the igneous bedrock in an RGB color composite image. According to the eigenvalues, components 2, 3 and 6 account for 4.62%, 0.9% and 0.09% of the total variance of the data for the Ardestan scene, respectively. For discriminating the alteration and mineralogy zones of the porphyry copper deposit from bedrock, the corresponding ICA independent components (IC2, IC3 and IC6) separate these zones more accurately than the noisier bands of PCA. The results of the ICA method also conform to the location of the lithological units of the Ardestan region.
Independent Component Analysis, ASTER, Alteration, Copper Deposit, Ardestan
## 1 Introduction
The porphyry copper deposit model is surrounded by multiple zones of mineralogy (the interior, middle and outer zones). Each of the mineralogy zones is associated with one of the alteration zones. The outer and middle mineralogy zones have propylitic and argillic-altered rocks, respectively ([PERSON] and [PERSON], 1989). Figure 1 displays the alteration zones of porphyry copper deposits. Different rock compositions cause variations of the outer propylitic zone around the world, but epidote, chlorite, and carbonate minerals are common constituents. The middle argillic zone can be indicated by the kaolinite, muscovite and alunite minerals ([PERSON] and [PERSON], 2006).
Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) data are expected to be useful in geological mapping and mineral resources exploration ([PERSON] et al., 2002; [PERSON] et al., 2016; [PERSON] and [PERSON], 2011; [PERSON] et al., 2015). This sensor offers multispectral coverage at high spatial resolution for geological applications due to the distinct and significant spectral features in shortwave infrared (SWIR) and thermal infrared (TIR) regions ([PERSON] et al., 2016; [PERSON] and [PERSON], 2006; [PERSON] and [PERSON], 2011; [PERSON] et al., 2015).
The results of studies on image processing techniques have illustrated that transforming the original image bands into another framework is useful in depicting target features, and that feature extraction in the transform domain can be clearer than in the spatial domain ([PERSON], 2011). Most transformations used in image processing decompose images into important local components, i.e. unlocking the basic image structure and finding the important components for dimensionality reduction; hence, the choice of the transformation is very important ([PERSON], 2011). Principal Component Analysis (PCA) can identify uncorrelated vector bases ([PERSON], 2011). PCA finds a least-squares linear transformation that follows the normalized orthogonal eigenvectors of the data covariance matrix. While PCA uses only second-order statistics, Independent Component Analysis (ICA) looks for components that are statistically independent rather than merely uncorrelated; thus, it requires statistics of orders higher
Figure 1: Alteration zones in a monzonite porphyry copper system model ([PERSON] and [PERSON], 1989; [PERSON], 1973)
than the second ([PERSON] et al., 2000). It can be concluded that Independent Component Analysis is an extension of the PCA technique.
Nowadays, advanced remote sensing systems use multispectral sensors and image processing in geological investigations. Studies on ASTER data over Sarcheshmeh and Meiduk showed that image processing techniques such as the PCA method give appropriate results in identifying iron oxides and vegetation in the VNIR subsystem, and the hydrothermal alteration mineral zones associated with porphyry copper deposits in the SWIR subsystem. The locations of new prospects can be identified with optimal results by image processing techniques in these areas ([PERSON] et al., 2010; [PERSON] and [PERSON], 2011). As another example, a study in southern Masule, Iran, using an ETM+ image showed that applying the ICA transformation guided the sampling program toward finding unknown lithology and dikes ([PERSON] et al., 2012).
Mapping the spatial distribution of propylitic and argillic zones and distinguishing them from bedrock is quite interesting for producing a surface section of the study area with the possibility of exploring the copper deposit. Accordingly, in this study, the PCA and ICA methods (ICA being an extension of PCA) are tested on ASTER data to investigate the surface signs of porphyry copper deposits in the northeast of Esfahan province, in the central part of Ardestan. Finally, the performance of the proposed method is verified in detecting the salient features in the region's bedrock.
## 2 Materials and Methods
### Study Area
Many well-known porphyry copper deposits have been discovered in the NW-SE trending Central Iranian Volcanic Belt in Iran ([PERSON] and [PERSON], 2011). The study area of Ardestan, located between 52° 08' 24.00" and 52° 17' 24.00" Easting, and 33° 12' 36.00" and 33° 20' 24.00" Northing, mostly consists of volcanic pyroclastic rocks of the Eocene and Oligocene epochs of the Paleogene period. The intrusions of diorite and monzonite located in the Dorojin mountain and Marbin village are the most important intrusions in the study area. Most outcrops belong to post-Eocene and pre-Eocene volcanic activity in the study area. These outcrops are calc-alkaline in composition and contain rhyolitic, dacitic, andesitic and basaltic andesite rocks. The geologic map of the region in Figure 2 illustrates the location of the study area, the rock units, and the general structural geology.
### Specification of ASTER Data
The value of ASTER data for enhanced geological mapping has been demonstrated in many studies ([PERSON] et al., 2002; [PERSON] et al., 2016; [PERSON] et al., 2010; [PERSON] and [PERSON], 2003). The Level 1B ASTER data used in this study are calibrated and resampled so as to correct for radiometric and geometric errors. This level of image data is usually generated in the UTM projection system in swath orientation using bi-cubic convolution resampling ([PERSON] et al., 2002).
The ASTER image of the study area had already been pre-processed and geo-referenced to the UTM zone 39N projection with the WGS-84 datum. The crosstalk effect causes deviations from the correct reflectance, producing false absorption features and distorting diagnostic signatures in the band 4, 5 and 9 detectors of the shortwave infrared subsystem. As a result, image processing can lead to misidentified minerals ([PERSON] and [PERSON], 2012). In the current research, crosstalk correction was applied to the ASTER SWIR bands, and the Internal Average Relative Reflectance (IARR) method was used for atmospheric correction of the ASTER data. This method is a preferred calibration technique recommended for mineralogical mapping because it does not require prior knowledge of samples collected from the field ([PERSON] and [PERSON], 2011).
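The IARR correction described above amounts to dividing each pixel spectrum by the scene-average spectrum, so that the common atmospheric and illumination component cancels out. A minimal NumPy sketch of this idea (the function name and the synthetic cube are illustrative, not from the paper):

```python
import numpy as np

def iarr_correction(cube):
    """Internal Average Relative Reflectance (IARR) correction.

    Divides each pixel spectrum by the scene-average spectrum,
    suppressing the atmospheric/illumination component shared by
    all pixels.

    cube : ndarray of shape (rows, cols, bands), radiance or DN values.
    Returns an array of the same shape holding relative reflectance.
    """
    # Scene-average spectrum: mean over all pixels, one value per band.
    mean_spectrum = cube.reshape(-1, cube.shape[-1]).mean(axis=0)
    # Guard against division by zero in degenerate bands.
    mean_spectrum = np.where(mean_spectrum == 0, 1.0, mean_spectrum)
    return cube / mean_spectrum

# Tiny synthetic example: 1 line x 2 pixels x 3 bands.
cube = np.array([[[2.0, 4.0, 8.0],
                  [4.0, 8.0, 16.0]]])
corrected = iarr_correction(cube)
```

Because the mean spectrum here is [3, 6, 12], the first pixel's spectrum becomes a flat 2/3 in every band, i.e. its shape relative to the scene average is preserved while absolute scaling is removed.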
### Principal Component Analysis (PCA)
A linear generative model is used in Principal Component Analysis (PCA) to identify uncorrelated vector bases. These bases are applied to compress and optimize image data ([PERSON] and [PERSON], 2011; [PERSON], 2011). This procedure is based on the assumption that the data structure can be described by a multi-dimensional normal distribution. Although PCA can
Figure 2: Geological Map of Igneous Rocks in Ardestan area (with RGB color composite of 1, 2 and 3 ASTER bands).
identify uncorrelated vector bases, the basis vectors are statistically dependent ([PERSON], 2014). The statistical analysis of PCA is a useful method for lithological study and for mapping the spatial distribution of specific materials based on their spectral properties in the VNIR and SWIR bands ([PERSON] and [PERSON], 2011).
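The PCA workflow used here — covariance of the reflective bands, eigendecomposition, and percent variance per component as reported in Table 1 — can be sketched as follows (a generic implementation under these assumptions, not the authors' code):

```python
import numpy as np

def pca_bands(cube):
    """PCA of a multispectral cube via eigendecomposition of the
    band covariance matrix.

    cube : ndarray (rows, cols, bands)
    Returns (pc_cube, eigenvectors, explained_pct), with components
    sorted by decreasing eigenvalue; explained_pct corresponds to the
    "Eigenvalues (%)" column of a table like Table 1.
    """
    r, c, b = cube.shape
    X = cube.reshape(-1, b).astype(float)
    X -= X.mean(axis=0)                      # centre each band
    cov = np.cov(X, rowvar=False)            # (bands, bands) covariance
    eigvals, eigvecs = np.linalg.eigh(cov)   # symmetric matrix -> eigh
    order = np.argsort(eigvals)[::-1]        # sort descending
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]
    pcs = X @ eigvecs                        # project pixels onto PCs
    explained_pct = 100.0 * eigvals / eigvals.sum()
    return pcs.reshape(r, c, b), eigvecs, explained_pct

# Synthetic cube with two correlated bands and one independent band.
rng = np.random.default_rng(0)
base = rng.normal(size=(10, 10, 1))
cube = np.concatenate(
    [base, 2 * base + 0.01 * rng.normal(size=(10, 10, 1)),
     rng.normal(size=(10, 10, 1))], axis=2)
pcs, vecs, pct = pca_bands(cube)
```

On such data the first component captures most of the variance, mirroring how PC1 absorbs 93.72% of the variance in Table 1 while the alteration signal of interest lands in the lower-variance components.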
### Independent Component Analysis (ICA)
One of the criteria that distinguishes ICA from other transformation techniques is the assumption that the basis vectors, or equivalently the transform coefficients, are statistically independent. ICA can identify statistically independent basis vectors in a linear generative model ([PERSON] et al., 2001). This transform is based on the assumption of non-Gaussian independent sources, and it uses higher-order statistics to reveal the desired features. The ICA transformation can distinguish features of interest even when they occupy only a small portion of the pixels in the image and may be buried in the noisy bands of the PC rotation during data whitening ([PERSON] and [PERSON], 2000; [PERSON], 2011).
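The paper does not name a specific ICA algorithm; FastICA is a common choice and scikit-learn provides it. A sketch of extracting IC images from a band cube under that assumption (function name and parameters are ours):

```python
import numpy as np
from sklearn.decomposition import FastICA

def ica_bands(cube, n_components, seed=0):
    """Extract statistically independent component images from a
    multispectral cube with FastICA.

    cube : ndarray (rows, cols, bands)
    Returns an (rows, cols, n_components) cube of IC images.
    """
    r, c, b = cube.shape
    X = cube.reshape(-1, b).astype(float)    # pixels as samples
    ica = FastICA(n_components=n_components,
                  max_iter=1000, random_state=seed)
    S = ica.fit_transform(X)                 # (pixels, n_components)
    return S.reshape(r, c, n_components)

# Two independent (uniform, hence non-Gaussian) sources mixed into
# three synthetic "bands".
rng = np.random.default_rng(0)
s = rng.uniform(-1.0, 1.0, size=(1600, 2))
A = np.array([[1.0, 0.5], [0.5, 1.0], [0.2, 0.8]])  # mixing matrix
cube = (s @ A.T).reshape(40, 40, 3)
ics = ica_bands(cube, n_components=2)
```

Non-Gaussian sources are required here because ICA's higher-order statistics are uninformative for Gaussian data, which is exactly the limitation of the second-order PCA described above.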
## 3 Statistical Analysis Methods
Some of the porphyry copper deposits related to tectonic regions are formed in monzonite and porphyry granodiorite rocks. Alteration zones of potassic, propylitic, argillic and sericitic types are found in monzonite rock ([PERSON] and [PERSON], 1989). In this study, the argillic and propylitic alteration zones are examined. The PC image with the greatest difference in eigenvector loadings between the diagnostic reflective and absorptive bands of a mineral is selected as the best PC image. A positive loading in the reflective band of the mineral causes the mineral-bearing pixels to appear bright, while those pixels will be dark if the loading for the enhanced target mineral is negative. Accordingly, the eigenvector of each PC band helps to identify the best PC image, i.e. the one holding the spectral information associated with the target mineral.
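The PC-selection rule above — pick the component with the largest loading contrast between the mineral's diagnostic reflective and absorptive bands, then use the loading sign to decide whether mineral pixels appear bright or dark — can be sketched as follows (our reading of the paragraph, not code from the paper):

```python
import numpy as np

def select_pc(eigvecs, reflective_band, absorptive_band):
    """Pick the PC whose loadings differ most between a mineral's
    diagnostic reflective and absorptive bands (0-indexed).

    eigvecs : (bands, n_pc) eigenvector matrix, one PC per column.
    Returns (pc_index, invert); invert=True means the PC should be
    negated so that mineral-bearing pixels appear bright.
    """
    contrast = eigvecs[reflective_band, :] - eigvecs[absorptive_band, :]
    pc = int(np.argmax(np.abs(contrast)))
    # A negative loading on the reflective band renders mineral pixels
    # dark; flipping the PC sign makes them bright instead.
    invert = eigvecs[reflective_band, pc] < 0
    return pc, bool(invert)

# Toy 2-band, 2-PC example: PC index 1 has the larger |contrast|
# (|-0.1 - 0.8| = 0.9) and a negative reflective-band loading.
eigvecs = np.array([[0.9, -0.1],
                    [0.1,  0.8]])
choice = select_pc(eigvecs, reflective_band=0, absorptive_band=1)
```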
PCA produced the PC images together with a table of statistical factors. This table contains the covariance matrix over all nine reflective bands of the ASTER image, with the eigenvectors and eigenvalues of the image bands obtained from PCA. Table 1 presents the statistical information used to investigate the PC image components. Selecting the appropriate data is carried out based on the laboratory spectra of muscovite, kaolinite, alunite, calcite, epidote, chlorite and pyrite resampled to the ASTER VNIR and SWIR bands. In this study, the laboratory spectra of pyrite were checked, since it is the most important mineral in the outer and middle mineralogy zones of copper deposits ([PERSON] and [PERSON], 1989).
In Figure 3 it is evident that the low and high reflectance of the pyrite mineral occur in the VNIR and SWIR regions respectively, so pyrite can be identified as bright pixels in PC2 (Figure 4).
Figure 4: PC2 image for the Ardestan scene; ellipsoidal polygons separate bright pixels as probable locations of the pyrite mineral.
Figure 3: Laboratory spectra of pyrite resampled to ASTER bandpasses. Pyrite has 1.165 and 1.10 μm absorption features (produced by the author, after [PERSON] et al., 2007).
| Eigenvector | Band 1 | Band 2 | Band 3 | Band 4 | Band 5 | Band 6 | Band 7 | Band 8 | Band 9 | Eigenvalues (%) |
|-------------|--------|--------|--------|--------|--------|--------|--------|--------|--------|-----------------|
| Band1 | -0.25 | -0.24 | -0.24 | -0.37 | -0.36 | -0.38 | -0.36 | -0.38 | -0.36 | 93.72 |
| Band2 | -0.55 | -0.50 | -0.52 | 0.09 | 0.17 | 0.18 | 0.16 | 0.22 | 0.21 | 4.62 |
| Band3 | 0.35 | 0.26 | -0.51 | -0.60 | -0.06 | -0.10 | 0.12 | 0.32 | 0.24 | 0.9 |
| Band4 | 0.35 | 0.12 | -0.56 | 0.31 | 0.25 | 0.28 | -0.16 | -0.53 | -0.09 | 0.32 |
| Band5 | -0.24 | 0.07 | 0.24 | -0.42 | 0.26 | 0.20 | -0.20 | -0.45 | 0.59 | 0.2 |
| Band6 | 0.16 | -0.32 | 0.15 | -0.38 | 0.28 | 0.15 | 0.62 | -0.25 | -0.39 | 0.09 |
| Band7 | 0.52 | -0.66 | 0.11 | 0.02 | -0.34 | 0.18 | -0.18 | 0.02 | 0.31 | 0.07 |
| Band8 | 0.10 | -0.09 | -0.06 | 0.25 | 0.07 | -0.74 | 0.37 | -0.28 | 0.39 | 0.06 |
| Band9 | -0.18 | 0.23 | -0.07 | 0.07 | -0.71 | 0.32 | 0.46 | -0.27 | 0.14 | 0.04 |

Table 1: PCA eigenvector matrix of the 9 bands of ASTER data in the VNIR and SWIR range for the Ardestan scene
ICA produced IC images with statistical factors similar to those of the PCA method. However, the independent components help to distinguish the argillic and propylitic zones from the igneous bedrock. Figure 10 is an RGB color composite of the IC2, IC3, and IC6 images.
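Building an RGB color composite such as the IC2/IC3/IC6 image amounts to stretching three component images to a displayable range and stacking them. A sketch with a percentile stretch (the paper does not specify a stretch, so the 2% clip is an assumption):

```python
import numpy as np

def rgb_composite(comp_r, comp_g, comp_b, clip_pct=2.0):
    """Stack three component images into an 8-bit RGB composite,
    stretching each channel between its lower and upper percentiles
    (a standard display stretch).
    """
    def stretch(img):
        lo, hi = np.percentile(img, [clip_pct, 100.0 - clip_pct])
        # Scale to [0, 1], clipping the extreme tails, then to uint8.
        out = np.clip((img - lo) / max(hi - lo, 1e-12), 0.0, 1.0)
        return (out * 255).astype(np.uint8)
    return np.dstack([stretch(comp_r), stretch(comp_g), stretch(comp_b)])

# Example: a smooth gradient used for all three channels.
img = np.linspace(0.0, 1.0, 100).reshape(10, 10)
rgb = rgb_composite(img, img, img)
```

Clipping the tails keeps a few extreme pixels (e.g. residual noise in an IC image) from compressing the contrast of the whole alteration-zone display.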
A geological report of the region made by the ZAPCE company was used to validate the results of this study. According to that report, the promising areas detected by means of the remote sensing technique match precisely the locations of the mineralized zones of the region (Figure 11-B).
## 4 Assessment and Conclusion
This study used ASTER data with the aim of detecting alterations that help mineral exploration related to porphyry copper deposits. Investigating the ASTER VNIR and SWIR bands and image processing techniques in the transform domain to separate alteration minerals from bedrock, as well as analyzing the formation of the alteration zones, can be an efficient tool for more accurate identification of mineral positions.
The principal component analysis and independent component analysis transformation techniques carried out for alteration-zone mapping show appropriate results in studying the spectral features of minerals related to the porphyry copper deposit in the Ardestan region. Analysis of the laboratory spectral signatures of the minerals was used to identify the major reflective and absorptive bands and to select the PC band from the eigenvectors of the PCA method. PC2, PC3, and PC6 were found useful for detecting the pyrite, argillic and propylitic alterations, respectively. The ICA method has eigenvector statistics similar to those of the PCA method. Its components are independent; thus it can be regarded as an applicable and efficient method for the extraction of mineral alteration from igneous lithological units. The RGB color composite of IC2, IC3 and IC6 detected the pyrite, argillic and propylitic zones against the andesitic volcanic bedrock in the study area. In addition to developing a more detailed extraction procedure for linear structures, it is recommended that a study be performed on the role of topography in ICA to discover probable connections between the independent components. This method may contribute significantly to separating the outer mineralogy zones from the minerals of the alteration zones, which can be useful from an economic point of view in identifying porphyry copper deposits.
Figure 10: A) RGB color composite of the IC2, IC3 and IC6 images. The identified sites are related to argillic and propylitic minerals, the major minerals in the mineralogy zones of copper deposits. B) Argillic alteration is revealed in the marginal bedrock. The propylitic alterations are surrounded by monzodiorite in the sedimentary bedrock. There are faults in this area, causing lava to appear at the surface. The pyrite in the diorite-monzodiorite intrusive rock is surrounded by argillic alteration, so copper is likely to be enriched in this position.
Figure 9: RGB color composite of the PC2, PC3 and PC6 images, showing how the argillic and propylitic alteration zones are distributed in the area. The identified sites are related to argillic and propylitic minerals, the major minerals in the copper deposit zones.
## Acknowledgements
This study was conducted as part of an M.Sc. thesis at the Department of Geomatics Engineering, Faculty of Civil Engineering, Shahid Rajaee Teacher Training University. The authors are thankful to Mr. [PERSON] of the ZAPC company and Mr. [PERSON] of the Geological Survey and Mineral Exploration of Iran (GSI) for providing the geological data of the study area required for this investigation.
## References
[PERSON], [PERSON], [PERSON], 2002. ASTER User handbook, version 2. Jet propulsion laboratory 4800, 135.
[PERSON], [PERSON], [PERSON], 2016. ASTER spectral analysis for alteration minerals associated with gold mineralization. Ore Geology Reviews 75, 239-251.
[PERSON], [PERSON], [PERSON], [PERSON], [PERSON], 1993. The US Geological Survey, Digital Spectral Library: Version 1 (0.2 to 3.0 um). Geological Survey (US).
[PERSON], [PERSON], [PERSON], 2014. Data fusion in remote sensing, description and methods. Tehran university, Tehran, pp 51-68 (In Persian).
[PERSON] [PERSON], [PERSON] [PERSON], [PERSON] [PERSON], 2012. Assessing the performance of independent component analysis in remote sensing data processing. Journal of the Indian Society of Remote Sensing 40, 577-588.
[PERSON], [PERSON], [PERSON], [PERSON], [PERSON], [PERSON], 2010. Characterization of ASTER data for mineral exploration.
[PERSON], [PERSON], [PERSON] [PERSON], [PERSON], [PERSON], 2001. Independent Component Analysis.
[PERSON], [PERSON], [PERSON], [PERSON], 2000. Independent component analysis: algorithms and applications. Neural networks 13, 411-430.
[PERSON], [PERSON], [PERSON] [PERSON], 1989. Applied economic geology. Javid Publication, Mashhad, Iran, pp. 181-190 (In Persian).
[PERSON], [PERSON], [PERSON], [PERSON], [PERSON] [PERSON], 2000. A unifying information-theoretic framework for independent component analysis. Computers & Mathematics with Applications, 39(11), pp. 1-21.
[PERSON], [PERSON], [PERSON], L.C., 2006. Regional mapping of phyllic- and argillic-altered rocks in the Zagros magmatic arc, Iran, using Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) data and logical operator algorithms. Geosphere, 2(3), pp. 161-186.
[PERSON], [PERSON], [PERSON], [PERSON], 2011. Identification of hydrothermal alteration minerals for exploring of porphyry copper deposit using ASTER data, SE Iran. Journal of Asian Earth Sciences, 42(6), pp. 1309-1323.
[PERSON], [PERSON], [PERSON], [PERSON], 2012. The application of ASTER remote sensing data to porphyry copper and epithermal gold deposits. Ore Geology Reviews, 44, pp. 1-9.
[PERSON], [PERSON], [PERSON], [PERSON], [PERSON], 2015. Chromitite Prospecting Using Landsat TM and Aster Remote Sensing Data. ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences, 2(2), pp. 99.
[PERSON], Mars, J.C., 2003. Lithologic mapping in the Mountain Pass, California area using Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) data. Remote Sensing of Environment, 84(3), pp. 350-366.
[PERSON], 1973. The tops and bottoms of porphyry copper deposits. Economic Geology, 68(6), pp. 799-815.
[PERSON], 2011. Image fusion: algorithms and applications. Academic Press, pp. 85-116.
Figure 11: Geological validation of the remote sensing results in the localization of mineral zones. A) Field observation path on the RGB color composite of the IC2, IC3 and IC6 image. B) Location of the alteration zones revealed by the PCA method.
ISPRS
COMPARING INDEPENDENT COMPONENT ANALYSIS WITH PRINCIPLE COMPONENT ANALYSIS IN DETECTING ALTERATIONS OF PORPHYRY COPPER DEPOSIT (CASE STUDY: ARDESTAN AREA, CENTRAL IRAN)
S. Mahmoudishadi, A. Malian, F. Hosseinali
https://doi.org/10.5194/isprs-archives-xlii-4-w4-161-2017
2017
CC-BY