Consequently, the model's predictions are expected to be most accurate for aerial lidar point clouds with densities and colorimetries similar to those of the original data.
**_Data preprocessing_**: Point clouds were preprocessed for training with point subsampling, filtering of artefact points, on-the-fly creation of colorimetric features, and normalization of features and coordinates.
For inference, the same preprocessing should be used (refer to the inference configuration and to the code repository).
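As an illustrative sketch of the kind of preprocessing described above (not the actual Myria3D code; the voxel size, normalization scheme, and function names are assumptions for illustration):

```python
import numpy as np

def grid_subsample(points, voxel_size=1.0):
    """Keep one point per voxel: a simple stand-in for point subsampling."""
    voxels = np.floor(points[:, :3] / voxel_size).astype(np.int64)
    _, keep = np.unique(voxels, axis=0, return_index=True)
    return points[np.sort(keep)]

def normalize_cloud(points):
    """Center XY, shift Z so the lowest point sits at 0, then rescale."""
    xyz = points[:, :3].copy()
    xyz[:, :2] -= xyz[:, :2].mean(axis=0)   # center horizontally
    xyz[:, 2] -= xyz[:, 2].min()            # lowest point at z = 0
    return xyz / max(xyz.max(), 1e-6)       # coarse rescale into ~[-1, 1]

# Fake 50 m x 50 m patch with ~10 m of relief, for demonstration only.
cloud = np.random.rand(1000, 3) * [50.0, 50.0, 10.0]
sub = grid_subsample(cloud, voxel_size=1.0)
norm = normalize_cloud(sub)
```

The real pipeline is defined by the inference configuration mentioned above; this sketch only conveys the shape of the steps (subsample, then normalize coordinates).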
**_Inference library: Myria3D_**: The model was trained with an open-source deep learning code repository developed in-house, and inference is only supported through this library.
Myria3D comes with a Dockerfile as well as detailed documentation for inference.
Patched inference from large point clouds (e.g. 1 x 1 km Lidar HD tiles) is supported, with or without overlapping sliding windows (non-overlapping by default).
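The sliding-window scheme can be sketched as computing window origins over the tile extent; the 50 m window size and the function name are assumptions for illustration, not Myria3D's actual settings:

```python
import numpy as np

def window_origins(extent, window=50.0, overlap=0.0):
    """Lower-left corners of square inference windows covering a tile.

    extent: (xmin, ymin, xmax, ymax) in meters; overlap in [0, 1) is the
    fraction by which consecutive windows overlap (0 = non-overlapping,
    matching the default behavior described above).
    """
    xmin, ymin, xmax, ymax = extent
    step = window * (1.0 - overlap)
    xs = np.arange(xmin, xmax, step)
    ys = np.arange(ymin, ymax, step)
    return [(x, y) for x in xs for y in ys]

# A 1 km x 1 km Lidar HD tile covered by 50 m windows:
tiles = window_origins((0.0, 0.0, 1000.0, 1000.0), window=50.0, overlap=0.0)
# 20 x 20 = 400 non-overlapping windows
```

With overlap enabled, each point falls in several windows, which allows smoothing predictions at window borders at the cost of more inference passes.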
The original point cloud is augmented with several dimensions: a PredictedClassification dimension, an entropy dimension, and (optionally) class probability dimensions (e.g. building, ground...).
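The entropy dimension quantifies per-point prediction uncertainty. As an illustrative sketch (the exact formula Myria3D uses is not stated here; Shannon entropy over the class probabilities is the common choice assumed below):

```python
import numpy as np

def prediction_entropy(probs, eps=1e-12):
    """Shannon entropy of per-point class probabilities.

    probs: array of shape (n_points, n_classes), rows summing to 1.
    Low entropy = confident prediction; high entropy = uncertain.
    """
    p = np.clip(probs, eps, 1.0)            # avoid log(0)
    return -(p * np.log(p)).sum(axis=1)

probs = np.array([
    [1.0, 0.0, 0.0],       # confident point -> entropy ~ 0
    [1/3, 1/3, 1/3],       # maximally uncertain -> entropy = ln(3)
])
ent = prediction_entropy(probs)
```

Points with high entropy are natural candidates for manual review when verifying the model's output on new data.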
Refer to Myria3D's documentation for custom settings.
**_Multi-domain model_**: The FRACTAL dataset used for training covers 5 spatial domains from 5 southern regions of metropolitan France.
The 250 km² of data in FRACTAL were sampled from an original 17440 km² area, and cover a wide diversity of landscapes and scenes.
While large and diverse, these data cover only a fraction of the French territory, and the model should be used with adequate verification when applied to new domains.
This being said, while domain shifts are frequent for aerial imagery due to differing acquisition conditions and downstream data processing, aerial lidar point clouds are expected to have more consistent characteristics (density, range of acquisition angles, etc.) across spatial domains.
## Bias, Risks, Limitations and Recommendations
---