Update README.md
---
license: cc-by-nc-4.0
---

This repository contains the trained models of the publication:

> Portalés-Julià, E., Mateo-García, G., & Gómez-Chova, L. **Understanding Flood Detection Models Across Sentinel-1 and Sentinel-2 Modalities and Benchmark Datasets.** Available at SSRN: https://ssrn.com/abstract=5118486 or http://dx.doi.org/10.2139/ssrn.5118486

We include the trained models:
* **sm_unet_s2** Model trained on the Sentinel-2 L1C bands `["B08", "B04", "B03", "B02"]` from the S1S2Water and WorldFloods datasets.
* **sm_unet_s1** Model trained on Sentinel-1 GRD data (`VV`, `VH` channels) from the S1S2Water and Kuro Siwo datasets.
* **mm_unet_s1s2** Dual-stream model with a modality token, trained on the S1S2Water (Sentinel-1 GRD and Sentinel-2 L1C), WorldFloods (Sentinel-2 L1C) and Kuro Siwo (Sentinel-1 GRD) datasets.
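
As an illustration of the input layouts the bullets above describe, here is a minimal sketch of assembling the channels-first input for the single-modality models (the function and variable names are hypothetical, not part of the released package; only the band/channel orders come from this README):

```python
import numpy as np

# Band/channel orders taken from the model list above.
S2_BANDS = ["B08", "B04", "B03", "B02"]   # Sentinel-2 L1C: NIR, Red, Green, Blue
S1_CHANNELS = ["VV", "VH"]                # Sentinel-1 GRD polarisations

def stack_input(arrays: dict, order: list) -> np.ndarray:
    """Stack per-band 2-D arrays into a (C, H, W) array in the given order,
    the channels-first layout typically fed to a segmentation U-Net."""
    return np.stack([arrays[name] for name in order], axis=0)
```

For example, `stack_input(bands, S2_BANDS)` on a dict of 2-D reflectance arrays yields the 4-channel input for `sm_unet_s2`.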
To run any of these models on Sentinel-1 and/or Sentinel-2 data, see the tutorial [*Run model*](https://github.com/kipoju/udl4fl/blob/main/notebooks/run_in_gee_image.ipynb) in the [udl4fl](https://github.com/kipoju/udl4fl) package.
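
The tutorial walks through the full inference pipeline; as a rough sketch of the usual final step for binary segmentation models (assuming, as is common for flood-mapping U-Nets but not stated in this README, that the models emit per-pixel logits):

```python
import numpy as np

def logits_to_mask(logits: np.ndarray, threshold: float = 0.5) -> np.ndarray:
    """Convert per-pixel logits of shape (H, W) into a boolean flood mask.
    A probability threshold of 0.5 is equivalent to logits > 0."""
    probs = 1.0 / (1.0 + np.exp(-logits))  # sigmoid
    return probs >= threshold
```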
<!-- <img src="https://raw.githubusercontent.com/IPL-UV/cloudsen12_models/main/notebooks/example_flood_dubai_2024.png"> -->

If you find this work useful, please cite:

```
@article{portales-julia_global_2023,
  title = {Global flood extent segmentation in optical satellite images},
  volume = {13},
  issn = {2045-2322},
  doi = {10.1038/s41598-023-47595-7},
  number = {1},
  urldate = {2023-11-30},
  journal = {Scientific Reports},
  author = {Portalés-Julià, Enrique and Mateo-García, Gonzalo and Purcell, Cormac and Gómez-Chova, Luis},
  month = nov,
  year = {2023},
  pages = {20316},
}
```

All pre-trained models in this repository are released under a [Creative Commons non-commercial licence](https://creativecommons.org/licenses/by-nc/4.0/legalcode.txt).

The `udl4fl` Python package is published under a [GNU Lesser GPL v3 licence](https://www.gnu.org/licenses/lgpl-3.0.en.html).

## Acknowledgments