---
license: etalab-2.0
pipeline_tag: image-segmentation
tags:
- semantic segmentation
- pytorch
- SSL
library_name: pytorch
---

We introduce **MAESTRO**, a tailored adaptation of the Masked Autoencoder (MAE) framework that effectively orchestrates the use of multimodal, multitemporal, and multispectral Earth Observation (EO) data. Evaluated on four EO datasets, MAESTRO sets a new state of the art on tasks that strongly rely on multitemporal dynamics, while remaining highly competitive on tasks dominated by a single monotemporal modality.
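As background, the MAE recipe masks a large fraction of patch tokens and feeds only the visible ones to the encoder. A minimal PyTorch sketch of that masking step (our illustration of the general MAE idea, not the repository's code; shapes and the 75% ratio are assumptions):

```python
import torch

def random_masking(tokens, mask_ratio=0.75):
    """Keep a random subset of patch tokens, MAE-style (illustrative sketch).

    tokens: (batch, num_patches, dim)
    returns: the visible tokens and the indices that were kept
    """
    b, n, d = tokens.shape
    n_keep = int(n * (1 - mask_ratio))
    noise = torch.rand(b, n)                 # one random score per patch
    keep = noise.argsort(dim=1)[:, :n_keep]  # patches with the lowest scores survive
    visible = torch.gather(tokens, 1, keep.unsqueeze(-1).expand(-1, -1, d))
    return visible, keep

tokens = torch.randn(2, 196, 768)            # e.g. a 14x14 ViT patch grid
visible, keep = random_masking(tokens)       # visible: (2, 49, 768)
```

The decoder then reconstructs the masked patches; MAESTRO's contribution lies in how the multimodal/multitemporal tokens are fused and how the reconstruction targets are normalized.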

Our contributions are as follows:
- **Extensive benchmarking of multimodal and multitemporal SSL:** an evaluation of the impact of various fusion strategies for multimodal and multitemporal SSL.
- **Patch-group-wise normalization:** a novel normalization scheme that normalizes reconstruction targets patch-wise within groups of highly correlated spectral bands.
- **MAESTRO:** a novel adaptation of the MAE that combines optimized fusion strategies with our tailored patch-group-wise normalization.
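A minimal sketch of the patch-group-wise normalization idea (our illustration, not the repository's implementation; the band grouping and tensor layout are assumptions):

```python
import torch

def patch_group_normalize(targets, band_groups, eps=1e-6):
    """Normalize MAE reconstruction targets patch-wise, within groups of
    correlated spectral bands (illustrative sketch).

    targets: (num_patches, num_bands, patch_h, patch_w)
    band_groups: lists of band indices, e.g. [[0, 1, 2], [3]]
    """
    out = targets.clone()
    for group in band_groups:
        grp = targets[:, group]                       # (N, |group|, p, p)
        # one mean/std per patch, shared across the bands of the group
        mean = grp.mean(dim=(1, 2, 3), keepdim=True)  # (N, 1, 1, 1)
        std = grp.std(dim=(1, 2, 3), keepdim=True)
        out[:, group] = (grp - mean) / (std + eps)
    return out

# Example: RGB bands (0-2) are highly correlated, NIR (3) is its own group.
targets = torch.randn(196, 4, 16, 16)
normed = patch_group_normalize(targets, band_groups=[[0, 1, 2], [3]])
```

Compared with per-band or per-image normalization, sharing the statistics across a group of correlated bands preserves the relative contrast between those bands within each patch.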

<div style="position: relative; text-align: center;">
<img src="./media/Maestro_Overview.png" alt="Overview of the MAESTRO architecture." style="width: 100%; display: block; margin: 0 auto;"/>
</div>

💻 **Code repository:** https://github.com/IGNF/MAESTRO<br>
📃 **Paper:** https://arxiv.org/abs/2508.10894

<hr>

## Pre-training Dataset

<hr>

## Cross-dataset Evaluation

Benchmark results on four datasets:

| Model          | Pre-training dataset | TreeSatAI-TS | PASTIS-HD | FLAIR#2  | FLAIR-HUB |
|----------------|----------------------|--------------|-----------|----------|-----------|
| MAESTRO (ours) | FLAIR-HUB            | **79.6**     | **68.0**  | -        | -         |
| MAESTRO (ours) | S2-NAIP urban        | 78.8         | 67.4      | 62.6     | 64.6      |
| DINO-v2        | LVD-142M             | 76.7         | 64.4      | **64.2** | 66.0      |
| DINO-v2 sat.   | Maxar Vivid2         | 76.3         | 64.0      | 63.5     | **66.0**  |
| DOFA           | DOFA MM              | 76.0         | 62.9      | 62.3     | 65.1      |
| CROMA          | SSL4EO               | 70.5         | 65.0      | 39.0     | 44.3      |
| Prithvi-EO-2.0 | HLS                  | 75.6         | 66.2      | 41.8     | 44.9      |
| SatMAE         | fMoW RGB+S           | 76.9         | 66.6      | 42.5     | 45.0      |

<hr>

## Usage
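Detailed usage lives in the code repository; as a placeholder sketch of the inference call pattern only (the model class below is a stand-in, not the actual MAESTRO API — see https://github.com/IGNF/MAESTRO for the real model code and checkpoints):

```python
import torch
import torch.nn as nn

# Stand-in for the real MAESTRO segmentation model; it only mirrors the
# expected input/output shapes of an image-segmentation network.
class DummySegmenter(nn.Module):
    def __init__(self, in_bands=4, num_classes=19):
        super().__init__()
        self.head = nn.Conv2d(in_bands, num_classes, kernel_size=1)

    def forward(self, x):
        return self.head(x)  # (B, num_classes, H, W) logits

model = DummySegmenter().eval()
image = torch.randn(1, 4, 512, 512)  # one 4-band tile (assumed layout)
with torch.no_grad():
    logits = model(image)
pred = logits.argmax(dim=1)          # (1, 512, 512) per-pixel class map
```

Band count, class count, and tile size above are placeholders; substitute the values matching the checkpoint and dataset you fine-tune on.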

<hr>