Update README.md
README.md (changed):
````diff
@@ -20,16 +20,16 @@ datasets:
 
 ⚖️ [**Model weights**](./MAESTRO_FLAIR-HUB_base/checkpoints/pretrain-epoch=99.ckpt) <br>
 ⚙️ [**Model configuration**](./MAESTRO_FLAIR-HUB_base/.hydra/config_resolved.yaml) <br>
-📂 [**Dataset splits**](
+📂 [**Dataset splits**](dataset_splits) <br>
 
 ## Abstract
 
-**MAESTRO** is a tailored adaptation of the Masked Autoencoder (MAE)
+**MAESTRO** is a tailored adaptation of the Masked Autoencoder (MAE) that effectively orchestrates the use of multimodal, multitemporal, and multispectral Earth Observation (EO) data. Evaluated on four EO datasets, MAESTRO sets a new state-of-the-art on tasks that strongly rely on multitemporal dynamics, while remaining competitive on tasks dominated by a single monotemporal modality.
 
 MAESTRO's contributions are as follows:
 - **Extensive benchmarking of multimodal and multitemporal SSL:** Impact evaluation of various fusion strategies for multimodal and multitemporal SSL.
 - **Patch-group-wise normalization:** Novel normalization scheme that normalizes reconstruction targets patch-wise within groups of highly correlated spectral bands.
-- **MAESTRO:** Novel adaptation of the MAE that combines optimized fusion strategies with
+- **MAESTRO:** Novel adaptation of the MAE that combines optimized fusion strategies with patch-group-wise normalization.
 
 <div style="position: relative; text-align: center;">
 <img src="./media/Maestro_Overview.png" style="width: 100%; display: block; margin: 0 auto;"/>
@@ -59,7 +59,7 @@ We retain six distinct modalities:
 Below is the reconstruction loss during pre-training on the combined training and validation ensembles, using patch-group-wise normalization and modality-weighted averaging proportional to token counts.
 
 <div style="position: relative; text-align: center;">
-<img src="./media/
+<img src="./media/Reconstruction_Loss.png" style="width: 100%; display: block; margin: 0 auto;"/>
 </div>
 
 <hr>
@@ -88,7 +88,7 @@ For optimal fine-tuning results with this model:
 - Patch size: 2
 - Channels: B02, B03, B04, B05, B06, B07, B08, B8A, B11, B12
 - Use fixed cross-dataset grids for positional encodings proportional to ground sampling distance: `grid_pos_enc` ≈ 1.6 * `crop_meters`
-
+
 Note that modality names must match between pre-training and fine-tuning.
 
 Below are cross-dataset evaluation results obtained with these guidelines on TreeSatAI-TS and PASTIS-HD.
@@ -110,7 +110,7 @@ Below are cross-dataset evaluation results obtained with these guidelines on Tre
 ## 🚀 Getting started
 
 Prerequisites:
-- Fetch [Dataset splits](
+- Fetch [Dataset splits](dataset_splits) and move them to each dataset directory
 - Fetch [model weights](./MAESTRO_FLAIR-HUB_base/checkpoints/pretrain-epoch=99.ckpt) and move them into `/path/to/experiments/MAESTRO_FLAIR-HUB_base/checkpoints/`
 - Fetch [model configuration](./MAESTRO_FLAIR-HUB_base/.hydra/config_resolved.yaml) and move it into `/path/to/experiments/MAESTRO_FLAIR-HUB_base/.hydra/`
 
@@ -131,7 +131,7 @@ poetry run python main.py \
 model.model=mae model.model_size=medium \
 model.fusion_mode=group model.inter_depth=3 \
 opt_pretrain.epochs=100 opt_probe.epochs=0 opt_finetune.epochs=0 \
-opt_pretrain.batch_size=
+opt_pretrain.batch_size=9 trainer.num_nodes=4 \
 datasets.name_dataset=flair \
 datasets.flair.filter_inputs=[aerial,dem,spot,s2,s1_asc,s1_des] \
 datasets.flair.crop_meters=102.4 datasets.flair.grid_pos_enc=160 \
@@ -147,13 +147,13 @@ poetry run python main.py \
 
 Fine-tuning on TreeSatAI-TS:
 ```bash
-# batch size
+# batch size 24 on 1 node with 4 GPUs per node
 # load pre-trained model "MAESTRO_FLAIR-HUB_base"
 poetry run python main.py \
 model.model=mae model.model_size=medium \
 model.fusion_mode=group model.inter_depth=3 \
 opt_pretrain.epochs=0 opt_probe.epochs=10 opt_finetune.epochs=50 \
-opt_probe.batch_size=
+opt_probe.batch_size=24 opt_finetune.batch_size=24 trainer.num_nodes=1 \
 opt_finetune.monitor=treesat_mlc_thresh/weighted_f1_val \
 datasets.name_dataset=treesatai_ts \
 datasets.treesatai_ts.filter_inputs=[aerial,s2,s1_asc,s1_des] \
@@ -169,13 +169,13 @@ poetry run python main.py \
 
 Fine-tuning on PASTIS-HD:
 ```bash
-# batch size
+# batch size 12 on 1 node with 4 GPUs per node
 # load pre-trained model "MAESTRO_FLAIR-HUB_base"
 poetry run python main.py \
 model.model=mae model.model_size=medium \
 model.fusion_mode=group model.inter_depth=3 \
 opt_pretrain.epochs=0 opt_probe.epochs=10 opt_finetune.epochs=50 \
-opt_probe.batch_size=
+opt_probe.batch_size=12 opt_finetune.batch_size=12 trainer.num_nodes=1 \
 opt_finetune.monitor=pastis_seg/average_iou_val \
 datasets.name_dataset=pastis_hd \
 datasets.pastis_hd.filter_inputs=[spot,s2,s1_asc,s1_des] \
@@ -198,13 +198,14 @@ poetry run python main.py \
 model.model=mae model.model_size=medium \
 model.fusion_mode=group model.inter_depth=3 \
 opt_pretrain.epochs=0 opt_probe.epochs=15 opt_finetune.epochs=100 \
-opt_probe.batch_size=
+opt_probe.batch_size=6 opt_finetune.batch_size=6 trainer.num_nodes=2 \
 opt_finetune.monitor=cosia/average_iou_val \
 datasets.name_dataset=flair \
 datasets.flair.version=flair2 \
-datasets.flair.filter_inputs=[aerial,s2] \
+datasets.flair.filter_inputs=[aerial,dem,s2] \
 datasets.flair.crop_meters=102.4 datasets.flair.grid_pos_enc=160 \
 datasets.flair.aerial.image_size=512 datasets.flair.aerial.patch_size.mae=16 \
+datasets.flair.dem.image_size=512 datasets.flair.dem.patch_size.mae=32 \
 datasets.flair.s2.image_size=10 datasets.flair.s2.patch_size.mae=2 \
 datasets.root_dir=/path/to/dataset/dir datasets.flair.csv_dir=/path/to/dataset/dir/FLAIR-HUB datasets.flair.rel_dir=FLAIR-HUB \
 run.exp_dir=/path/to/experiments/dir run.exp_name=MAESTRO_FLAIR-HUB-x-FLAIR2_base \
@@ -219,12 +220,13 @@ poetry run python main.py \
 model.model=mae model.model_size=medium \
 model.fusion_mode=group model.inter_depth=3 \
 opt_pretrain.epochs=0 opt_probe.epochs=15 opt_finetune.epochs=100 \
-opt_probe.batch_size=
+opt_probe.batch_size=6 opt_finetune.batch_size=6 trainer.num_nodes=4 \
 opt_finetune.monitor=cosia/average_iou_val \
 datasets.name_dataset=flair \
-datasets.flair.filter_inputs=[aerial,s2,s1_asc,s1_des] \
+datasets.flair.filter_inputs=[aerial,dem,s2,s1_asc,s1_des] \
 datasets.flair.crop_meters=102.4 datasets.flair.grid_pos_enc=160 \
 datasets.flair.aerial.image_size=512 datasets.flair.aerial.patch_size.mae=16 \
+datasets.flair.dem.image_size=512 datasets.flair.dem.patch_size.mae=32 \
 datasets.flair.s2.image_size=10 datasets.flair.s2.patch_size.mae=2 \
 datasets.flair.s1_asc.image_size=10 datasets.flair.s1_asc.patch_size.mae=2 \
 datasets.flair.s1_des.image_size=10 datasets.flair.s1_des.patch_size.mae=2 \
````