# FAST-EO Use Case 4 - Estimation of Soil Properties
This repository provides a training and evaluation pipeline for soil-property regression (P, K, Mg, pH) on Hyperview data using Terratorch and the `terramind_v1_base` backbone.
## Overview
The goal of this use case is to fine-tune TerraMind-Base for predicting soil properties from remote-sensing inputs. The codebase supports multiple dataset splits (including external test splits) and multiple configuration variants (full vs. `small` vs. `big`, plus `intuition`/`enmap` evaluation).
## Repository structure
- `configs/` - Terratorch training/inference configs.
  - Decoder variants: `UNetDecoder`, `UperNetDecoder`.
  - Split/size variants: none (full), `small`, `big`, `i1`.
- `datasets/hyperview_dataset.py` - dataset with split support (`test`, `test_dat`, `test_intuition`, `test_enmap`) and size filters (`only_11x11`, `exclude_11x11`).
- `hyperview_datamodule.py` - Lightning DataModule and transforms.
- `callback_hooks/loss_logging_callback.py` - epoch-level loss logging callback.
- `prepare_submission.py` - checkpoint inference and export of `submission.csv` and metrics.
- `hyperview_subimssion.py` - baseline, class mapping, and evaluation helpers.
- `hybrid_models.ipynb` - notebook for hybrid evaluation and ranking of full/small/big submission combinations.
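The epoch-level loss-logging callback mentioned above could be structured roughly as follows. This is a hedged sketch, not the repository's actual code: the hook names follow PyTorch Lightning's `Callback` API, but the internals of `callback_hooks/loss_logging_callback.py` may differ.

```python
# Hypothetical sketch of an epoch-level loss logging callback.
# Hook signatures follow PyTorch Lightning's Callback API; shown as a
# plain Python class so the shape is clear without a Lightning install.

class LossLoggingCallback:
    """Collects per-batch losses and logs their mean once per epoch."""

    def __init__(self):
        self.epoch_losses = []

    def on_train_batch_end(self, trainer, pl_module, outputs, batch, batch_idx):
        # Lightning passes the training_step output dict in `outputs`.
        if outputs is not None and "loss" in outputs:
            self.epoch_losses.append(float(outputs["loss"]))

    def on_train_epoch_end(self, trainer, pl_module):
        if self.epoch_losses:
            mean_loss = sum(self.epoch_losses) / len(self.epoch_losses)
            print(f"epoch mean train loss: {mean_loss:.4f}")
        self.epoch_losses.clear()
```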
## Configuration rules
In each config, set the dataset and label paths correctly:

- `data.init_args.data_root`
- `data.init_args.label_train_path`
- `data.init_args.label_test_path`
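For illustration, the relevant part of a config might look like the fragment below. Only the three dotted keys are documented above; the surrounding structure and the placeholder paths are assumptions.

```yaml
# Hypothetical config fragment -- paths are placeholders
data:
  init_args:
    data_root: /path/to/hyperview/data
    label_train_path: /path/to/train_labels.csv
    label_test_path: /path/to/test_labels.csv
```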
## Supported splits
Training is supported on standard Hyperview:

- full (`test_data`)
- `small` (11x11 only)
- `big` (excluding 11x11)
Testing/evaluation is supported on:

- Hyperview full
- Hyperview `small`
- Hyperview `big`
- `test_dat`
- `test_intuition` (intuition1)
- `test_enmap`

For `test_enmap`, `aoi` is mandatory.
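In practice this means a `test_enmap` config must carry an `aoi` entry alongside the split selection. The key name `aoi` comes from this README, but the exact nesting and value below are assumptions:

```yaml
# Hypothetical fragment -- nesting and placeholder value are assumed
data:
  init_args:
    split: test_enmap
    aoi: <area-of-interest-name>  # mandatory for test_enmap
```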
## Training
### UperNet (default)

```shell
terratorch fit -c configs/terramind_v1_base_hyperview_upernet_none.yaml
```

### UperNet small (11x11 only)

```shell
terratorch fit -c configs/terramind_v1_base_hyperview_upernet_none_small.yaml
```

### UperNet big (excluding 11x11)

```shell
terratorch fit -c configs/terramind_v1_base_hyperview_upernet_none_big.yaml
```

### UperNet intuition split

```shell
terratorch fit -c configs/terramind_v1_base_hyperview_upernet_none_i1.yaml
```

### UNet

```shell
terratorch fit -c configs/terramind_v1_base_hyperview_unet_none.yaml
```
## End-to-end script
Run the full train+test pipeline with:

```shell
./run_train_test.sh
```
The script executes, in order:

- train `upernet_none` and generate `submissions/upernet_none`
- train `upernet_none_small` and generate `submissions/upernet_none_small`
- train `upernet_none_big` and generate `submissions/upernet_none_big`
- train `unet_none` and generate `submissions/unet_none`
- evaluate `upernet_none_i1` using `configs/terramind_v1_base_hyperview_upernet_none_i1.yaml`
- run external-model evaluations for: `upernet_none_external`, `upernet_none_i1_external`, `upernet_none_enmap_20231109T101043Z_external`, `upernet_none_enmap_20231109T101043Z`
## Submission generation
```shell
python3 prepare_submission.py \
  --model_dir runs/terratorch_hyperview_upernet_none \
  --config configs/terramind_v1_base_hyperview_upernet_none.yaml \
  --output_dir submissions/upernet_none
```

Example for `small`:

```shell
python3 prepare_submission.py \
  --model_dir runs/terratorch_hyperview_upernet_none_small \
  --config configs/terramind_v1_base_hyperview_upernet_none_small.yaml \
  --output_dir submissions/upernet_none_small
```

Example for `big`:

```shell
python3 prepare_submission.py \
  --model_dir runs/terratorch_hyperview_upernet_none_big \
  --config configs/terramind_v1_base_hyperview_upernet_none_big.yaml \
  --output_dir submissions/upernet_none_big
```

Example for `test_intuition`:

```shell
python3 prepare_submission.py \
  --model_dir runs/terratorch_hyperview_upernet_none \
  --config configs/terramind_v1_base_hyperview_upernet_none_i1.yaml \
  --output_dir submissions/upernet_none_i1
```
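The metrics exported alongside `submission.csv` presumably follow the Hyperview challenge convention of scoring each property's MSE relative to a constant-baseline MSE. A minimal pure-Python sketch of that convention (the function names are ours, not the repository's, and the exact metric used by `prepare_submission.py` may differ):

```python
# Hypothetical sketch of Hyperview-style scoring: per-property MSE
# normalized by the MSE of a constant baseline prediction, averaged
# over the soil properties (P, K, Mg, pH). Lower is better;
# 1.0 matches the baseline, < 1.0 beats it.

def mse(y_true, y_pred):
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

def hyperview_score(y_true, y_pred, baseline):
    """y_true/y_pred: dicts mapping property name -> list of values;
    baseline: dict mapping property name -> constant baseline value."""
    ratios = []
    for prop in y_true:  # e.g. "P", "K", "Mg", "pH"
        base_pred = [baseline[prop]] * len(y_true[prop])
        ratios.append(mse(y_true[prop], y_pred[prop]) / mse(y_true[prop], base_pred))
    return sum(ratios) / len(ratios)
```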
## Model tree for KPLabs/TerraMind-HYPERVIEW

Base model: `ibm-esa-geospatial/TerraMind-1.0-base`