Model checkpoints for the CanadaWildFireDaily benchmark.
Each model is evaluated across three independent runs, with the random seed derived from the run index. The best weights for each run are provided as best_checkpoint.pt, organized under the directory structure modelName/checkpoints_modelName_runN/, where N denotes the run index. For example, the UNet checkpoints are provided under unet/checkpoints_unet_run1, unet/checkpoints_unet_run2, and unet/checkpoints_unet_run3. This structure applies to all evaluated models: unet, unet_age, unet_attention, unet_convlstm, unet_segformer, and utae.
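The directory layout above can be navigated programmatically. The sketch below builds the path to a run's best checkpoint; the root directory name `ckpts` is an assumption for illustration, and the helper name is hypothetical rather than part of the released code:

```python
from pathlib import Path


def best_checkpoint_path(root: str, model_name: str, run: int) -> Path:
    """Build the path to best_checkpoint.pt for a given model and run index,
    following the modelName/checkpoints_modelName_runN/ layout.
    """
    return Path(root) / model_name / f"checkpoints_{model_name}_run{run}" / "best_checkpoint.pt"


# Example: the second UNet run under an assumed root directory "ckpts"
path = best_checkpoint_path("ckpts", "unet", 2)
# -> ckpts/unet/checkpoints_unet_run2/best_checkpoint.pt
```

The resulting path can then be passed to the usual `torch.load(path)` to restore the weights.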
Mono-temporal models:
- Standard UNet (architecture: 'unet'): The baseline spatial U-Net model.
- Age-Encoding UNet (architecture: 'unet_age'): A U-Net that explicitly encodes the satellite age (the time gap in days between the fire event and the satellite acquisition).
- Attention UNet (architecture: 'unet_attention'): A U-Net utilizing attention gates in the skip connections to help the model focus on the most critical spatial features and suppress irrelevant background noise.
- UNet-SegFormer (architecture: 'unet_segformer'): A hybrid vision-transformer architecture that replaces the standard CNN encoder with SegFormer's Mix Vision Transformer (MiT), paired with a standard U-Net decoder for fine-grained pixel-level predictions.
Multi-temporal models:
- Spatiotemporal UNet (architecture: 'unet_convlstm'): A U-Net featuring a ConvLSTM bottleneck for recurrent time-series processing (e.g., 3-day sliding-window forecasting).
- U-TAE (architecture: 'utae'): A temporal attention encoder-decoder baseline adapted from the ICCV 2021 U-TAE model for satellite image time series. This baseline uses the offline time-series samples from Timeseries_Samples/, and the generator now stores sequence positions for the temporal attention encoder when you regenerate those samples.
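The multi-temporal models above consume short windows of consecutive daily frames (e.g., the ConvLSTM's 3-day sliding window). As a minimal sketch of how such windows can be formed from a daily sequence — the helper name is hypothetical, and the benchmark's own sample generator may window differently:

```python
def sliding_windows(frames, window=3):
    """Group a daily sequence into overlapping windows of `window`
    consecutive days, one window per valid starting day.
    """
    return [frames[i:i + window] for i in range(len(frames) - window + 1)]


# Five daily frames yield three overlapping 3-day windows
days = ["d1", "d2", "d3", "d4", "d5"]
windows = sliding_windows(days)
# -> [["d1", "d2", "d3"], ["d2", "d3", "d4"], ["d3", "d4", "d5"]]
```

Each window would then be stacked along a time axis before being fed to the recurrent or temporal-attention model.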