TerraMind AD – Anomaly Detection for Disaster Monitoring

#11
by edornd - opened


Why it matters

Quickly understanding where natural disasters such as floods or wildfires have the greatest impact is crucial for effective disaster management. Emergency responders need fine-grained change maps showing exactly where flooding occurred, where buildings collapsed, or where fires spread. This is often done with ad-hoc models that do not transfer easily between hazard types or do not generalize to different environments. The zero-shot approach described here maintains full 2D spatial information throughout the analysis pipeline, generating detailed change heatmaps that pinpoint affected areas without any labeled training data.

How TerraMind is used

This demo provides an unsupervised spatial anomaly detection system that fundamentally extends temporal analysis from 1D to 2D. Traditional approaches [1, 2] perform time series analysis on single image-level embeddings, collapsing all spatial information into a temporal signal. We instead maintain TerraMind's patch-level embeddings (14×14 patches per image) and perform parallel time series analysis on every spatial location independently [3].
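The shapes here make the 1D→2D difference concrete. As a minimal sketch (with a hypothetical token layout and random data standing in for real TerraMind-B outputs), keeping the spatial structure amounts to reshaping the 196 patch tokens of each 224×224 chip into a 14×14 grid rather than pooling them into a single vector:

```python
import numpy as np

# Hypothetical stack of TerraMind-B patch tokens: T timestamps,
# 196 tokens per image (a 14x14 grid), 768 features each.
T, P, D = 24, 196, 768
rng = np.random.default_rng(0)
tokens = rng.standard_normal((T, P, D)).astype(np.float32)

# Keep the 2D grid: reshape into (time, height, width, features)
# instead of collapsing tokens into one image-level vector per timestamp.
cube = tokens.reshape(T, 14, 14, D)

# Every spatial location now owns an independent embedding time series.
series = cube[:, 3, 7, :]  # shape (T, 768): one patch through time
```

Time series analysis then runs on each of the 14×14 locations in parallel instead of on one signal per scene.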
The processing pipeline can be summarized as follows:

  1. Data retrieval and preprocessing: download S2-L2A data from STAC catalogues (in this specific case, the Microsoft Planetary Computer), clean NaNs
  2. Cloud mask generation: generate image-level and feature-level cloud masks to exclude invalid patches at specific timestamps
  3. Feature extraction: using TerraMind-B, extract a data cube of features over time
  4. Anomaly detection: run a RANSAC harmonic regression over each single patch time series, using only clear (non-clouded) patches
  5. AD filtering: accumulate anomalies over time and filter out sporadic, non-continuous detections

image

Our zero-shot pipeline applies PCA dimensionality reduction to the 768-D embeddings, then fits harmonic regression models to capture seasonal patterns at each spatial location. By analyzing residuals from these fitted models across the 2D grid, we identify anomalous changes that deviate from expected seasonal behavior. Every operation preserves the 2D spatial structure (time × height × width × features), enabling the pipeline to generate detailed spatial anomaly heatmaps rather than just temporal change scores.
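A minimal sketch of the PCA step, assuming a (time, height, width, features) cube like the one produced by the feature-extraction stage; the component count and random data are illustrative:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
T, H, W, D = 24, 14, 14, 768
cube = rng.standard_normal((T, H, W, D)).astype(np.float32)

# Fit PCA on all patch embeddings pooled over time and space, then
# reshape back so the 2D grid survives the dimensionality reduction.
k = 8
flat = cube.reshape(-1, D)                   # (T*H*W, 768)
reduced = PCA(n_components=k).fit_transform(flat).reshape(T, H, W, k)

# Harmonic regression then runs on reduced[:, i, j, c]
# for every spatial location (i, j) and component c.
```

Fitting PCA on the pooled embeddings keeps the components consistent across locations, so residuals remain comparable over the whole grid.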

This 1D→2D expansion means instead of asking "did this scene change?", we ask "where in this scene did change occur?", quickly providing the spatial granularity critical for disaster response.

A quick tour of the demo

We tested our zero-shot system on three major disasters: the 2023 floods in Libya, the 2020 explosion in Beirut, and the 2023 wildfires in Greece.

Libya Floods (September 2023, Derna): Our change heatmaps revealed flood extent and sediment deposition patterns, showing exactly which neighborhoods were most impacted by the catastrophic dam failures.

image

Explosion in Beirut (August 2020, Lebanon): Despite the high variability of the environment, stricter filtering pipelines allow TerraMind's robust features to correctly identify the areas where the major impact happened.

image

Wildfires (2023, Greece): Fire progression and burn scars appeared as distinct spatial signatures in our anomaly heatmaps, mapping the full extent of vegetation loss.

image

The visualization system produces both spatial change maps and detailed time series for individual locations, with plots showing principal component values, residuals over time, and event markers.
The spatial change map is obtained by accumulating anomalies per pixel over time: brighter colors indicate a higher number of spatial anomalies after the event has occurred.
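Assuming a per-patch residual cube and a known event date (threshold and shapes here are illustrative), the accumulation step can be sketched as:

```python
import numpy as np

rng = np.random.default_rng(0)
T, H, W = 24, 14, 14
resid = rng.normal(0.0, 0.1, (T, H, W))        # hypothetical residual cube
resid[18:, 2:5, 2:5] += 3.0                    # persistent post-event change

event_idx, thresh = 16, 1.0                    # illustrative settings
anomalous = np.abs(resid) > thresh             # boolean mask per patch/time
change_map = anomalous[event_idx:].sum(axis=0) # count anomalies after event

# Bright cells = patches flagged at many post-event timestamps;
# one-off noise spikes contribute at most a count of 1 and filter out.
```

Counting over time rather than thresholding a single date is what suppresses sporadic, non-continuous anomalies.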

What we learned so far

Foundation model embeddings encode rich semantic information that can be exploited without further training for many different purposes. Harmonic regression with periodic functions captures seasonal patterns well enough over long observation periods. This methodology, while simpler than other approaches in the literature, was perfect for rapid prototyping and for understanding the potential of the features at hand. Last, vectorized processing is critical: the initial implementation did not foresee the use of recent geospatial data formats like Zarr, and it was painfully slow until converted into array operations.

Current limitations

While TerraMind's Thinking in Modalities (TiM) feature is powerful, it may generate visual artifacts in the output modalities that require careful preprocessing, making its zero-shot use not immediately feasible. Future work with TiM-generated features would benefit from additional filtering and artifact-mitigation strategies.

image

image

Example of visualization artifacts when using LULC (top) and S1GRD (bottom). Both images were generated using a simple tiling mechanism at 224×224 to avoid further artifacts introduced by the positional embeddings, with 112 pixels of overlap to mitigate border issues. While the contribution of the added modality is certainly visible and present, the visual artifacts introduced by the abnormal embeddings are enough to break the very simple AD approach in this case. A full generation approach could be more beneficial here.
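For reference, the tiling described above (224×224 windows with 112 px of overlap) can be computed with a simple stride schedule. Snapping the last tile to the image edge is an assumption about how borders were handled:

```python
def tile_starts(size, tile=224, overlap=112):
    """Start offsets of overlapping tiles covering one image dimension."""
    stride = tile - overlap                    # 112 px step between tiles
    starts = list(range(0, max(size - tile, 0) + 1, stride))
    if starts[-1] + tile < size:               # snap last tile to the edge
        starts.append(size - tile)
    return starts

# A 560-px-wide scene: tiles start at 0, 112, 224, 336 (each 224 px wide),
# so every interior pixel is covered by at least two tiles.
```

Applying the same schedule on both axes yields the overlapping grid used to blend away border artifacts.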

Future Work

Deep-learning-based anomaly detection algorithms could certainly learn more sophisticated temporal patterns from the embedding sequences. More complex change detection approaches like LandTrendr [1], which models gradual disturbances and recovery trajectories, could complement the focus on abrupt events. It could also be worth investigating how to better incorporate spatial context: neighboring patches likely provide valuable information for distinguishing true disasters from isolated anomalies. Multi-scale analysis and automated thresholding methods could further improve detection sensitivity while reducing false positives.


References:

  1. Pasquarella, V.J., Arévalo, P., Bratley, K.H., Bullock, E.L., Gorelick, N., Yang, Z., & Kennedy, R.E. (2022). Demystifying LandTrendr and CCDC temporal segmentation. Int. J. Appl. Earth Obs. Geoinformation, 110, 102806.
  2. Element 84. (2023, October 17). Exploring unsupervised change detection with Sentinel-2 vector embeddings. Element 84. https://element84.com/machine-learning/exploring-unsupervised-change-detection-with-sentinel-2-vector-embeddings
  3. Schroer, K., Adhikari, B., & Moise, I. (2025, May 29). Revolutionizing earth observation with geospatial foundation models on AWS. AWS Machine Learning Blog. https://aws.amazon.com/blogs/machine-learning/revolutionizing-earth-observation-with-geospatial-foundation-models-on-aws/
