SIAD World Model (Small)

Satellite Imagery Anticipatory Dynamics (SIAD) is a transformer-based world model for predicting future satellite observations.

Model Description

This model predicts future satellite imagery based on:

  • Current satellite observations (Sentinel-2, Sentinel-1, VIIRS nightlights)
  • Climate action variables (rainfall and temperature anomalies)

The model uses a JEPA (Joint Embedding Predictive Architecture) with token-based spatial representations: future observations are predicted in latent space rather than pixel space.
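The JEPA idea can be illustrated with a minimal numpy sketch. The linear maps below are stand-ins for the actual transformer encoder and transition model; the key point is that the loss compares predicted and target *embeddings*, not pixels:

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(x, W):
    # stand-in encoder: flatten the observation and project to latent space
    return x.reshape(-1) @ W

latent_dim = 4
W_enc = rng.normal(size=(16, latent_dim))           # shared encoder weights
W_pred = rng.normal(size=(latent_dim, latent_dim))  # stand-in transition model

obs_t = rng.normal(size=(4, 4))    # observation at time t
obs_t1 = rng.normal(size=(4, 4))   # observation at time t+1

z_t = encode(obs_t, W_enc)
z_t1_pred = z_t @ W_pred           # predict the next latent from the current one
z_t1 = encode(obs_t1, W_enc)       # target embedding (EMA / stop-grad in practice)

# JEPA objective: distance in embedding space, not pixel space
loss = float(np.mean((z_t1_pred - z_t1) ** 2))
```

In the real model the encoder and transition model are the transformer stacks described below, and the rollout applies the transition step repeatedly, conditioned on the climate action variables.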

Architecture

  • Size: 157,082,688 parameters (157.1M)
  • Type: small variant
  • Latent Dimension: 768
  • Encoder: 6 transformer blocks, 12 heads
  • Transition Model: 8 transformer blocks, 12 heads
  • Spatial Tokens: 256 tokens (16×16 grid)
  • Input Channels: 8 (Sentinel-2: B2,B3,B4,B8 | Sentinel-1: VV,VH | VIIRS | mask)
  • Rollout Horizon: 6 months
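The token counts above are consistent with a standard ViT-style patchify: a 256×256 observation split into 16×16-pixel patches yields a 16×16 grid of 256 tokens (the patch size is inferred from the numbers above, not stated in the card). A minimal numpy sketch of the shape arithmetic:

```python
import numpy as np

C, H, W = 8, 256, 256       # input channels, image height, image width
P = 16                      # assumed patch size: 256 / 16 = 16 patches per side
x = np.zeros((C, H, W), dtype=np.float32)

patches = x.reshape(C, H // P, P, W // P, P)              # split into patches
patches = patches.transpose(1, 3, 0, 2, 4)                # (16, 16, C, P, P)
tokens = patches.reshape((H // P) * (W // P), C * P * P)  # (256, 2048)

proj = np.zeros((C * P * P, 768), dtype=np.float32)  # stand-in learned projection
z = tokens @ proj                                    # (256, 768) token embeddings
```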

Training

  • Best Val Loss: 0.0941
  • Epochs: 41

Quick Start

from transformers import AutoModel
import torch

# Load model from HuggingFace Hub
model = AutoModel.from_pretrained("OzLabs/siad-wm-small", trust_remote_code=True)
model.inference_mode()

# Prepare inputs (random placeholders; use real normalized observations in practice)
obs_context = torch.randn(1, 8, 256, 256)  # [B, C, H, W] current observation
actions = torch.randn(1, 6, 2)  # [B, H, 2] rainfall & temperature anomalies per month

# Run prediction
with torch.no_grad():
    z0 = model.encode(obs_context)
    z_pred = model.rollout(z0, actions, H=6)
    x_pred = model.decode(z_pred)  # [1, 6, 8, 256, 256]

print(f"Predicted 6 months: {x_pred.shape}")

Advanced Usage

# Full forward pass with loss computation
targets = torch.randn(1, 6, 8, 256, 256)  # Ground-truth future observations [B, H, C, 256, 256]
outputs = model(
    obs_context=obs_context,
    actions_rollout=actions,
    obs_targets=targets,  # Ground truth for loss
    return_dict=True
)

print(f"Loss: {outputs.loss}")
print(f"Predictions: {outputs.predictions.shape}")
print(f"Metrics: {outputs.metrics}")

Model Configuration

This is the small configuration:

latent_dim: 768
encoder_blocks: 6
encoder_heads: 12
encoder_mlp_dim: 3072
transition_blocks: 8
transition_heads: 12
transition_mlp_dim: 3072
dropout: 0.1
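The MLP width follows the common 4× rule (3072 = 4 × 768). A rough back-of-envelope from these numbers (attention plus MLP weights per block, ignoring biases, norms, embeddings, and the decoder) shows where most of the 157M parameters sit:

```python
d, mlp = 768, 3072
attn = 4 * d * d        # q, k, v, and output projections
ffn = 2 * d * mlp       # two MLP layers
per_block = attn + ffn  # ~7.1M parameters per transformer block
blocks = 6 + 8          # encoder + transition blocks
total = blocks * per_block
print(f"{total / 1e6:.1f}M")  # prints 99.1M; the remainder is embeddings, decoder, norms
```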

Citation

@misc{siad_world_model,
    title={SIAD: Satellite Imagery Anticipatory Dynamics},
    author={OzLabs.ai},
    year={2025},
    howpublished={\url{https://huggingface.co/OzLabs/siad-wm-small}},
}
