---
language:
  - en
pretty_name: 'ClimX: extreme-aware climate model emulation'
tags:
  - climate
  - earth-system-model
  - machine-learning
  - emulation
  - extremes
  - netcdf
license: mit
task_categories:
  - time-series-forecasting
  - other
---

# ClimX: a challenge for extreme-aware climate model emulation

ClimX is a challenge for building fast, accurate machine learning emulators of the NorESM2-MM Earth System Model, with evaluation focused on climate extremes rather than mean climate alone.

## Dataset summary

This dataset contains the full-resolution ClimX data in NetCDF-4 format (targets + forcings, depending on split) on a native grid of 192 × 288 (about 1°) resolution. It also contains a lite-resolution version on a native grid of 12 × 18 (about 16°) resolution:

- Lite-resolution: <1 GB, 16× spatially coarsened, meant for rapid prototyping.
- Full-resolution: ~200 GB of full-resolution data for large-scale training.
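For intuition about how the two variants relate, a 16× block average maps the full 192 × 288 grid onto the lite 12 × 18 grid. This is only an illustrative sketch: the published lite files are already coarsened, and the official coarsening procedure may differ in details.

```python
import numpy as np

def coarsen_16x(field: np.ndarray) -> np.ndarray:
    """Block-average a (lat, lon) field by a factor of 16 in each dimension.

    Illustrative only; the official lite files are pre-coarsened.
    """
    nlat, nlon = field.shape
    assert nlat % 16 == 0 and nlon % 16 == 0
    return field.reshape(nlat // 16, 16, nlon // 16, 16).mean(axis=(1, 3))

# A full-resolution 192x288 field coarsens to the lite 12x18 grid.
full = np.random.rand(192, 288)
lite = coarsen_16x(full)
print(lite.shape)  # (12, 18)
```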

## What you will do (high level)

You train an emulator that predicts daily 2D fields for 7 surface variables:

- `tas`, `tasmax`, `tasmin`
- `pr`, `huss`, `psl`, `sfcWind`

However, the benchmark targets are 15 extreme indices derived from daily temperature and precipitation (ETCCDI-style indices). The daily fields are an intermediate output your emulator produces (useful for diagnostics and for computing the indices).
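As a concrete example of an ETCCDI-style index, TXx (the block maximum of daily maximum temperature) can be derived from the daily `tasmax` fields an emulator outputs. This is a sketch on synthetic data with a simplified 365-day year; which 15 indices are used, and over which time blocks, is defined by the challenge code.

```python
import numpy as np

def txx(tasmax_daily: np.ndarray, days_per_year: int = 365) -> np.ndarray:
    """Annual maximum of daily maximum temperature (ETCCDI TXx).

    tasmax_daily: array of shape (time, lat, lon) of daily fields.
    Returns an array of shape (years, lat, lon).
    """
    t, nlat, nlon = tasmax_daily.shape
    nyears = t // days_per_year
    x = tasmax_daily[: nyears * days_per_year]
    return x.reshape(nyears, days_per_year, nlat, nlon).max(axis=1)

# Two synthetic "years" on a tiny grid.
daily = np.random.rand(730, 4, 6)
print(txx(daily).shape)  # (2, 4, 6)
```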

Conceptually:

$$x_t = g(f_t, f_{t-1}, \dots, f_{t-\alpha}, x_{t-1}, x_{t-2}, \dots, x_{t-\beta})$$

where $f_t$ are forcings (greenhouse gases + aerosols) and $x_t$ is the climate state.
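The recursion above can be sketched as an autoregressive rollout. Here `g` is just a placeholder callable standing in for a trained model, and the windowing is a simplified reading of the formula, not the challenge's reference implementation.

```python
import numpy as np

def rollout(g, forcings, x_init, alpha=1, beta=1):
    """Autoregressive rollout sketch for an emulator g.

    g: placeholder model mapping (f_{t-alpha..t}, x_{t-beta..t-1}) -> x_t.
    forcings: array indexed by time; x_init: list of beta initial states.
    """
    states = list(x_init)
    for t in range(alpha, len(forcings)):
        f_window = forcings[t - alpha : t + 1]  # f_{t-alpha}, ..., f_t
        x_window = states[-beta:]               # x_{t-beta}, ..., x_{t-1}
        states.append(g(f_window, x_window))
    return np.stack(states)

# Toy example: "model" adds the latest forcing to the previous state.
g = lambda fw, xw: fw[-1] + xw[-1]
trajectory = rollout(g, np.arange(10.0), [0.0], alpha=1, beta=1)
print(trajectory.shape)  # (10,)
```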

## Dataset structure

### Spatial and temporal shape

Full-resolution daily fields:

- Historical: lat: 192, lon: 288, time: 60224
- Projections: lat: 192, lon: 288, time: 31389

### Splits and scenarios (official challenge setup)

Training uses historical + several SSP scenarios; testing is on the held-out SSP2-4.5 scenario:

- Train: historical (1850–2014) + `ssp126`, `ssp370`, `ssp585` (2015–2100)
- Test (held-out): `ssp245` (2015–2100)

To avoid leakage, targets for `ssp245` are withheld in the official evaluation; only the forcings are provided for that scenario. The full outputs will be released after the competition.

## Evaluation metric

The primary leaderboard metric is the region-wise normalized Nash–Sutcliffe efficiency (nNSE), averaged over 15 climate extreme indices.

For each index $v$ and grid cell $(i,j)$, a validity mask $\mathcal{V}$ excludes cells with negligible temporal variability. The cell-level $R^2$ and nNSE are:

$$R^2_{ij} = 1 - \frac{\mathrm{MSE}_{ij}}{\mathrm{Var}_t(\mathrm{gt}_{ij})}, \qquad \mathrm{nNSE}_{ij} = \frac{R^2_{ij}}{2 - R^2_{ij}}$$

For each AR6 land region $k$, the area-weighted regional score is:

$$\mathrm{nNSE}_{kv} = \frac{\sum_{(i,j)\in k \cap \mathcal{V}} \cos\phi_i \, \mathrm{nNSE}_{ij}}{\sum_{(i,j)\in k \cap \mathcal{V}} \cos\phi_i}$$

The final score averages uniformly over valid regions and indices:

$$S = \frac{1}{|V|} \sum_{v \in V} \frac{1}{|K_v|} \sum_{k \in K_v} \mathrm{nNSE}_{kv}$$

$S = 1$ is perfect agreement, $S = 0$ corresponds to a mean predictor, and $S \to -1$ is pathological.
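For intuition, the cell-level $R^2$, nNSE, and cosine-latitude weighting can be sketched in NumPy. This minimal version assumes a single region covering all valid cells and a simple variance threshold for the validity mask; the official evaluation code, which aggregates per AR6 land region, is authoritative.

```python
import numpy as np

def nnse_score(pred, gt, lat, var_eps=1e-6):
    """Minimal nNSE sketch for one extreme index over one region.

    pred, gt: (time, lat, lon) arrays of an index; lat in degrees.
    """
    mse = ((pred - gt) ** 2).mean(axis=0)   # MSE_ij over time
    var = gt.var(axis=0)                    # Var_t(gt_ij)
    valid = var > var_eps                   # validity mask V (assumed form)
    r2 = 1.0 - mse[valid] / var[valid]
    nnse = r2 / (2.0 - r2)
    # Area weights: cos(latitude), broadcast across longitudes.
    w = (np.cos(np.deg2rad(lat))[:, None] * np.ones_like(var))[valid]
    return float((w * nnse).sum() / w.sum())
```

A perfect prediction scores 1, while predicting the temporal mean at every cell scores 0, matching the interpretation of $S$ above.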

## How to load the data

This dataset is distributed as NetCDF-4 files. There are two common ways to load it.

### Option 1 (recommended): clone the ClimX code and use the helper loader

The ClimX repository includes a helper module (`src/data/climx_hf.py`) that downloads the dataset from Hugging Face and opens it as three lazily loaded “virtual” xarray datasets:

```shell
git clone https://github.com/IPL-UV/ClimX.git
cd ClimX
pip install -U "huggingface-hub" xarray netcdf4 dask
```

```python
from src.data.climx_hf import download_climx_from_hf, open_climx_virtual_datasets

# Download NetCDF artifacts from HF into a local cache directory.
root = download_climx_from_hf("/path/to/hf_cache", variant="full")

# Open as three virtual datasets (lazy / dask-friendly).
ds = open_climx_virtual_datasets(root, variant="full")  # or "lite"

ds.hist           # historical (targets + forcings)
ds.train          # projection training scenarios (targets + forcings; excludes `ssp245`)
ds.test_forcings  # `ssp245` scenario forcings only (no targets)
```

### Option 2: download NetCDFs and open with xarray directly

You can also download files from Hugging Face and open them with xarray.

Example:

```python
from huggingface_hub import hf_hub_download
import xarray as xr

path = hf_hub_download(
    repo_id="isp-uv-es/ClimX",
    repo_type="dataset",
    filename="PATH/TO/A/FILE.nc",  # replace with an actual file in this dataset repo
)
ds = xr.open_dataset(path)
print(ds)
```

## Links

## License and usage

The dataset is released under MIT. In addition, if you are participating in the ClimX competition, please follow the competition rules (notably: restrictions on external climate training data and redistribution of competition data).