---
language:
  - en
pretty_name: "ClimX: extreme-aware climate model emulation"
tags:
  - climate
  - earth-system-model
  - machine-learning
  - emulation
  - extremes
  - netcdf
license: mit
task_categories:
  - time-series-forecasting
  - other
---

<!-- NOTE: Math formatting convention: inline math uses \\( ... \\) and math blocks use $$ ... $$. -->

# ClimX: a challenge for extreme-aware climate model emulation

ClimX is a challenge about building **fast and accurate machine learning emulators** for the NorESM2-MM Earth System Model, with evaluation focused on **climate extremes** rather than mean climate alone.

## Dataset summary

This dataset contains the **full-resolution** ClimX data in **NetCDF-4** format (targets + forcings, depending on split) on a native \\(192 \times 288\\) grid (about \\(1^\circ\\)). It also contains the **lite-resolution** version on a native \\(12 \times 18\\) grid (about \\(16^\circ\\)):

- **Lite-resolution**: <1GB, \\(16\times\\) spatially coarsened, meant for rapid prototyping.
- **Full-resolution**: ~200GB, full-resolution data for large-scale training.
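To see what the \\(16\times\\) coarsening amounts to, you can mimic it with simple block averaging over the full-resolution grid. This is an illustrative sketch only; the actual lite data may use a different (e.g. area-weighted) regridding:

```python
import numpy as np

def block_coarsen(field: np.ndarray, factor: int = 16) -> np.ndarray:
    """Average non-overlapping factor x factor blocks of a (lat, lon) field."""
    nlat, nlon = field.shape
    assert nlat % factor == 0 and nlon % factor == 0, "grid must divide evenly"
    # Reshape to (coarse_lat, factor, coarse_lon, factor) and average each block.
    return field.reshape(nlat // factor, factor, nlon // factor, factor).mean(axis=(1, 3))

coarse = block_coarsen(np.ones((192, 288)))
print(coarse.shape)  # (12, 18)
```

Note that \\(192 / 16 = 12\\) and \\(288 / 16 = 18\\), matching the lite grid above.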

## What you will do (high level)

You train an emulator that predicts **daily** 2D fields for 7 surface variables:

- `tas`, `tasmax`, `tasmin`
- `pr`, `huss`, `psl`, `sfcWind`

However, the **benchmark targets are 15 extreme indices** derived from daily temperature and precipitation (ETCCDI-style indices). The daily fields are an **intermediate output** your emulator produces (useful for diagnostics and for computing the indices).
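As an illustration of going from daily fields to an extreme index, an ETCCDI-style index such as TXx (the annual maximum of daily `tasmax`) can be computed per grid cell. The sketch below uses plain NumPy on a synthetic array with a 365-day (no-leap) calendar; the function name and shapes are illustrative, not the official index code:

```python
import numpy as np

def annual_txx(tasmax_daily: np.ndarray, days_per_year: int = 365) -> np.ndarray:
    """Annual maximum of daily tasmax (ETCCDI-style TXx).

    tasmax_daily: array of shape (time, lat, lon), where time is a whole
    number of 365-day years. Returns an array of shape (years, lat, lon).
    """
    t, nlat, nlon = tasmax_daily.shape
    assert t % days_per_year == 0, "time axis must cover whole years"
    years = t // days_per_year
    # Group days by year, then take the maximum within each year.
    return tasmax_daily.reshape(years, days_per_year, nlat, nlon).max(axis=1)

# Synthetic example: 2 years of daily data on a tiny 3x4 grid (in kelvin).
daily = np.random.default_rng(0).normal(290.0, 10.0, size=(2 * 365, 3, 4))
txx = annual_txx(daily)
print(txx.shape)  # (2, 3, 4)
```

The other indices follow the same pattern (a temporal reduction per year, sometimes with a threshold), which is why accurate daily fields matter even though only the indices are scored.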

Conceptually:

$$
x_t = g(f_t, f_{t-1}, \dots, f_{t-\alpha}, x_{t-1}, x_{t-2}, \dots, x_{t-\beta})
$$

where \\(f_t\\) are forcings (greenhouse gases + aerosols) and \\(x_t\\) is the climate state.
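In code, this structure corresponds to an autoregressive rollout: at each step the emulator sees a window of recent forcings and its own recent predictions. The sketch below is a generic rollout loop with a toy `g`; none of it is ClimX API, and a real emulator would replace `toy_g` with a trained model:

```python
from collections import deque

import numpy as np

def rollout(g, forcings, init_states, alpha):
    """Emulate x_t = g(f_t, ..., f_{t-alpha}, x_{t-1}, ..., x_{t-beta}).

    forcings: list of per-step forcing arrays f_0 .. f_{T-1}.
    init_states: the beta most recent past states, newest first.
    Returns the predicted states for t = alpha .. T-1.
    """
    states = deque(init_states, maxlen=len(init_states))  # newest first
    preds = []
    for t in range(alpha, len(forcings)):
        f_window = [forcings[t - k] for k in range(alpha + 1)]  # f_t .. f_{t-alpha}
        x_t = g(f_window, list(states))
        preds.append(x_t)
        states.appendleft(x_t)  # x_t becomes x_{t-1} at the next step
    return preds

# Toy emulator: damped persistence plus the current forcing.
toy_g = lambda fs, xs: 0.5 * xs[0] + fs[0]
preds = rollout(toy_g, [np.full((2, 2), 1.0)] * 5, [np.zeros((2, 2))], alpha=0)
```

Because predictions are fed back in, errors can compound over the rollout; this is the central difficulty of emulating multi-decade trajectories from forcings alone.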

## Dataset structure

### Spatial and temporal shape

Full-resolution daily fields:

- **Historical**: `lat: 192, lon: 288, time: 60224`
- **Projections**: `lat: 192, lon: 288, time: 31389`

### Splits and scenarios (official challenge setup)

Training uses historical + several SSP scenarios; testing is on the held-out **SSP2-4.5** scenario:

- **Train**: historical (1850–2014) + `ssp126`, `ssp370`, `ssp585` (2015–2100)
- **Test (held-out)**: `ssp245` (2015–2100)

To avoid leakage, **targets for `ssp245` are withheld** in the official evaluation; only the **forcings** are provided for that scenario. The full outputs will be released after the competition.

## Evaluation metric

The primary leaderboard metric is the region-wise **normalized Nash–Sutcliffe efficiency (nNSE)**, averaged over 15 climate extreme indices.

For each index \\(v\\), a validity mask \\(\mathcal{V}\\) excludes grid cells \\((i,j)\\) with negligible temporal variability. The cell-level \\(R^2\\) and nNSE are:

$$
R^2_{ij} = 1 - \frac{\mathrm{MSE}_{ij}}{\mathrm{Var}_t(gt_{ij})}, \qquad \mathrm{nNSE}_{ij} = \frac{R^2_{ij}}{2 - R^2_{ij}}
$$

For each AR6 land region \\(k\\), the area-weighted regional score is:

$$
\mathrm{nNSE}_{kv} = \frac{\sum_{(i,j)\in k \cap \mathcal{V}} \cos\phi_i \, \mathrm{nNSE}_{ij}}{\sum_{(i,j)\in k \cap \mathcal{V}} \cos\phi_i}
$$

The final score averages uniformly over valid regions and indices:

$$
S = \frac{1}{|V|} \sum_{v \in V} \frac{1}{|K_v|} \sum_{k \in K_v} \mathrm{nNSE}_{kv}
$$

\\(S=1\\) is perfect agreement, \\(S=0\\) corresponds to a climatological-mean predictor, and \\(S\\) approaches \\(-1\\) for arbitrarily poor predictions (nNSE is bounded below by \\(-1\\)).
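The cell-level and area-weighting parts of the metric are straightforward to implement. Below is a minimal NumPy sketch for a single region with a simple variance threshold standing in for the validity mask; the official scorer additionally handles AR6 region masks, and its exact variance threshold is not specified here:

```python
import numpy as np

def nnse_region(pred: np.ndarray, gt: np.ndarray, lat: np.ndarray,
                var_eps: float = 1e-8) -> float:
    """Area-weighted nNSE over one region.

    pred, gt: arrays of shape (time, lat, lon); lat: latitudes in degrees.
    Cells with temporal variance below var_eps are masked out
    (a stand-in for the official validity mask).
    """
    var = gt.var(axis=0)                      # temporal variance per cell
    valid = var > var_eps
    mse = ((pred - gt) ** 2).mean(axis=0)     # temporal MSE per cell
    r2 = 1.0 - mse[valid] / var[valid]
    nnse = r2 / (2.0 - r2)                    # maps (-inf, 1] to (-1, 1]
    w = (np.cos(np.deg2rad(lat))[:, None] * np.ones_like(var))[valid]
    return float((w * nnse).sum() / w.sum())
```

A quick sanity check: feeding `gt` as its own prediction gives a score of 1, and predicting each cell's temporal mean gives a score of 0, matching the interpretation of \\(S\\) above.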

## How to load the data

This dataset is distributed as **NetCDF-4** files. There are two common ways to load it.

### Option 1 (recommended): clone the ClimX code and use the helper loader

The ClimX repository already includes a helper module (`src/data/climx_hf.py`) that allows you to download the dataset from Hugging Face and open it as three lazily-loaded “virtual” xarray datasets:

```bash
git clone https://github.com/IPL-UV/ClimX.git
cd ClimX
pip install -U "huggingface-hub" xarray netcdf4 dask
```

```python
from src.data.climx_hf import download_climx_from_hf, open_climx_virtual_datasets

# Download NetCDF artifacts from HF into a local cache directory.
root = download_climx_from_hf("/path/to/hf_cache", variant="full")

# Open as three virtual datasets (lazy / dask-friendly).
ds = open_climx_virtual_datasets(root, variant="full") # or "lite"

ds.hist           # historical (targets + forcings)
ds.train          # projection training scenarios (targets + forcings; excludes `ssp245`)
ds.test_forcings  # `ssp245` scenario forcings only (no targets)
```

### Option 2: download NetCDFs and open with xarray directly

You can also download files from Hugging Face and open them with **xarray**.

Example:

```python
from huggingface_hub import hf_hub_download
import xarray as xr

path = hf_hub_download(
    repo_id="isp-uv-es/ClimX",
    repo_type="dataset",
    filename="PATH/TO/A/FILE.nc",  # replace with an actual file in this dataset repo
)
ds = xr.open_dataset(path)
print(ds)
```

## Links

- [Kaggle competition](https://www.kaggle.com/competitions/climx)
- [Full dataset (this page)](https://huggingface.co/datasets/isp-uv-es/ClimX)
- [Public code repository (challenge materials)](https://github.com/IPL-UV/ClimX)
- [Website](https://ipl-uv.github.io/ClimX/)

## License and usage

The dataset is released under **MIT**. In addition, if you are participating in the ClimX competition, please follow the competition rules (notably: restrictions on external climate training data and redistribution of competition data).