Fahad Alghanim committed on
Commit 9857bf2 · 1 Parent(s): e7d3717

Add Pacific MUR SST ML subset


Publish Pacific (20-50N, 180-240E) 2018-2019 analysed_sst subset as a weekly-chunked Zarr tar, plus PyTorch loader and throughput benchmark.

.gitattributes CHANGED
@@ -57,3 +57,5 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  # Video files - compressed
  *.mp4 filter=lfs diff=lfs merge=lfs -text
  *.webm filter=lfs diff=lfs merge=lfs -text
+ pacific_sst.zarr/** filter=lfs diff=lfs merge=lfs -text
+ pacific_sst.zarr.tar filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,87 @@
# MUR SST (Pacific) — ML Benchmark Zarr

This folder contains a **machine-learning friendly Zarr subset** of the NASA/JPL GHRSST **MUR SST** product.

- **Upstream source (public, no auth)**: `s3://mur-sst/zarr`
- **Subset**:
  - **Region**: Pacific, lat \(20^\circ\text{N}\)–\(50^\circ\text{N}\), lon \(180^\circ\text{E}\)–\(240^\circ\text{E}\) (stored as 0–360)
  - **Time**: 2018-01-01 → 2019-12-30 (729 daily frames)
  - **Variable**: `analysed_sst` only (**float32, °C**)
- **Chunking for ML**: `(time, lat, lon) = (7, 256, 256)` (weekly windows)
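
A quick sanity check on the chunking (a sketch using only the shape and dtype stated above): each weekly chunk is small, so random weekly reads stay cheap.

```python
import numpy as np

# One chunk of shape (time, lat, lon) = (7, 256, 256) in float32 (4 bytes/value).
chunk_elems = 7 * 256 * 256
chunk_bytes = chunk_elems * np.dtype("float32").itemsize
print(chunk_elems, chunk_bytes / 2**20)  # 458752 elements, 1.75 MiB per chunk
```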

Why `mur-sst/zarr-v1` during extraction?

- `mur-sst/zarr` is chunked with the **entire time axis in one chunk**, making time subsetting extremely inefficient.
- `mur-sst/zarr-v1` is time-chunked and enables practical extraction. The output dataset here is the requested ML rechunk.

## Files in this dataset repo

Because Hugging Face dataset repos + Git LFS handle a **single large file** much more reliably than tens of thousands of tiny chunk files, the Zarr store is published as:

- `pacific_sst.zarr.tar` (a tar archive of the `pacific_sst.zarr/` directory)

To use it locally:

```bash
tar -xf pacific_sst.zarr.tar
```

## SST forecasting task definition

We define a next-week forecasting task:

- **Input**: 7 daily SST frames \(X_t \in \mathbb{R}^{7 \times H \times W}\)
- **Target**: next 7 daily SST frames \(Y_t \in \mathbb{R}^{7 \times H \times W}\)
- **Goal**: learn \(f_\theta(X_t) \approx Y_t\)

Windows are created from the `time` axis; you can use overlapping or non-overlapping windows (benchmark scripts default to non-overlapping).
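
The window construction can be sketched directly from the time-axis length (a minimal sketch; the stride of 7 matches the scripts' non-overlapping default):

```python
def window_starts(n_time: int, in_days: int = 7, out_days: int = 7, stride: int = 7) -> list:
    """Start indices t where frames [t, t+in_days) form X_t and
    [t+in_days, t+in_days+out_days) form Y_t, both inside the time axis."""
    n = in_days + out_days
    return list(range(0, n_time - n + 1, stride))

starts = window_starts(729)  # 729 daily frames in this subset
print(len(starts), starts[0], starts[-1])  # 103 non-overlapping pairs: 0 .. 714
```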

## Train/val/test splits

Time-contiguous splits (no leakage):

- **Train**: 2018-01-01 → 2018-12-30
- **Val**: 2018-12-31 → 2019-06-30
- **Test**: 2019-07-01 → 2019-12-30
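
The three ranges tile the full 729 daily frames with no gap or overlap; a date-only check (a sketch independent of the Zarr store):

```python
from datetime import date

def n_days(start: date, end: date) -> int:
    return (end - start).days + 1  # inclusive of both endpoints

train = n_days(date(2018, 1, 1), date(2018, 12, 30))
val = n_days(date(2018, 12, 31), date(2019, 6, 30))
test = n_days(date(2019, 7, 1), date(2019, 12, 30))
print(train, val, test, train + val + test)  # 364 182 183 729
```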

## Streaming code example

Local:

```python
import xarray as xr

ds = xr.open_zarr("pacific_sst.zarr", consolidated=True)
print(ds)
```

Remote (Hugging Face, after download):

```python
import xarray as xr

# 1) Download pacific_sst.zarr.tar from the Hub
# 2) tar -xf pacific_sst.zarr.tar
ds = xr.open_zarr("pacific_sst.zarr", consolidated=True)
print(ds)
```

## Benchmark results

Run:

```bash
tar -xf pacific_sst.zarr.tar
python bench/throughput_benchmark.py --local pacific_sst.zarr
```

Then fill in the table below (the script prints a Markdown row you can paste here):

| mode | samples/sec | MB/sec | first_batch_sec |
|---|---:|---:|---:|
| local | 0.270 | 259.374 | 6.392 |
| streaming_hf | TODO | TODO | TODO |
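The local row is internally consistent: MB/sec divided by samples/sec gives the bytes touched per (X, Y) pair, which matches the subset's grid size if one assumes MUR's 0.01° resolution (roughly 3001 × 6001 points for this box — an assumption, not stated above):

```python
# Assumed grid size for 20-50N x 180-240E at MUR's 0.01-degree resolution
# (3001 x 6001 inclusive grid points; this is an assumption, not stated above).
H, W = 3001, 6001
frames = 7 + 7  # one input week plus one target week
sample_mib = frames * H * W * 4 / 2**20
print(round(sample_mib, 1))        # ~961.8 MiB read per sample
print(round(259.374 / 0.270, 1))   # ~960.6 MiB/sample implied by the local row
```
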
bench/throughput_benchmark.py ADDED
@@ -0,0 +1,118 @@
import argparse
import time
from dataclasses import dataclass

import numpy as np
import xarray as xr


@dataclass(frozen=True)
class BenchResult:
    mode: str
    samples_per_sec: float
    mb_per_sec: float
    first_batch_sec: float


def _bench_windows(var, n_time: int, *, mode: str, n_samples: int, in_days: int, out_days: int, seed: int) -> BenchResult:
    """Shared timing loop for the local and streaming benchmarks."""
    rng = np.random.RandomState(seed)
    max_start = n_time - (in_days + out_days)
    idxs = rng.randint(0, max_start + 1, size=n_samples)

    def read_sample(i: int) -> int:
        x = np.asarray(var.isel(time=slice(i, i + in_days)).values)
        y = np.asarray(var.isel(time=slice(i + in_days, i + in_days + out_days)).values)
        # Touch the data so lazy reads are actually materialized.
        _ = float(x.mean() + y.mean())
        return x.nbytes + y.nbytes

    t0 = time.time()
    read_sample(int(idxs[0]))
    first = time.time() - t0

    t1 = time.time()
    bytes_read = sum(read_sample(int(i)) for i in idxs)
    dt = time.time() - t1

    samples_per_sec = float(len(idxs) / max(dt, 1e-9))
    mb_per_sec = float((bytes_read / (1024 * 1024)) / max(dt, 1e-9))
    return BenchResult(mode, samples_per_sec, mb_per_sec, first)


def benchmark_local(zarr_path: str, *, n_samples: int = 16, in_days: int = 7, out_days: int = 7, seed: int = 0) -> BenchResult:
    ds = xr.open_zarr(zarr_path, consolidated=True)
    return _bench_windows(
        ds['analysed_sst'], int(ds.sizes['time']), mode='local',
        n_samples=n_samples, in_days=in_days, out_days=out_days, seed=seed,
    )


def benchmark_hf(repo_id: str, *, n_samples: int = 16, in_days: int = 7, out_days: int = 7, seed: int = 0) -> BenchResult:
    """Benchmark streaming from HF using the hf:// fsspec protocol.

    Note: requires a newer huggingface_hub/fsspec stack that provides hf://.
    """
    import fsspec

    mapper = fsspec.get_mapper(f"hf://datasets/{repo_id}/pacific_sst.zarr")
    ds = xr.open_zarr(mapper, consolidated=True)
    return _bench_windows(
        ds['analysed_sst'], int(ds.sizes['time']), mode='streaming_hf',
        n_samples=n_samples, in_days=in_days, out_days=out_days, seed=seed,
    )


def main() -> None:
    p = argparse.ArgumentParser()
    p.add_argument('--local', help='Path to pacific_sst.zarr')
    p.add_argument('--hf', help='HF dataset repo_id, e.g. KokosDev/mur-sst-ml-benchmark')
    p.add_argument('--n-samples', type=int, default=16)
    p.add_argument('--seed', type=int, default=0)
    args = p.parse_args()

    results = []
    if args.local:
        results.append(benchmark_local(args.local, n_samples=args.n_samples, seed=args.seed))
    if args.hf:
        try:
            results.append(benchmark_hf(args.hf, n_samples=args.n_samples, seed=args.seed))
        except Exception as e:
            print('HF streaming benchmark failed:', repr(e))
            print('Tip: install newer huggingface_hub + fsspec with hf:// support')
    if not results:
        raise SystemExit('Provide --local and/or --hf')

    print('## Throughput benchmark')
    for r in results:
        print(f'- mode: {r.mode}')
        print(f'- samples/sec: {r.samples_per_sec:.3f}')
        print(f'- MB/sec: {r.mb_per_sec:.3f}')
        print(f'- first_batch_sec: {r.first_batch_sec:.3f}')
        print('')

    print('| mode | samples/sec | MB/sec | first_batch_sec |')
    print('|---|---:|---:|---:|')
    for r in results:
        print(f'| {r.mode} | {r.samples_per_sec:.3f} | {r.mb_per_sec:.3f} | {r.first_batch_sec:.3f} |')


if __name__ == '__main__':
    main()
build_pacific_sst.py ADDED
@@ -0,0 +1,121 @@
import argparse
import os
from typing import Dict, Optional

import numpy as np
import s3fs
import xarray as xr
import zarr


def _open_mur_zarr(source_root: str) -> xr.Dataset:
    fs = s3fs.S3FileSystem(anon=True)
    store = s3fs.S3Map(root=source_root, s3=fs, check=False)
    return xr.open_zarr(
        store,
        consolidated=True,
        decode_cf=True,
        mask_and_scale=True,
        decode_times=True,
        # Keep reading lazy (dask) using source chunking.
        chunks={},
    )


def _lon_to_0_360(ds: xr.Dataset) -> xr.Dataset:
    # Original lon is [-180, 180]. Convert to [0, 360) and reorder accordingly.
    lon360 = (ds["lon"] % 360).astype("float32")
    ds = ds.assign_coords(lon360=lon360).set_coords("lon360")
    ds = ds.swap_dims({"lon": "lon360"}).sortby("lon360")
    ds = ds.drop_vars("lon").rename({"lon360": "lon"})
    ds["lon"].attrs.update({"units": "degrees_east", "standard_name": "longitude"})
    return ds


def build_pacific_subset(
    source_root: str,
    out_path: str,
    *,
    lat_min: float = 20.0,
    lat_max: float = 50.0,
    lon_min_360: float = 180.0,
    lon_max_360: float = 240.0,
    time_start: str = "2018-01-01",
    time_end: str = "2019-12-31",
    rechunk: Optional[Dict[str, int]] = None,
    compressor_level: int = 3,
) -> None:
    ds = _open_mur_zarr(source_root)
    ds = _lon_to_0_360(ds)

    # Keep only analysed_sst, then subset.
    ds = ds[["analysed_sst"]]
    ds = ds.sel(
        lat=slice(lat_min, lat_max),
        lon=slice(lon_min_360, lon_max_360),
        time=slice(np.datetime64(time_start), np.datetime64(time_end)),
    )
    print(
        "subset dims:",
        {k: int(v) for k, v in ds.sizes.items()},
        "time:",
        str(ds["time"].values[0]),
        "->",
        str(ds["time"].values[-1]),
        flush=True,
    )

    # Convert to Celsius for ML convenience (keep variable name analysed_sst).
    ds["analysed_sst"] = (ds["analysed_sst"] - 273.15).astype("float32")
    ds["analysed_sst"].attrs.update(
        {
            "units": "celsius",
            "comment": "Converted from kelvin using analysed_sst - 273.15",
        }
    )

    if rechunk is None:
        rechunk = {"time": 7, "lat": 256, "lon": 256}
    ds = ds.chunk(rechunk)
    print("target chunks:", ds["analysed_sst"].data.chunksize, flush=True)

    compressor = zarr.Blosc(cname="zstd", clevel=int(compressor_level), shuffle=2)
    encoding = {"analysed_sst": {"compressor": compressor, "dtype": "float32"}}

    if os.path.exists(out_path):
        raise FileExistsError(f"Refusing to overwrite existing path: {out_path}")

    print(f"writing zarr -> {out_path}", flush=True)
    ds.to_zarr(out_path, mode="w", consolidated=True, encoding=encoding)
    print("done", flush=True)


def main() -> None:
    p = argparse.ArgumentParser()
    p.add_argument("--source-root", default="mur-sst/zarr-v1")
    p.add_argument("--out", default="pacific_sst.zarr")
    p.add_argument("--time-start", default="2018-01-01")
    p.add_argument("--time-end", default="2019-12-31")
    p.add_argument("--lat-min", type=float, default=20.0)
    p.add_argument("--lat-max", type=float, default=50.0)
    p.add_argument("--lon-min", type=float, default=180.0, help="Longitude min in 0..360")
    p.add_argument("--lon-max", type=float, default=240.0, help="Longitude max in 0..360")
    p.add_argument("--compressor-level", type=int, default=3)
    args = p.parse_args()

    build_pacific_subset(
        args.source_root,
        args.out,
        lat_min=args.lat_min,
        lat_max=args.lat_max,
        lon_min_360=args.lon_min,
        lon_max_360=args.lon_max,
        time_start=args.time_start,
        time_end=args.time_end,
        compressor_level=args.compressor_level,
    )


if __name__ == "__main__":
    main()
examples/pytorch_dataloader.py ADDED
@@ -0,0 +1,81 @@
import argparse
from dataclasses import dataclass
from typing import Tuple

import numpy as np
import torch
import xarray as xr
from torch.utils.data import DataLoader, Dataset


@dataclass(frozen=True)
class WindowSpec:
    in_days: int = 7
    out_days: int = 7
    stride_days: int = 7  # non-overlapping by default


class PacificSSTForecastDataset(Dataset):
    """Forecasting dataset over pacific_sst.zarr.

    Returns:
        x: (in_days, H, W) float32
        y: (out_days, H, W) float32
    """

    def __init__(self, zarr_path: str, *, split: str, window: WindowSpec = WindowSpec()) -> None:
        self.ds = xr.open_zarr(zarr_path, consolidated=True)
        self.var = self.ds["analysed_sst"]
        self.window = window

        t = self.ds["time"].values
        if t.dtype.kind != "M":
            raise ValueError("Expected datetime64 time coordinate")

        # MUR daily timestamps fall at 09:00 UTC, hence the explicit times.
        train_end = np.datetime64("2018-12-30T09:00:00")
        val_end = np.datetime64("2019-06-30T09:00:00")

        if split == "train":
            t0 = 0
            t1 = int(np.searchsorted(t, train_end, side="right")) - 1
        elif split == "val":
            t0 = int(np.searchsorted(t, train_end, side="right"))
            t1 = int(np.searchsorted(t, val_end, side="right")) - 1
        elif split == "test":
            t0 = int(np.searchsorted(t, val_end, side="right"))
            t1 = len(t) - 1
        else:
            raise ValueError("split must be one of: train, val, test")

        n = window.in_days + window.out_days
        self.start_indices = list(range(t0, t1 - n + 2, window.stride_days))

    def __len__(self) -> int:
        return len(self.start_indices)

    def __getitem__(self, idx: int) -> Tuple[torch.Tensor, torch.Tensor]:
        i = self.start_indices[idx]
        x = self.var.isel(time=slice(i, i + self.window.in_days)).values.astype(np.float32)
        y = self.var.isel(
            time=slice(i + self.window.in_days, i + self.window.in_days + self.window.out_days)
        ).values.astype(np.float32)
        return torch.from_numpy(x), torch.from_numpy(y)


def main() -> None:
    p = argparse.ArgumentParser()
    p.add_argument("--zarr", default="pacific_sst.zarr")
    p.add_argument("--split", default="train", choices=["train", "val", "test"])
    p.add_argument("--batch-size", type=int, default=1)
    p.add_argument("--num-workers", type=int, default=0)
    args = p.parse_args()

    ds = PacificSSTForecastDataset(args.zarr, split=args.split)
    dl = DataLoader(ds, batch_size=args.batch_size, shuffle=(args.split == "train"), num_workers=args.num_workers)

    x, y = next(iter(dl))
    print("x:", tuple(x.shape), x.dtype, "y:", tuple(y.shape), y.dtype)


if __name__ == "__main__":
    main()
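
As a quick sketch of what the split logic above yields (assuming the 729-frame 2018-2019 time axis, so the frame boundaries below are derived from the split dates, not read from the store), the default non-overlapping windows give 51/25/25 samples:

```python
def n_windows(t0: int, t1: int, in_days: int = 7, out_days: int = 7, stride: int = 7) -> int:
    # Mirrors the start_indices computation in PacificSSTForecastDataset above.
    n = in_days + out_days
    return len(range(t0, t1 - n + 2, stride))

# Frame-index boundaries implied by the split dates over 729 daily frames
# (train 0..363, val 364..545, test 546..728 -- an assumption derived from the dates).
print(n_windows(0, 363), n_windows(364, 545), n_windows(546, 728))  # 51 25 25
```
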
pacific_sst.zarr.tar ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:8679230067e7e59a8efa156f0b1449e014360e5787de55edeb539f82bee65dfd
size 20674529280