# MUR SST ML Benchmark (Pacific, Zarr)

A machine-learning-friendly Zarr subset of NASA/JPL GHRSST MUR SST.
- Upstream source (public, no auth): `s3://mur-sst/zarr`
- Subset:
  - Region: Pacific (20–50°N, 180–240°E); longitude is stored as 0–360°E
  - Time: 2018-01-01 → 2019-12-30 (729 daily frames; upstream coverage for this slice ends on 2019-12-30)
  - Variable: `analysed_sst` only (float32, °C)
  - Chunking for ML: `(time, lat, lon) = (7, 256, 256)` (weekly windows)
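As a sanity check on the chunking above, each `(7, 256, 256)` float32 chunk is about 1.75 MiB, and 729 daily frames give 104 full weekly chunks plus one leftover frame along time (an illustrative arithmetic sketch, not part of the published code):

```python
# Size of one ML chunk: (time, lat, lon) = (7, 256, 256) float32 values.
chunk_elems = 7 * 256 * 256
chunk_bytes = chunk_elems * 4  # float32 = 4 bytes per value

print(f"elements per chunk: {chunk_elems}")
print(f"bytes per chunk:    {chunk_bytes} (~{chunk_bytes / 2**20:.2f} MiB)")

# 729 daily frames split into weekly (7-frame) chunks along the time axis.
full_chunks, remainder = divmod(729, 7)
print(f"full weekly chunks: {full_chunks}, leftover frames: {remainder}")
```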
## Why `mur-sst/zarr-v1` during extraction?

`mur-sst/zarr` is chunked with the entire time axis in one chunk, making time subsetting extremely inefficient. `mur-sst/zarr-v1` is time-chunked and enables practical extraction. The output dataset here is the requested ML rechunk.
## Notes (Hub viewer)

- The Dataset Viewer is expected to be unavailable because this repo contains a tar archive of a Zarr store (not a datasets-native format with named splits).
## Files in this dataset repo

Because Hugging Face dataset repos + Git LFS handle a single large file much more reliably than tens of thousands of tiny chunk files, the Zarr store is published as:

- `pacific_sst.zarr.tar` (a tar archive of the `pacific_sst.zarr/` directory)

To use it locally:

```bash
tar -xf pacific_sst.zarr.tar
```
## SST forecasting task definition

We define a next-week forecasting task:

- Input: 7 daily SST frames, shape `(7, H, W)`
- Target: the next 7 daily SST frames, shape `(7, H, W)`
- Goal: learn a function that predicts the next week from the previous week

Windows are created from the time axis; you can use overlapping or non-overlapping windows (the benchmark scripts default to non-overlapping).
## Train/val/test splits

Time-contiguous splits (no leakage):

- Train: 2018-01-01 → 2018-12-30
- Val: 2018-12-31 → 2019-06-30
- Test: 2019-07-01 → 2019-12-30
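These inclusive date ranges partition all 729 daily frames with no overlap, which a few lines of stdlib date arithmetic confirm (a sketch for checking the split boundaries, not repo code):

```python
from datetime import date

def n_days(start: date, end: date) -> int:
    """Number of daily frames in an inclusive date range."""
    return (end - start).days + 1

train = n_days(date(2018, 1, 1), date(2018, 12, 30))
val   = n_days(date(2018, 12, 31), date(2019, 6, 30))
test  = n_days(date(2019, 7, 1), date(2019, 12, 30))

print(train, val, test, train + val + test)  # 364 182 183 729
```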
## Streaming code example

Local:

```python
import xarray as xr

ds = xr.open_zarr("pacific_sst.zarr", consolidated=True)
print(ds)
```

Remote (Hugging Face, after download):

```python
import xarray as xr

# 1) Download pacific_sst.zarr.tar from the Hub
# 2) tar -xf pacific_sst.zarr.tar
ds = xr.open_zarr("pacific_sst.zarr", consolidated=True)
print(ds)
```
## Benchmark results

Run:

```bash
tar -xf pacific_sst.zarr.tar
python bench/throughput_benchmark.py --local pacific_sst.zarr --s3-root mur-sst/zarr-v1
```

Measured on this machine (see `bench/throughput_benchmark.py` for details):

| mode | samples/sec | MB/sec | first_batch_sec |
|---|---|---|---|
| local | 0.366 | 351.922 | 3.598 |
| streaming_s3 | 0.109 | 104.646 | 9.505 |