---
license: cc0-1.0
task_categories:
- time-series-forecasting
tags:
- climate
- precipitation
- cmorph
- noaa
- virtualizarr
- kerchunk
- zarr
- icechunk
- east-africa
- geospatial
size_categories:
- 100K<n<1M
---

# CMORPH VirtualiZarr Parquet Catalog (1998-2024)

## Dataset Overview
This repository contains a Parquet-based virtual dataset (VDS) catalog for the NOAA CMORPH (CPC MORPHing technique) global precipitation dataset, hosted on AWS S3 at `s3://noaa-cdr-precip-cmorph-pds/`.

The catalog was built with VirtualiZarr and Kerchunk to create a single-file index of 236,688 NetCDF files spanning January 1998 to October 2024, enabling cloud-native access to the entire CMORPH archive without downloading or converting the original files.
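To make the indexing idea concrete, the sketch below shows the general shape of a Kerchunk v1 reference set like the ones stored per row in this catalog. All values (array shape, codec, byte offset and length) are hypothetical, for illustration only; the real references are generated by VirtualiZarr from each NetCDF file:

```python
import json

# Illustrative only: the general shape of a Kerchunk v1 reference set.
# Inline Zarr metadata is stored as JSON strings; each chunk key maps to a
# [url, byte_offset, byte_length] triple pointing into the original NetCDF
# file on S3. All numeric values here are hypothetical.
refs = {
    "version": 1,
    "refs": {
        ".zgroup": json.dumps({"zarr_format": 2}),
        "cmorph/.zarray": json.dumps({
            "shape": [1, 1649, 4948],      # hypothetical array layout
            "chunks": [1, 1649, 4948],
            "dtype": "<f4",
            "compressor": {"id": "zlib", "level": 4},
            "fill_value": -999.0,
            "order": "C",
            "zarr_format": 2,
            "filters": None,
        }),
        # One chunk -> byte range inside the original NetCDF on S3
        "cmorph/0.0.0": [
            "s3://noaa-cdr-precip-cmorph-pds/data/30min/8km/2020/01/01/"
            "CMORPH_V1.0_ADJ_8km-30min_2020010100.nc",
            20480,       # hypothetical byte offset
            1048576,     # hypothetical byte length
        ],
    },
}

url, offset, length = refs["refs"]["cmorph/0.0.0"]
print(offset, length)
```

A reader that understands this mapping (fsspec's `reference` filesystem, used below) can fetch exactly the byte ranges it needs instead of whole files.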
| Property | Value |
|---|---|
| File | cmorph-aws-s3-1998-2024.parquet |
| Size | 223 MB (zstd compressed) |
| Rows | 236,688 (100% success) |
| Time range | 1998-01-17 to 2024-10-14 |
| Years | 27 (1998-2024) |
| Unique months | 324 |
| Temporal resolution | 30-minute (half-hourly) |
| Spatial resolution | 8 km (~0.073 deg) |
| Spatial coverage | Global |
| Variable | cmorph — precipitation rate (mm/hr) |
| Source bucket | s3://noaa-cdr-precip-cmorph-pds/ (public, no auth required) |
## Parquet Schema
Each row represents one CMORPH NetCDF file:
| Column | Type | Description |
|---|---|---|
| `s3_url` | string | Full S3 path (e.g., `s3://noaa-cdr-precip-cmorph-pds/data/30min/8km/2020/01/01/CMORPH_V1.0_ADJ_8km-30min_2020010100.nc`) |
| `filename` | string | NetCDF filename |
| `datetime` | timestamp | Parsed timestamp (half-hour slot decoded) |
| `year` | int32 | Year |
| `month` | int32 | Month |
| `day` | int32 | Day |
| `hour` | int32 | Hour (0-23) |
| `minute` | int32 | Minute (0 or 30) |
| `month_key` | string | Year-month key (e.g., `2020-01`) |
| `status` | string | Virtualization status (`success` or `error: ...`) |
| `kerchunk_refs` | string | Kerchunk JSON references (~84 KB per row): byte ranges, codecs, and array metadata for cloud-native reads |
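The time columns above can all be derived from the filename's timestamp suffix. A minimal sketch, assuming the `YYYYMMDDHH` stamp seen in the example path (the actual catalog builder may decode the half-hour slot differently):

```python
from datetime import datetime

def parse_cmorph_timestamp(filename: str) -> dict:
    """Derive the catalog's time columns from a CMORPH filename.

    Assumes the name ends in a YYYYMMDDHH stamp, as in
    CMORPH_V1.0_ADJ_8km-30min_2020010100.nc. Illustrative sketch only.
    """
    stamp = filename.rsplit("_", 1)[-1].removesuffix(".nc")
    dt = datetime.strptime(stamp, "%Y%m%d%H")
    return {
        "datetime": dt,
        "year": dt.year,
        "month": dt.month,
        "day": dt.day,
        "hour": dt.hour,
        "minute": dt.minute,
        "month_key": f"{dt.year:04d}-{dt.month:02d}",
    }

row = parse_cmorph_timestamp("CMORPH_V1.0_ADJ_8km-30min_2020010100.nc")
print(row["month_key"], row["hour"])
```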
## How It Was Created

The catalog was built by `cmorph_parquet_vds_catalog.py` using:

- **File discovery**: `fsspec` lists all `*.nc` files on S3 for the requested year range
- **Distributed virtualization**: Coiled workers run `virtualizarr.open_virtual_dataset()` on batches of 100 files, extracting Kerchunk reference metadata (byte offsets, chunk shapes, codecs) without downloading the full data
- **Streaming Parquet write**: a PyArrow `ParquetWriter` streams each completed batch to a single zstd-compressed Parquet file, keeping coordinator memory constant
```bash
# Full catalog build (requires Coiled account)
micromamba run -n aifs-etl python cmorph_parquet_vds_catalog.py catalog \
    --start-year 1998 --end-year 2024 --n-workers 10

# Lite mode (listing only, no Coiled)
micromamba run -n aifs-etl python cmorph_parquet_vds_catalog.py catalog \
    --start-year 1998 --end-year 2024 --lite

# Inspect catalog stats
micromamba run -n aifs-etl python cmorph_parquet_vds_catalog.py info \
    --catalog cmorph-aws-s3-1998-2024.parquet
```
## How to Open the Dataset

### 1. Read the Parquet catalog from Hugging Face
```python
import pandas as pd

HF_PARQUET = "hf://datasets/E4DRR/virtualizarr-stores/cmorph-aws-s3-1998-2024.parquet"

catalog = pd.read_parquet(
    HF_PARQUET,
    columns=["s3_url", "datetime", "year", "month_key", "status"],  # skip kerchunk_refs for fast loads
)
print(f"Files: {len(catalog)}")
print(f"Range: {catalog['datetime'].min()} to {catalog['datetime'].max()}")
```
### 2. Open a single file via Kerchunk refs (zero download)
```python
import json

import fsspec
import pyarrow.parquet as pq
import xarray as xr
import zarr

HF_PARQUET = "hf://datasets/E4DRR/virtualizarr-stores/cmorph-aws-s3-1998-2024.parquet"

# Read one row's kerchunk refs from Hugging Face (memory-efficient iter_batches)
with fsspec.open(HF_PARQUET, "rb") as f:
    pf = pq.ParquetFile(f)
    for batch in pf.iter_batches(batch_size=1, columns=["kerchunk_refs", "status"]):
        row = batch.to_pydict()
        if row["status"][0] == "success":
            refs = json.loads(row["kerchunk_refs"][0])
            break

# Open via fsspec reference filesystem -> zarr.storage.FsspecStore (Zarr v3 compatible)
fs = fsspec.filesystem("reference", fo=refs, remote_protocol="s3", remote_options={"anon": True})
store = zarr.storage.FsspecStore(fs, read_only=True)
ds = xr.open_dataset(store, engine="zarr", consolidated=False, zarr_format=2)
print(ds)
```
### 3. Open a regional subset with the Icechunk pipeline

The companion script `cmorph_east_africa_icechunk.py` materializes an East Africa subset (lat: -12 to 23, lon: 21 to 53) into an Icechunk versioned store on GCS, then rechunks it to "pencil" chunks (full time x 5 lat x 5 lon) for fast time-series access:
```bash
# Step 1: Create empty template store
python cmorph_east_africa_icechunk.py init \
    --catalog cmorph_vds_catalog/catalog.parquet \
    --gcs-prefix cmorph_ea_subset

# Step 2: Fill with real data via Coiled workers reading from S3
python cmorph_east_africa_icechunk.py fill \
    --catalog cmorph_vds_catalog/catalog.parquet \
    --target-gcs-prefix cmorph_ea_subset --n-workers 20

# Step 3: Rechunk to pencil chunks (Dask P2P shuffle)
python cmorph_east_africa_icechunk.py rechunk \
    --source-gcs-prefix cmorph_ea_subset \
    --target-path gs://cpc_awc/cmorph_ea_pencil --n-workers 20

# Step 4: Verify
python cmorph_east_africa_icechunk.py verify \
    --gcs-prefix cmorph_ea_pencil --store-type zarr
```
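A back-of-envelope calculation shows why pencil chunks help point time series. Only the pencil shape (full time x 5 lat x 5 lon) comes from the pipeline above; the subset grid size and the map-style chunking used for comparison are illustrative assumptions:

```python
import math

# Chunk counts touched by a single-point, full-record time series.
# Assumptions (illustrative): ~27 years of half-hourly steps, a subset grid
# of roughly 480 x 440 cells at ~8 km, and a map-style layout of
# 48 time steps x full lat x full lon before rechunking.
n_time = 27 * 365 * 48          # ~27 years of half-hourly steps

# Map-style chunks: every 48-step slab must be opened to read one point
map_chunks_touched = math.ceil(n_time / 48)

# Pencil chunks (full time x 5 lat x 5 lon): the whole series lives in 1 chunk
pencil_chunks_touched = 1

print(map_chunks_touched, pencil_chunks_touched)
```

Under these assumptions a point query goes from ~10,000 object reads to one, which is the motivation for the rechunk step.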
### 4. Filter the catalog by time range
```python
import pandas as pd

HF_PARQUET = "hf://datasets/E4DRR/virtualizarr-stores/cmorph-aws-s3-1998-2024.parquet"

# Load only lightweight columns (fast: skips 223 MB of kerchunk_refs)
df = pd.read_parquet(
    HF_PARQUET,
    columns=["s3_url", "datetime", "year", "month", "day"],
    filters=[("year", ">=", 2020), ("year", "<=", 2023)],
)
print(f"2020-2023 files: {len(df)}")
```
## Architecture
```
   NOAA S3 (public)                     Parquet Catalog                     Icechunk Store (GCS)
┌──────────────────┐  VirtualiZarr   ┌──────────────────┐  materialize   ┌──────────────────┐
│  236,688 NetCDF  │ ──────────────> │  cmorph-aws-s3-  │ ─────────────> │   EA Subset      │
│  files (8km,     │  Coiled workers │  1998-2024       │  Coiled + S3   │ (Icechunk repo)  │
│  30-min, global) │  + Kerchunk refs│  .parquet        │  direct reads  │  lat: -12..23    │
└──────────────────┘                 │  (223 MB)        │                │  lon:  21..53    │
                                     └──────────────────┘                └────────┬─────────┘
                                                                                  │ rechunk
                                                                         ┌────────▼─────────┐
                                                                         │   Pencil Zarr    │
                                                                         │  (full-time x    │
                                                                         │   5lat x 5lon)   │
                                                                         └──────────────────┘
```
## Dependencies

- Python 3.10+
- `virtualizarr`, `kerchunk`, `fsspec`, `obstore`
- `pandas`, `pyarrow`, `xarray`, `zarr`
- `icechunk` (for the East Africa materialized store)
- `coiled`, `dask.distributed` (for distributed processing)
- `pystac`, `stac-geoparquet` (for STAC integration, planned)
## Related Scripts

| Script | Purpose | Link |
|---|---|---|
| `cmorph_parquet_vds_catalog.py` | Build the Parquet VDS catalog from S3 | GitHub |
| `cmorph_east_africa_icechunk.py` | Materialize EA subset + pencil rechunk | GitHub |
## License
The CMORPH data is produced by NOAA's Climate Prediction Center and is in the public domain. The processing scripts and catalog are part of the ICPAC IGAD IBF Thresholds & Triggers project.