CMORPH VirtualiZarr Parquet Catalog (1998-2024)

Dataset Overview

This repository contains a Parquet-based virtual dataset (VDS) catalog for the NOAA CMORPH (CPC MORPHing technique) global precipitation dataset, hosted on AWS S3 at s3://noaa-cdr-precip-cmorph-pds/.

The catalog was built using VirtualiZarr and Kerchunk to create a single-file index of 236,688 NetCDF files spanning January 1998 to October 2024, enabling cloud-native access to the entire CMORPH archive without downloading or converting the original files.

Property             Value
File                 cmorph-aws-s3-1998-2024.parquet
Size                 223 MB (zstd compressed)
Rows                 236,688 (100% success)
Time range           1998-01-17 to 2024-10-14
Years                27 (1998-2024)
Unique months        324
Temporal resolution  30-minute (half-hourly)
Spatial resolution   8 km (~0.073 deg)
Spatial coverage     Global
Variable             cmorph: precipitation rate (mm/hr)
Source bucket        s3://noaa-cdr-precip-cmorph-pds/ (public, no auth required)

Parquet Schema

Each row represents one CMORPH NetCDF file:

Column         Type       Description
s3_url         string     Full S3 path (e.g., s3://noaa-cdr-precip-cmorph-pds/data/30min/8km/2020/01/01/CMORPH_V1.0_ADJ_8km-30min_2020010100.nc)
filename       string     NetCDF filename
datetime       timestamp  Parsed timestamp (half-hour slot decoded)
year           int32      Year
month          int32      Month
day            int32      Day
hour           int32      Hour (0-23)
minute         int32      Minute (0 or 30)
month_key      string     Year-month key (e.g., 2020-01)
status         string     Virtualization status (success or error: ...)
kerchunk_refs  string     Kerchunk JSON references (~84 KB per row): byte ranges, codecs, array metadata for cloud-native reads
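
A quick way to confirm this schema without pulling the full 223 MB file is to read just the Parquet footer, where the metadata lives; a minimal sketch using the same hf:// path as the examples below:

import fsspec
import pyarrow.parquet as pq

HF_PARQUET = "hf://datasets/E4DRR/virtualizarr-stores/cmorph-aws-s3-1998-2024.parquet"

# Parquet metadata sits in the file footer, so this transfers only a few KB
with fsspec.open(HF_PARQUET, "rb") as f:
    pf = pq.ParquetFile(f)
    print(pf.schema_arrow)       # column names and Arrow types
    print(pf.metadata.num_rows)  # 236,688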

How It Was Created

The catalog was built by cmorph_parquet_vds_catalog.py using:

  1. File discovery: fsspec lists all *.nc files on S3 for the requested year range
  2. Distributed virtualization: Coiled workers run virtualizarr.open_virtual_dataset() on batches of 100 files, extracting Kerchunk reference metadata (byte offsets, chunk shapes, codecs) without downloading the full data
  3. Streaming Parquet write: a PyArrow ParquetWriter streams each completed batch to a single zstd-compressed Parquet file, keeping coordinator memory constant (a minimal sketch of this pattern follows below)
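
A minimal sketch of the step-3 streaming-write pattern, assuming batches arrive from workers as pyarrow.RecordBatch objects (the generator below is a hypothetical stand-in; the real script adds the Coiled orchestration and the full 11-column schema):

import pyarrow as pa
import pyarrow.parquet as pq

# Abbreviated schema; the real catalog carries all 11 columns listed above
schema = pa.schema([("s3_url", pa.string()),
                    ("status", pa.string()),
                    ("kerchunk_refs", pa.string())])

def completed_batches():
    # Stand-in for batches arriving from Coiled workers
    yield pa.record_batch([["s3://bucket/file.nc"], ["success"], ["{}"]], schema=schema)

writer = pq.ParquetWriter("catalog.parquet", schema, compression="zstd")
for batch in completed_batches():
    writer.write_batch(batch)  # flush each batch as it lands; coordinator memory stays bounded
writer.close()

The CLI entry points: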
# Full catalog build (requires Coiled account)
micromamba run -n aifs-etl python cmorph_parquet_vds_catalog.py catalog \
    --start-year 1998 --end-year 2024 --n-workers 10

# Lite mode (listing only, no Coiled)
micromamba run -n aifs-etl python cmorph_parquet_vds_catalog.py catalog \
    --start-year 1998 --end-year 2024 --lite

# Inspect catalog stats
micromamba run -n aifs-etl python cmorph_parquet_vds_catalog.py info \
    --catalog cmorph-aws-s3-1998-2024.parquet

How to Open the Dataset

1. Read the Parquet catalog from Hugging Face

import pandas as pd

HF_PARQUET = "hf://datasets/E4DRR/virtualizarr-stores/cmorph-aws-s3-1998-2024.parquet"

catalog = pd.read_parquet(
    HF_PARQUET,
    columns=["s3_url", "datetime", "year", "month_key", "status"],  # skip kerchunk_refs for fast loads
)
print(f"Files: {len(catalog)}")
print(f"Range: {catalog['datetime'].min()} to {catalog['datetime'].max()}")

2. Open a single file via Kerchunk refs (zero download)

import json
import fsspec
import zarr
import xarray as xr
import pyarrow.parquet as pq

HF_PARQUET = "hf://datasets/E4DRR/virtualizarr-stores/cmorph-aws-s3-1998-2024.parquet"

# Read one row's kerchunk refs from Hugging Face (memory-efficient iter_batches)
with fsspec.open(HF_PARQUET, "rb") as f:
    pf = pq.ParquetFile(f)
    for batch in pf.iter_batches(batch_size=1, columns=["kerchunk_refs", "status"]):
        row = batch.to_pydict()
        if row["status"][0] == "success":
            refs = json.loads(row["kerchunk_refs"][0])
            break

# Open via fsspec reference filesystem -> zarr.storage.FsspecStore (Zarr v3 compatible)
fs = fsspec.filesystem("reference", fo=refs, remote_protocol="s3", remote_options={"anon": True})
store = zarr.storage.FsspecStore(fs, read_only=True)
ds = xr.open_dataset(store, engine="zarr", consolidated=False, zarr_format=2)
print(ds)
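
Reading values from ds then issues anonymous byte-range GETs against the NOAA bucket, fetching only the chunks you touch. A small illustrative read (the cmorph variable comes from the schema above; the time dimension name is an assumption about the file layout):

precip = ds["cmorph"]                     # precipitation rate (mm/hr)
print(float(precip.isel(time=0).mean()))  # assumed 'time' dim; pulls only the needed chunks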

3. Open a regional subset with the Icechunk pipeline

The companion script cmorph_east_africa_icechunk.py materializes an East Africa subset (lat: -12 to 23, lon: 21 to 53) into an Icechunk versioned store on GCS, then rechunks to "pencil" chunks (full time x 5 lat x 5 lon) for fast time-series access:

# Step 1: Create empty template store
python cmorph_east_africa_icechunk.py init \
    --catalog cmorph_vds_catalog/catalog.parquet \
    --gcs-prefix cmorph_ea_subset

# Step 2: Fill with real data via Coiled workers reading from S3
python cmorph_east_africa_icechunk.py fill \
    --catalog cmorph_vds_catalog/catalog.parquet \
    --target-gcs-prefix cmorph_ea_subset --n-workers 20

# Step 3: Rechunk to pencil chunks (Dask P2P shuffle)
python cmorph_east_africa_icechunk.py rechunk \
    --source-gcs-prefix cmorph_ea_subset \
    --target-path gs://cpc_awc/cmorph_ea_pencil --n-workers 20

# Step 4: Verify
python cmorph_east_africa_icechunk.py verify \
    --gcs-prefix cmorph_ea_pencil --store-type zarr
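
The payoff of pencil chunks: one chunk spans the entire time axis for a small 5 x 5 spatial tile, so a 27-year single-pixel series touches only a handful of chunks. A hedged sketch of such a read, assuming the step-3 target path on GCS and configured gcsfs credentials:

import xarray as xr

# Pencil layout (full time x 5 lat x 5 lon) makes point time series cheap
ds = xr.open_zarr("gs://cpc_awc/cmorph_ea_pencil")  # path from step 3 above
series = ds["cmorph"].sel(lat=0.0, lon=37.0, method="nearest")
print(series.isel(time=slice(0, 48)).load())        # first day at one pixel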

4. Filter the catalog by time range

import pandas as pd

HF_PARQUET = "hf://datasets/E4DRR/virtualizarr-stores/cmorph-aws-s3-1998-2024.parquet"

# Load only lightweight columns (fast: skips 223 MB of kerchunk_refs)
df = pd.read_parquet(
    HF_PARQUET,
    columns=["s3_url", "datetime", "year", "month", "day"],
    filters=[("year", ">=", 2020), ("year", "<=", 2023)],
)
print(f"2020-2023 files: {len(df)}")

Architecture

NOAA S3 (public)                        Parquet Catalog                    Icechunk Store (GCS)
┌──────────────────┐   VirtualiZarr   ┌──────────────────┐   materialize   ┌──────────────────┐
│  236,688 NetCDF  │ ───────────────> │  cmorph-aws-s3-  │ ──────────────> │  EA Subset       │
│  files (8km,     │  Coiled workers  │  1998-2024       │   Coiled + S3   │  (Icechunk repo) │
│  30-min, global) │  + Kerchunk refs │  .parquet        │   direct reads  │  lat: -12..23    │
└──────────────────┘                  │  (223 MB)        │                 │  lon: 21..53     │
                                      └──────────────────┘                 └────────┬─────────┘
                                                                                    │ rechunk
                                                                           ┌────────▼─────────┐
                                                                           │  Pencil Zarr     │
                                                                           │  (full-time x    │
                                                                           │   5lat x 5lon)   │
                                                                           └──────────────────┘

Dependencies

  • Python 3.10+
  • virtualizarr, kerchunk, fsspec, obstore
  • pandas, pyarrow, xarray, zarr
  • icechunk (for the East Africa materialized store)
  • coiled, dask.distributed (for distributed processing)
  • pystac, stac-geoparquet (for STAC integration β€” future)

Related Scripts

Script                          Purpose                                  Link
cmorph_parquet_vds_catalog.py   Build the Parquet VDS catalog from S3    GitHub
cmorph_east_africa_icechunk.py  Materialize EA subset + pencil rechunk   GitHub

License

The CMORPH data is produced by NOAA's Climate Prediction Center and is in the public domain. The processing scripts and catalog are part of the ICPAC IGAD IBF Thresholds & Triggers project.
