# CMORPH VirtualiZarr Parquet Catalog (1998-2024)

## Dataset Overview

This repository contains a **Parquet-based virtual dataset (VDS) catalog** for the [NOAA CMORPH](https://www.ncei.noaa.gov/products/climate-data-records/precipitation-cmorph) (CPC MORPHing technique) global precipitation dataset, hosted on AWS S3 at `s3://noaa-cdr-precip-cmorph-pds/`.

The catalog was built using [VirtualiZarr](https://github.com/zarr-developers/VirtualiZarr) and [Kerchunk](https://github.com/fsspec/kerchunk) to create a single-file index of **236,688 NetCDF files** spanning **January 1998 to October 2024**, enabling cloud-native access to the entire CMORPH archive without downloading or converting the original files.

| Property | Value |
|---|---|
| **File** | `cmorph-aws-s3-1998-2024.parquet` |
| **Size** | 223 MB (zstd compressed) |
| **Rows** | 236,688 (100% success) |
| **Time range** | 1998-01-17 to 2024-10-14 |
| **Years** | 27 (1998-2024) |
| **Unique months** | 324 |
| **Temporal resolution** | 30-minute (half-hourly) |
| **Spatial resolution** | 8 km (~0.073 deg) |
| **Spatial coverage** | Global |
| **Variable** | `cmorph` — precipitation rate (mm/hr) |
| **Source bucket** | `s3://noaa-cdr-precip-cmorph-pds/` (public, no auth required) |

### Parquet Schema

Each row represents one CMORPH NetCDF file:

| Column | Type | Description |
|---|---|---|
| `s3_url` | string | Full S3 path (e.g., `s3://noaa-cdr-precip-cmorph-pds/data/30min/8km/2020/01/01/CMORPH_V1.0_ADJ_8km-30min_2020010100.nc`) |
| `filename` | string | NetCDF filename |
| `datetime` | timestamp | Parsed timestamp (half-hour slot decoded) |
| `year` | int32 | Year |
| `month` | int32 | Month |
| `day` | int32 | Day |
| `hour` | int32 | Hour (0-23) |
| `minute` | int32 | Minute (0 or 30) |
| `month_key` | string | Year-month key (e.g., `2020-01`) |
| `status` | string | Virtualization status (`success` or `error: ...`) |
| `kerchunk_refs` | string | Kerchunk JSON references (~84 KB per row) — byte ranges, codecs, array metadata for cloud-native reads |

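A quick way to confirm this layout without touching the heavy `kerchunk_refs` column is to read only the Parquet metadata (a minimal sketch using pyarrow):

```python
import pyarrow.parquet as pq

pf = pq.ParquetFile("cmorph-aws-s3-1998-2024.parquet")
print(pf.schema_arrow)       # column names and types, matching the table above
print(pf.metadata.num_rows)  # 236688
```
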
## How It Was Created

The catalog was built by [`cmorph_parquet_vds_catalog.py`](https://github.com/icpac-igad/ibf-thresholds-triggers/blob/xarray-method/thresholds/CMORPH/cmorph_parquet_vds_catalog.py) using:

1. **File discovery** — `fsspec` lists all `*.nc` files on S3 for the requested year range
2. **Distributed virtualization** — [Coiled](https://coiled.io/) workers run `virtualizarr.open_virtual_dataset()` on batches of 100 files, extracting Kerchunk reference metadata (byte offsets, chunk shapes, codecs) without downloading the full data (see the sketch below)
3. **Streaming Parquet write** — a PyArrow `ParquetWriter` streams each completed batch to a single zstd-compressed Parquet file, keeping coordinator memory constant

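For illustration, the per-file virtualization step looks roughly like this (a sketch using the VirtualiZarr 1.x API; newer releases use a parser-based interface. The example file is the one from the schema table above):

```python
from virtualizarr import open_virtual_dataset

# Read only the NetCDF/HDF5 metadata from S3; the data payload stays in place.
url = (
    "s3://noaa-cdr-precip-cmorph-pds/data/30min/8km/"
    "2020/01/01/CMORPH_V1.0_ADJ_8km-30min_2020010100.nc"
)
vds = open_virtual_dataset(
    url,
    indexes={},  # keep coordinates virtual instead of loading them
    reader_options={"storage_options": {"anon": True}},  # public bucket
)
refs = vds.virtualize.to_kerchunk(format="dict")  # the JSON stored in kerchunk_refs
```
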
```bash
# Full catalog build (requires Coiled account)
micromamba run -n aifs-etl python cmorph_parquet_vds_catalog.py catalog \
    --start-year 1998 --end-year 2024 --n-workers 10

# Lite mode (listing only, no Coiled)
micromamba run -n aifs-etl python cmorph_parquet_vds_catalog.py catalog \
    --start-year 1998 --end-year 2024 --lite

# Inspect catalog stats
micromamba run -n aifs-etl python cmorph_parquet_vds_catalog.py info \
    --catalog cmorph-aws-s3-1998-2024.parquet
```

## How to Open the Dataset

### 1. Read the Parquet catalog

```python
import pandas as pd

catalog = pd.read_parquet(
    "cmorph-aws-s3-1998-2024.parquet",
    columns=["s3_url", "datetime", "year", "month_key", "status"],  # skip kerchunk_refs for fast loads
)
print(f"Files: {len(catalog)}")
print(f"Range: {catalog['datetime'].min()} to {catalog['datetime'].max()}")
```

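The `month_key` column makes coverage checks one-liners. A short follow-up on the `catalog` frame above (a complete 30-day month holds 48 half-hourly files x 30 days = 1,440 rows):

```python
# Count files per year-month; 324 unique months are expected (see the table above)
per_month = catalog.groupby("month_key").size()
print(f"Months indexed: {per_month.size}")
print(per_month.sort_values().head())  # sparsest months first, exposing any gaps
```
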
### 2. Open a single file via Kerchunk refs (zero download)

```python
import json
import fsspec
import xarray as xr
import pyarrow.parquet as pq

# Read one row's kerchunk refs (memory-efficient iter_batches)
pf = pq.ParquetFile("cmorph-aws-s3-1998-2024.parquet")
for batch in pf.iter_batches(batch_size=1, columns=["kerchunk_refs", "status"]):
    row = batch.to_pydict()
    if row["status"][0] == "success":
        refs = json.loads(row["kerchunk_refs"][0])
        break

# Open via fsspec reference filesystem (reads bytes from S3 on demand)
mapper = fsspec.filesystem("reference", fo=refs).get_mapper("")
ds = xr.open_dataset(mapper, engine="zarr", consolidated=False)
print(ds)
```

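To open many files as one time-concatenated dataset, the per-row refs can be merged with Kerchunk's `MultiZarrToZarr`. This is a sketch, not part of the original scripts; it assumes the coordinate variables are named `lat`/`lon` and uses Kerchunk's `cf:time` selector to decode the concat dimension:

```python
import json
import fsspec
import pandas as pd
import xarray as xr
from kerchunk.combine import MultiZarrToZarr

# Pull one day's refs from the catalog (48 half-hourly files)
day = pd.read_parquet(
    "cmorph-aws-s3-1998-2024.parquet",
    columns=["kerchunk_refs", "status"],
    filters=[("year", "==", 2020), ("month", "==", 1), ("day", "==", 1)],
)
refs_list = [json.loads(r) for r in day.loc[day["status"] == "success", "kerchunk_refs"]]

# Concatenate along time, decoding the CF time coordinate from each file
mzz = MultiZarrToZarr(
    refs_list,
    concat_dims=["time"],
    identical_dims=["lat", "lon"],  # assumed coordinate names
    coo_map={"time": "cf:time"},
)
combined = mzz.translate()

mapper = fsspec.filesystem("reference", fo=combined).get_mapper("")
ds = xr.open_dataset(mapper, engine="zarr", consolidated=False)  # one 48-step time axis
```
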
### 3. Open a regional subset with the Icechunk pipeline

The companion script [`cmorph_east_africa_icechunk.py`](https://github.com/icpac-igad/ibf-thresholds-triggers/blob/xarray-method/thresholds/CMORPH/cmorph_east_africa_icechunk.py) materializes an East Africa subset (lat: -12 to 23, lon: 21 to 53) into an [Icechunk](https://github.com/earth-mover/icechunk) versioned store on GCS, then rechunks to "pencil" chunks (full time x 5 lat x 5 lon) for fast time-series access:

```bash
# Step 1: Create empty template store
python cmorph_east_africa_icechunk.py init \
    --catalog cmorph_vds_catalog/catalog.parquet \
    --gcs-prefix cmorph_ea_subset

# Step 2: Fill with real data via Coiled workers reading from S3
python cmorph_east_africa_icechunk.py fill \
    --catalog cmorph_vds_catalog/catalog.parquet \
    --target-gcs-prefix cmorph_ea_subset --n-workers 20

# Step 3: Rechunk to pencil chunks (Dask P2P shuffle)
python cmorph_east_africa_icechunk.py rechunk \
    --source-gcs-prefix cmorph_ea_subset \
    --target-path gs://cpc_awc/cmorph_ea_pencil --n-workers 20

# Step 4: Verify
python cmorph_east_africa_icechunk.py verify \
    --gcs-prefix cmorph_ea_pencil --store-type zarr
```

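Once the pencil store exists, a single pixel's 27-year series lives in one chunk column, so point extraction is cheap. A minimal read sketch (assumes `gcsfs` is installed, the bucket is readable, and the coordinates are named `lat`/`lon`):

```python
import xarray as xr

# Open the pencil-chunked Zarr store written in step 3 above
ds = xr.open_zarr("gs://cpc_awc/cmorph_ea_pencil")

# Full time series at one pixel: touches a single (time x 5 x 5) chunk column
point = ds["cmorph"].sel(lat=0.0, lon=37.0, method="nearest")
print(point.mean().compute())  # mean precipitation rate (mm/hr) at that pixel
```
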
### 4. Filter the catalog by time range

```python
import pandas as pd

# Load only lightweight columns (fast — skips 223 MB of kerchunk_refs)
df = pd.read_parquet(
    "cmorph-aws-s3-1998-2024.parquet",
    columns=["s3_url", "datetime", "year", "month", "day"],
    filters=[("year", ">=", 2020), ("year", "<=", 2023)],
)
print(f"2020-2023 files: {len(df)}")
```

## Architecture

```
NOAA S3 (public)                         Parquet Catalog                       Icechunk Store (GCS)
┌──────────────────┐  VirtualiZarr      ┌──────────────────┐  materialize     ┌──────────────────┐
│ 236,688 NetCDF   │ ──────────────────>│ cmorph-aws-s3-   │ ───────────────> │ EA Subset        │
│ files (8km,      │  Coiled workers    │ 1998-2024        │  Coiled + S3     │ (Icechunk repo)  │
│ 30-min, global)  │  + Kerchunk refs   │ .parquet         │  direct reads    │ lat: -12..23     │
└──────────────────┘                    │ (223 MB)         │                  │ lon: 21..53      │
                                        └──────────────────┘                  └────────┬─────────┘
                                                                                       │ rechunk
                                                                              ┌────────▼─────────┐
                                                                              │ Pencil Zarr      │
                                                                              │ (full-time x     │
                                                                              │  5lat x 5lon)    │
                                                                              └──────────────────┘
```

## Dependencies

- Python 3.10+
- `virtualizarr`, `kerchunk`, `fsspec`, `obstore`
- `pandas`, `pyarrow`, `xarray`, `zarr`
- `icechunk` (for the East Africa materialized store)
- `coiled`, `dask.distributed` (for distributed processing)
- `pystac`, `stac-geoparquet` (for future STAC integration)

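For reference, one way to assemble such an environment (a sketch; the `aifs-etl` name matches the commands above, and exact conda-forge package availability may vary):

```bash
# Hypothetical one-shot environment create; adjust versions/channels as needed
micromamba create -n aifs-etl -c conda-forge \
    python=3.11 virtualizarr kerchunk fsspec s3fs gcsfs obstore \
    pandas pyarrow xarray zarr icechunk coiled dask distributed
```
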
## Related Scripts

| Script | Purpose | Link |
|---|---|---|
| `cmorph_parquet_vds_catalog.py` | Build the Parquet VDS catalog from S3 | [GitHub](https://github.com/icpac-igad/ibf-thresholds-triggers/blob/xarray-method/thresholds/CMORPH/cmorph_parquet_vds_catalog.py) |
| `cmorph_east_africa_icechunk.py` | Materialize EA subset + pencil rechunk | [GitHub](https://github.com/icpac-igad/ibf-thresholds-triggers/blob/xarray-method/thresholds/CMORPH/cmorph_east_africa_icechunk.py) |

## License

The CMORPH data is produced by NOAA's Climate Prediction Center and is in the public domain. The processing scripts and catalog are part of the [ICPAC IGAD IBF Thresholds & Triggers](https://github.com/icpac-igad/ibf-thresholds-triggers) project.