E4DRR committed on
Commit 54bb570 · verified · 1 Parent(s): b8df6ba

Upload README.md with huggingface_hub

Files changed (1): README.md (+58 −0)
README.md CHANGED
@@ -328,6 +328,64 @@ ea = ds["tp"].sel(step=24, latitude=slice(25, -14),
 
| 2026 | Jan--Feb | 51 | ~7,344 | ~0.8 GB |
| **Total** | | **720** | **144,228** | **~17.8 GB** |

## Catalog / Index

A lightweight **catalog.parquet** (~1.8 MB) at the repo root indexes all 150,246 parquet files across the dataset. Use it to discover available dates, runs, and members without listing the full repo tree.

**Download**: [`catalog.parquet`](catalog.parquet)

| Column | Example | Description |
|--------|---------|-------------|
| `year` | `2024` | Forecast year |
| `month` | `03` | Forecast month |
| `date` | `20240301` | Forecast date (YYYYMMDD) |
| `run` | `00z` | Run hour |
| `member` | `control` | Ensemble member name |
| `filename` | `2024030100z-control.parquet` | Parquet filename |
| `hf_path` | `run_par_ecmwf/2024/03/20240301/00z/...` | Full path in this repo |
| `size_bytes` | `107520` | File size in bytes |
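As a sketch of how these columns fit together: the `filename` and `hf_path` values can be rebuilt from the date/run/member columns. The layout `run_par_ecmwf/<year>/<month>/<date>/<run>/<filename>` is inferred from the `hf_path` example in the table above (and the second member name below is hypothetical), so treat this as an illustration of the convention, not a guarantee:

```python
import pandas as pd

# Tiny stand-in for catalog.parquet rows (values taken from the schema examples).
catalog = pd.DataFrame([
    {"year": "2024", "month": "03", "date": "20240301", "run": "00z", "member": "control"},
    {"year": "2024", "month": "03", "date": "20240301", "run": "00z", "member": "ens01"},  # hypothetical member
])

# Rebuild the per-file name and repo path from the columns.
catalog["filename"] = catalog["date"] + catalog["run"] + "-" + catalog["member"] + ".parquet"
catalog["hf_path"] = (
    "run_par_ecmwf/" + catalog["year"] + "/" + catalog["month"] + "/"
    + catalog["date"] + "/" + catalog["run"] + "/" + catalog["filename"]
)

print(catalog.loc[0, "filename"])  # 2024030100z-control.parquet
print(catalog.loc[0, "hf_path"])   # run_par_ecmwf/2024/03/20240301/00z/2024030100z-control.parquet
```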
### Quick Start with the Catalog

```python
import pandas as pd
from huggingface_hub import hf_hub_download

# 1. Load the catalog (~1.8 MB, indexes all 150k+ files)
catalog_path = hf_hub_download(
    repo_id="E4DRR/gik-ecmwf-par", repo_type="dataset",
    filename="catalog.parquet",
)
catalog = pd.read_parquet(catalog_path)

# 2. Explore what's available
print(catalog.groupby(["year", "month"]).size())  # files per month
print(catalog["run"].unique())                    # ['00z','06z','12z','18z']
print(catalog["member"].nunique())                # 51

# 3. Filter for a specific date + run
subset = catalog[(catalog["date"] == "20250101") & (catalog["run"] == "00z")]
print(subset[["member", "filename", "size_bytes"]])

# 4. Download a specific parquet using its hf_path
row = subset.iloc[0]
parquet_path = hf_hub_download(
    repo_id="E4DRR/gik-ecmwf-par", repo_type="dataset",
    filename=row["hf_path"],
)
```
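A common follow-up to the filtering step above is selecting the most recent forecast in the catalog. Because `date` is zero-padded `YYYYMMDD` and the run hours (`00z`/`06z`/`12z`/`18z`) are zero-padded too, both sort correctly as plain strings. A minimal sketch on a synthetic three-row catalog (real code would reuse the `catalog` DataFrame loaded above):

```python
import pandas as pd

# Synthetic catalog rows for illustration; the real catalog has 150k+ rows.
catalog = pd.DataFrame({
    "date":   ["20250101", "20250102", "20250102"],
    "run":    ["00z", "00z", "12z"],
    "member": ["control", "control", "control"],
})

# Latest date is a plain string max (zero-padded YYYYMMDD sorts correctly).
latest_date = catalog["date"].max()

# Within that date, the latest run is the lexicographic max as well.
latest = catalog[catalog["date"] == latest_date].sort_values("run").iloc[-1]

print(latest_date, latest["run"])  # 20250102 12z
```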
### Coverage Summary (from catalog)

| Year | Months | Dates | Files | Total Size |
|------|--------|-------|-------|------------|
| 2024 | Mar--Dec | 306 | ~62,424 | ~6.5 GB |
| 2025 | Jan--Dec | 365 | ~75,504 | ~7.8 GB |
| 2026 | Jan--Mar 7 | 66 | ~12,318 | ~1.3 GB |
| **Total** | | **737** | **150,246** | **~18.5 GB** |

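The quantities in this table can be reproduced from the catalog itself with a single groupby over `year`. A sketch on synthetic rows (the real numbers come from `catalog.parquet`):

```python
import pandas as pd

# Synthetic rows standing in for catalog.parquet entries.
catalog = pd.DataFrame({
    "year":       ["2024", "2024", "2025"],
    "date":       ["20240301", "20240302", "20250101"],
    "size_bytes": [107520, 107520, 107520],
})

# Per-year distinct dates, file counts, and total bytes -- the same
# quantities summarized in the coverage table.
summary = catalog.groupby("year").agg(
    dates=("date", "nunique"),
    files=("date", "size"),
    total_bytes=("size_bytes", "sum"),
)
print(summary)
```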
## How It Works

The **Grib-Index-Kerchunk (GIK)** method applies the same principle as video streaming to weather data: