# 01-emit-dataset canary
A compact, ML-split-ready slice of the 01-emit-dataset pipeline: 5 EMIT scenes already processed to ortho `rgb`/`enh`/`mask_bin` arrays, plus 512×512 plume/no-plume crops with masks.

The canary provides the input side for figure regeneration and downstream-pipeline testing without NASA Earthdata credentials. The full corpus (raw L1B/L2B, multi-scene catalog, train manifest) lives elsewhere.
## Scenes
| split | n | timestamps |
|---|---|---|
| test | 1 | 20220814T051412 |
| train | 3 | 20220810T064957, 20220811T042630, 20250922T204933 |
| val | 1 | 20220810T065132 |
Each scene is stored under `scenes/<timestamp>/` with three files:

- `rgb.npy` — float32, ortho-projected RGB rendering (3 channels)
- `enh.npy` — float32, EMIT L2B CH4ENH (methane enhancement, ppm·m)
- `mask_bin.npy` — uint8, binary plume mask derived from L2B CH4PLM
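A minimal sketch of consuming a scene's arrays. On the real files you would use `np.load("scenes/<timestamp>/rgb.npy")` and so on; here synthetic stand-ins with the documented dtypes keep the sketch runnable, and the HWC channel order is an assumption, not stated by the card.

```python
import numpy as np

# Synthetic stand-ins for one scene's arrays (dtypes per the list above).
h, w = 64, 64
rgb = np.random.rand(h, w, 3).astype(np.float32)   # rgb.npy: ortho RGB, 3 channels
enh = np.random.rand(h, w).astype(np.float32)      # enh.npy: CH4 enhancement, ppm·m
mask_bin = (enh > 0.9).astype(np.uint8)            # mask_bin.npy: binary plume mask

# Stack R,G,B,enh into the 4-channel layout the crops use (HWC order assumed).
x = np.concatenate([rgb, enh[..., None]], axis=-1)
assert x.shape == (h, w, 4) and x.dtype == np.float32
```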
## Crops
`crops/{train,val}/{crops,masks}/*.npy` — 512×512 patches, 4-channel (R,G,B,enh), plus single-channel uint8 masks. Counts: pos=4, neg_hard=4, neg_easy=4. Augmentation = none, strict_ok = true (originals only). Metadata in `crops/metadata.jsonl`.
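Since `crops/metadata.jsonl` is JSON Lines (one JSON record per crop), it can be scanned with the standard library alone. The field names below (`path`, `kind`) are illustrative assumptions, not guaranteed to match the real records; an in-memory sample stands in for the file.

```python
import io
import json

# Toy stand-in for crops/metadata.jsonl; field names are hypothetical.
sample = io.StringIO(
    '{"path": "crops/train/crops/000.npy", "kind": "pos"}\n'
    '{"path": "crops/train/crops/001.npy", "kind": "neg_hard"}\n'
    '{"path": "crops/val/crops/002.npy", "kind": "neg_easy"}\n'
)
records = [json.loads(line) for line in sample]

# Tally crops per kind (pos / neg_hard / neg_easy).
counts = {}
for rec in records:
    counts[rec["kind"]] = counts.get(rec["kind"], 0) + 1
```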
## Catalog
`catalog.json` — subset of the full pipeline catalog with the original EMIT product filenames per scene (RAD / OBS / CH4ENH / CH4PLM). Useful for citing the source granules; not needed to run the canary downstream pipeline.
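A sketch of inspecting `catalog.json`. The nesting assumed here (timestamp → product type → filename) follows the description above but is not guaranteed; the filenames are placeholders, not real granule names.

```python
import json

# Hypothetical catalog.json shape: {timestamp: {product_type: filename}}.
catalog_text = """
{
  "20220814T051412": {
    "RAD": "placeholder_rad_granule.nc",
    "OBS": "placeholder_obs_granule.nc",
    "CH4ENH": "placeholder_ch4enh_granule.tif",
    "CH4PLM": "placeholder_ch4plm_granule.tif"
  }
}
"""
catalog = json.loads(catalog_text)

# List which EMIT product types are recorded for each scene.
products = {scene: sorted(entry) for scene, entry in catalog.items()}
```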
## Usage
```python
from huggingface_hub import snapshot_download

local = snapshot_download(
    repo_id="SamTr7/01-emit-dataset-canary",
    repo_type="dataset",
    local_dir="data/canary",
)
```
Or via the project script: `python scripts/setup_canary.py`.
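After downloading, the per-scene arrays can be enumerated as below. A throwaway directory with one fabricated scene stands in for `data/canary`, so the sketch runs without network access or credentials; on the real snapshot, point `root` at the download location.

```python
import tempfile
from pathlib import Path

import numpy as np

# Stand-in for data/canary: build one scene in a temp dir (layout per the card).
root = Path(tempfile.mkdtemp())
scene = root / "scenes" / "20220814T051412"
scene.mkdir(parents=True)
for name in ("rgb", "enh", "mask_bin"):
    np.save(scene / f"{name}.npy", np.zeros((4, 4), dtype=np.float32))

# Enumerate every scene directory and the .npy files it contains.
found = {p.stem for d in (root / "scenes").iterdir() for p in d.glob("*.npy")}
assert found == {"rgb", "enh", "mask_bin"}
```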
## Source
Built from `01-emit-dataset` processed-ortho outputs. EMIT data is courtesy of NASA/JPL under the EMIT Open Data License.