Update README.md
## Dataset Summary

- Generation code and pipeline: https://github.com/Anfera/HHDC-Creator (HHDC-Creator repo).
- 3-D photon-count waveforms (Hyperheight data cubes) built from NEON discrete-return LiDAR using the HHDC pipeline (`hhdc/cube_generator.py`).
- Each cube stores a high-resolution canopy volume (default: 0.5 m vertical bins over a 64 m height range, footprints every 2 m) across a 96 m × 96 m tile. In the HHDC-Creator pipeline the exact settings are recorded per sample in metadata, but this HF dataset exposes only the processed cubes and filenames.
- Inputs for learning are simulated observations from the physics-based forward imaging model (`hhdc/forward_model.py`), which emulates the Concurrent Artificially-intelligent Spectrometry and Adaptive Lidar System (CASALS) by applying Gaussian beam aggregation, distance-based photon loss, and mixed Poisson + Gaussian noise to downsample and perturb the cube.
- Targets are the clean, high-resolution cubes. The pairing supports denoising and spatial super-resolution; the recommended settings are 10 m diameter footprints sampled on a 3 m × 6 m grid (along/across swath), and users can adjust these parameters as needed.
- Robust reconstruction under realistic sensor noise simulated by the forward model.
## Dataset Structure

### Storage and splits

- **Format on the Hub:** Apache Arrow / Parquet, managed by 🤗 Datasets.
- **Access:** via `load_dataset("anfera236/HHDC", split=...)`.
- **Splits:** `train`, `validation`, `test` (see `dataset_info` for exact sizes).
### Per-sample fields

Each sample in this Hugging Face dataset contains:

- **`cube`** — `float32`, shape `[128, 48, 48]`
  High-resolution Hyperheight data cube (channel-first: `[bins, H, W]`), derived from NEON discrete-return LiDAR using the HHDC-Creator pipeline.
- **`filename`** — `string`
  Identifier for the source tile / sample (matches the tile-level naming used in HHDC-Creator).

Additional fields produced by the HHDC-Creator pipeline (e.g. `x_centers`, `y_centers`, `bin_edges`, `footprint_counts`, `metadata`) are **not stored** in this HF dataset. They can be regenerated from NEON AOP LiDAR using the code in the HHDC-Creator repository.
### Typical shapes and forward model

With the default cube configuration (e.g. `cube_config_sample.json`, `cube_length = 96 m`, `footprint_separation = 2 m`):

- **Clean high-res cube (`cube`):** `[128, 48, 48]`
  - 64 m vertical extent / 0.5 m bins → 128 height bins
  - 96 m × 96 m tile / 2 m grid → 48 × 48 footprints

Low-resolution, noisy measurements are **generated on the fly** using the physics-based forward model (`LidarForwardImagingModel` in HHDC-Creator). For example, with `output_res_m=(3.0, 6.0)`:

- **Noisy cube (model output, not stored in the dataset):** `[128, 32, 16]`

Users are expected to:

1. Load `cube` from this dataset as the clean target.
2. Apply the forward model to obtain noisy / low-res inputs for denoising and super-resolution experiments.
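The degradation described above can be sketched in a few lines of numpy. This is a simplified illustration, not the actual `LidarForwardImagingModel`: the `degrade` function, its beam width, photon scale, and noise levels are assumed values chosen for the example; only the grid shapes (`[128, 48, 48]` → `[128, 32, 16]`) follow the dataset card.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for one clean HHDC sample: [height_bins, H, W] = [128, 48, 48],
# footprints every 2 m over a 96 m x 96 m tile.
cube = rng.random((128, 48, 48)).astype(np.float32)

def degrade(cube, out_hw=(32, 16), beam_sigma_m=5.0, spacing_m=2.0,
            photon_scale=0.5, read_noise_std=0.05, rng=rng):
    """Illustrative degradation: Gaussian beam aggregation onto a coarser
    grid, photon loss, Poisson shot noise, and Gaussian read noise.
    (A sketch only -- NOT the actual LidarForwardImagingModel.)"""
    bins, H, W = cube.shape
    out_h, out_w = out_hw
    ys = np.arange(H) * spacing_m                  # input footprint positions
    xs = np.arange(W) * spacing_m
    # Output footprint centers on the coarser (e.g. 3 m x 6 m) grid.
    yc = (np.arange(out_h) + 0.5) * (H * spacing_m / out_h)
    xc = (np.arange(out_w) + 0.5) * (W * spacing_m / out_w)
    wy = np.exp(-0.5 * ((ys[None, :] - yc[:, None]) / beam_sigma_m) ** 2)
    wx = np.exp(-0.5 * ((xs[None, :] - xc[:, None]) / beam_sigma_m) ** 2)
    wy /= wy.sum(axis=1, keepdims=True)
    wx /= wx.sum(axis=1, keepdims=True)
    low = np.einsum("bhw,yh,xw->byx", cube, wy, wx)         # beam aggregation
    counts = rng.poisson(low * photon_scale)                # photon loss + shot noise
    noisy = counts + rng.normal(0.0, read_noise_std, counts.shape)  # read noise
    return noisy.astype(np.float32)

noisy = degrade(cube)
print(noisy.shape)  # (128, 32, 16)
```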
## Usage
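A minimal PyTorch-style sketch of the intended workflow (the card's full usage example ends with `loss.backward()`); the model architecture, the stand-in degradation, and all hyperparameters below are illustrative assumptions, not the dataset's reference code.

```python
import torch
import torch.nn.functional as F

# In practice the clean cubes come from the Hub:
#   from datasets import load_dataset
#   ds = load_dataset("anfera236/HHDC", split="train")
# Here a random batch stands in for a batch of ds[i]["cube"].
clean = torch.rand(4, 128, 48, 48)              # [batch, bins, H, W]

# Stand-in degradation (the real pipeline uses LidarForwardImagingModel):
noisy = F.adaptive_avg_pool2d(clean, (32, 16))  # coarser 3 m x 6 m grid
noisy = noisy + 0.05 * torch.randn_like(noisy)  # crude noise injection

model = torch.nn.Sequential(                    # toy reconstruction network
    torch.nn.Upsample(size=(48, 48), mode="bilinear", align_corners=False),
    torch.nn.Conv2d(128, 128, kernel_size=3, padding=1),
)
pred = model(noisy)                             # [4, 128, 48, 48]
loss = F.mse_loss(pred, clean)
loss.backward()
```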
- Recommended metrics: PSNR and SSIM on the canopy height model (CHM), digital terrain model (DTM), and 50th-percentile height maps (all derivable via `hhdc.canopy_plots.create_chm` in the HHDC-Creator repo).
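As a sketch of the metric computation: the snippet below derives a naive CHM (height of the top occupied bin × 0.5 m bin size) and a PSNR helper. `simple_chm` and `psnr` are stand-ins written for this example; in practice prefer the repo's `hhdc.canopy_plots.create_chm`.

```python
import numpy as np

def simple_chm(cube, bin_size_m=0.5, threshold=0.0):
    """Naive canopy height model: height of the highest bin whose photon
    count exceeds `threshold` (0 where the column is empty)."""
    occupied = cube > threshold                       # [bins, H, W] boolean
    bins = cube.shape[0]
    # First occupied bin scanning from the top, re-expressed as an
    # index counted from the bottom of the cube.
    top = bins - 1 - np.argmax(occupied[::-1], axis=0)
    return np.where(occupied.any(axis=0), top, 0) * bin_size_m

def psnr(ref, est, data_range):
    """Peak signal-to-noise ratio in dB over a known data range."""
    mse = np.mean((ref - est) ** 2)
    return np.inf if mse == 0 else 10.0 * np.log10(data_range ** 2 / mse)

# Tiny example: a single return in bin 100 -> 100 * 0.5 m = 50 m canopy height.
cube = np.zeros((128, 2, 2), dtype=np.float32)
cube[100, 0, 0] = 5.0
chm = simple_chm(cube)
print(chm[0, 0])  # 50.0
```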
## Limitations and Risks

- Forward model parameters (beam diameter, noise levels, output resolution, altitude) control task difficulty; we recommend documenting the values you use per experiment (e.g., in your own metadata/config). In the original HHDC-Creator pipeline these are stored per-sample in metadata, but this HF dataset does not include that field.
- Outputs are simulated; real sensor artifacts (boresight errors, occlusions, calibration drift) are not modeled.
- NEON LiDAR is collected over North America; models may not generalize to other biomes or sensor geometries without adaptation.