---
license: cc-by-4.0
task_categories:
- image-to-image
language:
- en
tags:
- shadow-removal
- shadow-transfer
- shadow-generation
- benchmark
- computer-vision
size_categories:
- 10K<n<100K
pretty_name: ShadowTransfer
---
# ShadowTransfer
A benchmark for measuring **geographic transfer** in overhead shadow detection. 4,500 human-verified shadow masks across three U.S. cities (Chicago, Miami, Phoenix) at two native NAIP resolutions (0.3 m/px, 0.6 m/px), released in two complementary forms:
- **`data_cities/`** — raw per-city dataset organized by city, resolution, and split. Use this when you need full control over splits or want to construct your own protocols.
- **`data_loco/`** — pre-built leave-one-city-out (LOCO) folds derived from `data_cities/`. Use this when you want to reproduce the paper's transfer evaluation, or when comparing a new method against the reported baselines.
> Both directories contain the **same underlying images and masks**. `data_loco/` is a re-organization of `data_cities/` into the LOCO protocol with frozen, paper-matched train/val/test counts. Pick whichever matches your workflow.
---
## Quick start
```python
# Hosted at:
# https://huggingface.co/datasets/shadow-transfer-bench/ShadowTransfer
from huggingface_hub import snapshot_download

snapshot_download(repo_id="shadow-transfer-bench/ShadowTransfer",
                  repo_type="dataset", local_dir="ShadowTransfer")
```
To reproduce the paper's LOCO numbers, point any segmentation training pipeline at one fold:
```
ShadowTransfer/data_loco/fold_0_holdout_phoenix/highres/
train/images/ train/masks/
val/images/ val/masks/
test/images/ test/masks/
```
That's it — `train/`, `val/`, and `test/` already contain the 450 / 150 / 150 images the paper uses.
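To iterate over all six evaluation cells (3 folds × 2 resolutions) of a local download, something like the following works; the `loco_cells` helper and its names are ours, not part of the release:

```python
from pathlib import Path

# Fold and resolution names as laid out under data_loco/.
FOLDS = ["fold_0_holdout_phoenix", "fold_1_holdout_miami", "fold_2_holdout_chicago"]
RESOLUTIONS = ["highres", "midres"]

def loco_cells(root="ShadowTransfer"):
    """Yield (fold, resolution, split_dirs) for every LOCO evaluation cell."""
    for fold in FOLDS:
        for res in RESOLUTIONS:
            base = Path(root) / "data_loco" / fold / res
            yield fold, res, {s: base / s for s in ("train", "val", "test")}
```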
---
## Schema
### `data_cities/` — per-city raw dataset
```
data_cities/
├── chicago/
│ ├── highres/ # 0.3 m/px native NAIP
│ │ ├── train/
│ │ │ ├── images/ # 450 RGB .png, 384×384
│ │ │ ├── masks/ # 450 binary .png (0 / 255)
│ │ │ └── masks_multiclass/ # optional, 0–6 class IDs (see below)
│ │ ├── val/
│ │ │ ├── images/ # 150 .png
│ │ │ ├── masks/ # 150 .png
│ │ │ └── masks_multiclass/
│ │ ├── test/
│ │ │ ├── images/ # 150 .png
│ │ │ ├── masks/ # 150 .png
│ │ │ └── masks_multiclass/
│ │ ├── metadata_train.json
│ │ ├── metadata_val.json
│ │ └── metadata_test.json
│ └── midres/ # 0.6 m/px native NAIP, same layout
├── miami/ # same layout
└── phoenix/ # same layout
```
**Counts (per city, per resolution):** 450 train + 150 val + 150 test = **750 images**.
**Total:** 3 cities × 2 resolutions × 750 = **4,500 images**.
**File formats**
| Path | Type | Encoding |
| --- | --- | --- |
| `images/*.png` | RGB image | 8-bit, 3 channels, 384×384 |
| `masks/*.png` | binary shadow mask | 8-bit, 1 channel, `{0, 255}` (255 = shadow) |
| `masks_multiclass/*.png` | multiclass mask | 8-bit, 1 channel, integer class IDs `0–6` |
**Multiclass IDs** (used in `masks_multiclass/`):
| ID | Class |
| --- | --- |
| 0 | Background (no shadow) |
| 1 | Building / canyon shadow |
| 2 | Under-structure shadow |
| 3 | Tree-canopy dapple |
| 4 | Topography-cast shadow |
| 5 | Vehicle-cast shadow |
| 6 | Thin-linear shadow |
The benchmark in the paper evaluates on binary masks only; the multiclass masks are released for downstream analysis. Image and mask filenames match within a split (`images/foo.png` ↔ `masks/foo.png`).
**`metadata_{split}.json`** — one JSON list per split, one entry per image:
```jsonc
{
"original_filename": "phoenix_session01_highres_paired_010.png",
"random_filename": "img_005.png", // anonymized name on disk
"city": "phoenix",
"resolution": "highres", // "highres" (0.3 m) | "midres" (0.6 m)
"split": "test", // "train" | "val" | "test"
"type": "type2", // sampling scheme tag
"image_type": "paired", // "paired" if also in the other resolution
  "pair_id": "010",                         // joins paired patches across resolutions
"center_lon": -112.17278007840696,
"center_lat": 33.443872697021,
"tile_name": "m_3311239_ne_12_030_20230917", // source NAIP tile
"source_session": 1,
"annotation_session": 31,
"session_num": 31,
"has_annotations": true,
"shadow_types": ["Building/canyon shadow",
"Vehicle-cast shadow",
"Tree-canopy dapple"]
}
```
The on-disk filename is `random_filename`. `original_filename` is the human-readable name. `pair_id` lets you join the 0.3 m/px and 0.6 m/px patches that share ground coordinates (300 paired patches per city — see paper §3.1).
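Joining the two resolutions via `pair_id` can be sketched as below, assuming `pair_id` is unique within a city (the `join_pairs` helper is ours):

```python
def join_pairs(meta_highres, meta_midres):
    """Join metadata entries sharing (city, pair_id) across the two resolutions.

    Takes two metadata_{split}.json lists; entries whose image_type is not
    "paired" have no cross-resolution partner and are skipped. Assumes
    pair_id is unique within a city.
    """
    mid = {(m["city"], m["pair_id"]): m
           for m in meta_midres if m.get("image_type") == "paired"}
    joined = []
    for h in meta_highres:
        if h.get("image_type") != "paired":
            continue
        partner = mid.get((h["city"], h["pair_id"]))
        if partner is not None:
            joined.append((h, partner))
    return joined
```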
---
### `data_loco/` — pre-built LOCO folds
Three folds, one per held-out city. Each fold contains the same train / val / test directory layout as the per-city dataset, plus a `manifest.json` and per-split metadata.
```
data_loco/
├── fold_0_holdout_phoenix/ # train: chicago + miami, test: phoenix
│ ├── highres/ # 0.3 m/px
│ │ ├── manifest.json # provenance + counts
│ │ ├── metadata_train.json
│ │ ├── metadata_val.json
│ │ ├── metadata_test.json
│ │ ├── train/
│ │ │ ├── images/ # 450 .png (225 chicago + 225 miami)
│ │ │ ├── masks/ # 450 .png
│ │ │ └── masks_multiclass/ # where present upstream
│ │ ├── val/
│ │ │ ├── images/ # 150 .png (75 chicago + 75 miami)
│ │ │ ├── masks/ # 150 .png
│ │ │ └── masks_multiclass/
│ │ └── test/
│ │ ├── images/ # 150 .png (full phoenix test pool)
│ │ ├── masks/ # 150 .png
│ │ └── masks_multiclass/
│ └── midres/ # 0.6 m/px, same layout
├── fold_1_holdout_miami/ # train: chicago + phoenix, test: miami
└── fold_2_holdout_chicago/ # train: miami + phoenix, test: chicago
```
**Filename convention.** In `train/` and `val/`, files are renamed `{source_city}__{original}.png` (e.g. `chicago__img_017.png`) so the two source cities cannot collide and provenance is visible at a glance. In `test/` files keep their original names because they come from a single source city. Image and mask filenames remain matched within a split.
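Recovering provenance from the naming convention is a one-liner; a sketch (the `parse_loco_name` helper is ours):

```python
def parse_loco_name(filename):
    """Split a LOCO train/val filename into (source_city, original_name).

    Test-split files keep their original names, so a name without the
    double-underscore separator maps to (None, filename).
    """
    if "__" in filename:
        city, original = filename.split("__", 1)
        return city, original
    return None, filename
```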
**`metadata_{split}.json`** — same fields as the per-city metadata, plus LOCO context:
```jsonc
{
// ... all per-city fields preserved as-is, plus:
"loco_filename": "chicago__img_017.png",
"loco_split": "train", // "train" | "val" | "test" in this fold
"loco_fold_id": 0,
"loco_holdout_city": "phoenix",
"loco_resolution": "highres",
"source_city": "chicago",
"source_split": "train", // which per-city split it came from
"has_masks_multiclass": true
}
```
**`manifest.json`** records the build parameters, per-city counts, and the full file list — enough to re-derive the fold from `data_cities/` exactly.
**Counts per fold per resolution:** 450 train (225 per training city) + 150 val (75 per training city) + 150 test (held-out city's full test pool).
---
## Intended use
- **Primary use**: benchmarking shadow detection methods on overhead aerial imagery, with explicit measurement of geographic transfer (`data_loco/`) or of in-domain performance per city (`data_cities/`).
- **Secondary uses**: building footprint and façade extraction (binary masks act as occlusion priors); shadow-removal and de-shadowing research; domain generalization research on dense prediction tasks; pretraining for related overhead segmentation tasks; analysis of urban morphology and solar geometry from the included `center_lat` / `center_lon` and shadow type labels.
For ML transfer evaluation specifically, **report results on `data_loco/` using all three folds and both resolutions** (six test cells per method) and use the within-city upper-bound numbers from the paper as the comparison baseline. Per-cell paired bootstrap is recommended for significance — see the paper for the exact protocol.
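A generic paired bootstrap over per-image scores for one test cell might look like the following; this is an illustrative sketch, not the paper's exact protocol:

```python
import numpy as np

def paired_bootstrap(scores_a, scores_b, n_boot=10000, seed=0):
    """Fraction of paired bootstrap resamples where method A's mean beats B's.

    scores_a / scores_b are per-image scores (e.g. IoU) for the same images
    in one test cell; resampling keeps the pairing intact.
    """
    a = np.asarray(scores_a, dtype=float)
    b = np.asarray(scores_b, dtype=float)
    assert a.shape == b.shape, "scores must be paired per image"
    rng = np.random.default_rng(seed)
    idx = rng.integers(0, len(a), size=(n_boot, len(a)))  # resample image indices
    return float((a[idx].mean(axis=1) > b[idx].mean(axis=1)).mean())
```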
---
## Known limitations
- **Three U.S. cities only.** Chicago, Miami, and Phoenix span distinct climates and morphologies but share North American grid-pattern urbanism. Generalization to dense historic European cities, informal settlements, or non-grid morphologies (e.g. Mumbai, Cairo, Marrakech) is untested.
- **NAIP RGB only.** No multispectral or near-infrared bands. Sensor characteristics, color processing, and acquisition conventions are NAIP-specific.
- **Single fall season.** All imagery comes from a single seasonal window; deciduous-canopy bare-vs-leaf-on variation is not represented.
- **Native-resolution releases only.** The 0.3 m and 0.6 m subsets come from separate native NAIP acquisitions, not from downsampling the same source. Do not synthesize one from the other if your goal is to study resolution transfer.
- **Boundary uncertainty.** Shadow edges are inherently soft; we recommend tolerant-mIoU evaluation with a ±2 px don't-care band (see paper §3.3). Strict pixel-exact metrics will systematically penalize all methods at the boundary.
- **Multiclass coverage.** `masks_multiclass/` is provided where reliable typing was possible; sparse classes (vehicle-cast, thin-linear) have low per-image counts and are not recommended as primary evaluation targets.
- **Annotation noise.** Even with three-phase QC and inter-annotator-agreement monitoring, a small residual disagreement rate (≈3% of segments adjudicated as borderline) is expected.
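The tolerant-IoU idea mentioned above (excluding a ±2 px don't-care band around ground-truth shadow edges) can be sketched as follows. This illustrates the idea only, using wrap-around shifts for the morphology; it is not the paper's exact §3.3 protocol:

```python
import numpy as np

def _dilate(m, r):
    """Chebyshev dilation by r px via shifted ORs (wraps at image edges)."""
    out = m.copy()
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            out |= np.roll(np.roll(m, dy, axis=0), dx, axis=1)
    return out

def tolerant_iou(pred, gt, band=2):
    """IoU computed only on pixels outside a +/-band px zone around GT edges."""
    pred = np.asarray(pred, dtype=bool)
    gt = np.asarray(gt, dtype=bool)
    eroded = ~_dilate(~gt, band)                 # erosion via dual dilation
    boundary_zone = _dilate(gt, band) & ~eroded  # within band px of an edge
    care = ~boundary_zone
    inter = (pred & gt & care).sum()
    union = ((pred | gt) & care).sum()
    return inter / union if union else 1.0
```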
---
## License and attribution
- **Source imagery (NAIP).** USDA Farm Service Agency National Agriculture Imagery Program. NAIP imagery acquired through 2019 is in the U.S. public domain; later releases are published as public-domain-with-attribution by USDA-FSA APFO. Users of the imagery in derived products are asked to credit the USDA Farm Service Agency Aerial Photography Field Office (APFO).
- **Annotations and metadata.** The shadow masks (`masks/`, `masks_multiclass/`) and metadata files (`metadata_*.json`, `manifest.json`) are released under the **Creative Commons Attribution 4.0 International (CC BY 4.0)** license.
- **Required citation when using the dataset.**
```
ShadowTransfer authors. ShadowTransfer: A Geographic Transfer Benchmark
for Overhead Shadow Detection. NeurIPS 2026 Datasets & Benchmarks Track.
```
Please also cite USDA-FSA NAIP for the underlying imagery.
---
## Hosting and DOI
- **Primary host**: <https://huggingface.co/datasets/shadow-transfer-bench/ShadowTransfer>
- **DOI**: assigned via the Hugging Face dataset record (visible on the dataset card).
- **Mirror / archival copy**: see the dataset card for the latest mirror list.
---
## Documentation
Two structured-documentation artifacts accompany the release:
- **`DATASHEET.md`** — a Datasheet for Datasets in the format of Gebru et al. (2021), covering motivation, composition, collection, preprocessing, uses, distribution, and maintenance.
- **Croissant metadata** — machine-readable dataset description in the [MLCommons Croissant](https://mlcommons.org/working-groups/data/croissant/) format. Hugging Face auto-generates and serves this for every Hub dataset; fetch it at:
```
https://huggingface.co/api/datasets/shadow-transfer-bench/ShadowTransfer/croissant
```
This file is consumable by `mlcroissant`, TFDS, and any Croissant-aware loader.
---
## Maintenance
Issues, errata, and corrections: file an issue on the Hugging Face dataset page or open a pull request on the accompanying GitHub repository linked from the dataset card. Versioned releases are tagged on Hugging Face; the version used for the published paper results is tagged `v1.0`.
For questions about the LOCO protocol or the diagnostic framework, see the paper. For questions about the annotation pipeline, see Appendix A of the paper.