---
language:
  - en
size_categories:
  - 100K<n<1M
---

# Dataset Card for ShadeBench

ShadeBench is a large-scale multimodal dataset for urban shade simulation and understanding, supporting tasks such as shade image generation, shade segmentation, and 3D reconstruction.

Originally introduced in the DeepShade project (IJCAI 2025), ShadeBench is an extended benchmark submitted to KDD 2026 (AI4Science Track), with improved spatial alignment, physically grounded solar modeling, and additional 3D geometry.

## Dataset Details

  1. Cities: 34 (across 6 continents)
  2. Total Files: ~137,000
  3. Resolution: 1024 × 1024 pixels
  4. Modalities: Image, Text, Time, 3D Geometry

Each city contains:

  1. source/ and target/ image pairs (shade transitions)
  2. train.json and test.json (with prompts and paths)
  3. Satellite images (real-world context)
  4. Masked images (buildings aligned with simulation)
  5. obj_grids/ (3D building meshes in .obj format)
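
As a rough illustration of how the per-city splits might be consumed, the sketch below pairs each prompt with its source/target image paths. The JSON field names (`prompt`, `source`, `target`) are assumptions, not documented here; inspect a city's `train.json` for the actual schema before relying on them.

```python
import json
from pathlib import Path


def iter_pairs(city_dir: str, records: list[dict]):
    """Yield (prompt, source_path, target_path) for each record.

    Field names are assumed; adjust to the real train.json schema.
    """
    for rec in records:
        yield (
            rec["prompt"],
            Path(city_dir) / rec["source"],
            Path(city_dir) / rec["target"],
        )


def load_split(city_dir: str, split: str = "train"):
    """Read train.json / test.json from a city directory and yield pairs."""
    records = json.loads((Path(city_dir) / f"{split}.json").read_text())
    return iter_pairs(city_dir, records)
```

Keeping `iter_pairs` separate from the file read makes the pairing logic easy to test against an in-memory record list.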

## Key Features

  1. Spatially aligned data between satellite imagery and simulated building layouts
  2. Physically consistent shadows using solar position modeling
  3. Temporal variation across different times of day
  4. Multimodal learning support (image + text + geometry)
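
To give a sense of what "solar position modeling" involves, here is a minimal sketch of the standard declination/hour-angle formulas for solar elevation. This is not the dataset's rendering code, just the textbook approximation (ignoring refraction and the equation of time) that such physically grounded shadow simulation builds on.

```python
import math


def solar_elevation(lat_deg: float, day_of_year: int, hour: float) -> float:
    """Approximate solar elevation angle (degrees).

    lat_deg: latitude in degrees, hour: local solar time (0-24).
    Uses the standard declination and hour-angle approximation.
    """
    # Approximate solar declination for the given day of year.
    decl = -23.44 * math.cos(math.radians(360.0 / 365.0 * (day_of_year + 10)))
    # Hour angle: degrees away from solar noon (15 deg per hour).
    hour_angle = 15.0 * (hour - 12.0)
    lat, d, h = (math.radians(x) for x in (lat_deg, decl, hour_angle))
    sin_alt = math.sin(lat) * math.sin(d) + math.cos(lat) * math.cos(d) * math.cos(h)
    return math.degrees(math.asin(sin_alt))
```

For example, at the equator around the March equinox the sun at solar noon is nearly overhead, while at mid-latitudes in winter it stays below the horizon at midnight; shadow length and direction follow directly from this elevation (and the corresponding azimuth).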

## Use Cases

  1. Text-conditioned image generation
  2. Shadow simulation and prediction
  3. Shade segmentation
  4. 3D urban reconstruction
  5. GIS + ML research

## Summary

ShadeBench extends DeepShade into a benchmark dataset by adding aligned satellite data, masked building regions, and 3D geometry, enabling realistic and scalable modeling of urban shade.