---
language:
- en
size_categories:
- 100K<n<1M
---
# Dataset Card for ShadeBench
ShadeBench is a large-scale multimodal dataset for urban shade simulation and understanding, supporting tasks such as generation, segmentation, and 3D reconstruction.
Originally introduced in the DeepShade project (IJCAI 2025), ShadeBench is an extended benchmark submitted to KDD 2026 (AI4Science Track), with improved spatial alignment, physically grounded solar modeling, and additional 3D geometry.
## Dataset Details
- Cities: 34 (across 6 continents)
- Total Files: ~137,000
- Resolution: 1024 × 1024
- Modalities: Image, Text, Time, 3D Geometry
Each city contains:
- source/ and target/ image pairs (shade transitions)
- train.json and test.json (with prompts and paths)
- Satellite images (real-world context)
- Masked images (building footprints aligned with the simulated layout)
- obj_grids/ (3D building meshes in .obj format)
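The per-city layout above can be traversed with a small helper. This is a minimal sketch, not the official loader: the JSON field names (`source`, `target`, `prompt`) are assumptions — check the schema of the `train.json` files actually shipped with ShadeBench before relying on them.

```python
import json
from pathlib import Path

def load_pairs(city_dir):
    """Load (source_path, target_path, prompt) triples from a city's train.json.

    Assumes each entry is an object with "source", "target", and "prompt"
    keys holding paths relative to the city directory; adjust the key names
    to match the actual ShadeBench schema.
    """
    entries = json.loads(Path(city_dir, "train.json").read_text())
    return [
        (Path(city_dir, e["source"]), Path(city_dir, e["target"]), e["prompt"])
        for e in entries
    ]
```

The same pattern applies to `test.json`; the returned paths can then be fed to any image loader, and the prompt used as the text condition for generation.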
## Key Features
- Spatially aligned data between satellite imagery and simulated building layouts
- Physically consistent shadows using solar position modeling
- Temporal variation across different times of day
- Multimodal learning support (image + text + geometry)
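To illustrate what "physically consistent shadows using solar position modeling" entails, the sketch below computes solar elevation from the textbook declination and hour-angle formulas, and the resulting shadow length on flat ground. This is an illustrative approximation, not ShadeBench's actual solar model, which may use a more precise ephemeris.

```python
import math

def solar_elevation_deg(lat_deg, day_of_year, solar_hour):
    """Approximate solar elevation angle in degrees.

    Uses the standard declination formula (Cooper's equation) and the
    hour angle measured from solar noon (15 degrees per hour).
    Illustrative only; accuracy is on the order of a degree.
    """
    decl = 23.45 * math.sin(math.radians(360.0 / 365.0 * (284 + day_of_year)))
    hour_angle = 15.0 * (solar_hour - 12.0)
    lat, d, h = (math.radians(x) for x in (lat_deg, decl, hour_angle))
    sin_elev = math.sin(lat) * math.sin(d) + math.cos(lat) * math.cos(d) * math.cos(h)
    return math.degrees(math.asin(sin_elev))

def shadow_length(height_m, elevation_deg):
    """Length of the shadow cast on flat ground by a vertical edge of
    the given height, for a given solar elevation angle."""
    return height_m / math.tan(math.radians(elevation_deg))
```

For example, near the March equinox (day 79) at the equator and solar noon, the elevation is close to 90 degrees, so shadows nearly vanish; at 45 degrees elevation a 10 m facade casts a 10 m shadow. Varying `solar_hour` reproduces the temporal shade transitions the dataset's source/target pairs capture.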
## Use Cases
- Text-conditioned image generation
- Shadow simulation and prediction
- Shade segmentation
- 3D urban reconstruction
- GIS + ML research
## Summary
ShadeBench extends DeepShade into a benchmark dataset by adding aligned satellite data, masked building regions, and 3D geometry, enabling realistic and scalable modeling of urban shade.
