---
language:
  - en
size_categories:
  - 100K<n<1M
---

## Dataset Summary


DeepShade is a multimodal dataset designed for shade simulation via text-conditioned image generation. It captures realistic outdoor scenes and their corresponding shade conditions over time, enabling supervised training of diffusion-based models that can simulate sun-shade transitions based on spatial layout and temporal context.

The dataset was introduced in the DeepShade project (IJCAI 2025 submission) and supports research in text-conditioned image generation.

## Dataset Files Structure

Each city has a dedicated zip archive containing the source images, the target images, and the train and test JSON files. A single shared zip archive holds the satellite images for all cities.
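As a rough illustration of this layout, the sketch below builds a small in-memory zip mimicking one city's archive and lists its entries the way a consumer of the dataset might. The entry names (`source/`, `target/`, `train.json`, `test.json`) are assumptions based on the description above, not confirmed paths from the dataset itself.

```python
import io
import zipfile

# Hypothetical per-city archive layout (names are assumptions, not
# the dataset's actual file names).
entries = [
    "source/img_0001.png",
    "target/img_0001.png",
    "train.json",
    "test.json",
]

# Build a small in-memory zip standing in for one city's archive.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    for name in entries:
        zf.writestr(name, b"")

# List the archive contents, as a data-loading script would after download.
with zipfile.ZipFile(buf) as zf:
    names = zf.namelist()

print(names)
```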

## Dataset Structure

- Data modality: multimodal (image, text, time)
- Number of samples: ~100,000
- Resolution: 1024×1024 images

Formats:

- `.png` rendered images
- `.json` metadata files with the following fields:

  1. Source image file path
  2. Target image file path
  3. Prompt
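The three metadata fields above can be read with a plain JSON loader. The sketch below writes a minimal single-record `train.json` and reads it back; the field names (`source`, `target`, `prompt`) and the example values are assumptions for illustration, not the dataset's confirmed schema.

```python
import json
import os
import tempfile

# Hypothetical record mirroring the three fields listed above
# (key names and values are assumptions, not the actual schema).
record = {
    "source": "source/img_0001.png",
    "target": "target/img_0001.png",
    "prompt": "shade at 2:00 PM on a clear summer day",
}

# Round-trip through a minimal train.json, as a training loader might.
with tempfile.TemporaryDirectory() as tmp:
    path = os.path.join(tmp, "train.json")
    with open(path, "w") as f:
        json.dump([record], f)
    with open(path) as f:
        samples = json.load(f)

print(samples[0]["prompt"])
```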