---
language:
- en
size_categories:
- 100K<n<1M
---
## Dataset Summary


![image/png](https://cdn-uploads.huggingface.co/production/uploads/66f74fd5b50ecf6a454941d1/RxXFKLHmh_RQk-DgpE_fV.png)

DeepShade is a multimodal dataset designed for shade simulation via text-conditioned image generation. It captures realistic outdoor scenes and their corresponding shade conditions over time, enabling supervised training of diffusion-based models that can simulate sun-shade transitions based on spatial layout and temporal context.

The dataset was introduced in the DeepShade project (IJCAI 2025 submission) and supports research in text-to-image generation.

## Dataset Files Structure

- Each city has a dedicated ZIP archive containing the source images, the target images, and the train and test JSON files.
- A single shared ZIP file contains the satellite images for all cities.
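Based on the description above, a per-city archive might unpack roughly as follows. This is a hypothetical sketch; the city name, folder names, and file names are illustrative, not taken from the release:

```text
Austin.zip
├── source/            # input scene images
├── target/            # corresponding shaded renderings
├── train.json         # training pairs with prompts
└── test.json          # held-out pairs with prompts
satellite_images.zip   # satellite imagery, shared across all cities
```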

## Dataset Structure

- Data modality: multimodal (image, text, time)
- Number of samples: ~100,000
- Resolution: 1024×1024 images

Formats:

- `.png` rendered images
- `.json` metadata files with the following fields:
1. Source image file path
2. Target image file path
3. Prompt
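The three metadata fields above map naturally onto (source, target, prompt) training triples. The following is a minimal sketch of loading such a file; the JSON key names (`"source"`, `"target"`, `"prompt"`) and the example record are assumptions, so check the released train/test JSON for the actual schema:

```python
import json
import os
import tempfile

# Hypothetical metadata record illustrating the three documented fields.
# Key names and paths are illustrative, not taken from the release.
records = [
    {
        "source": "Austin/source/loc_0012_0900.png",
        "target": "Austin/target/loc_0012_1500.png",
        "prompt": "shade at 3 PM over the same street layout",
    }
]


def load_pairs(json_path):
    """Return (source, target, prompt) triples from a DeepShade-style JSON file."""
    with open(json_path) as f:
        entries = json.load(f)
    return [(e["source"], e["target"], e["prompt"]) for e in entries]


# Round-trip through a temporary file to show the expected shape.
with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as tmp:
    json.dump(records, tmp)
    path = tmp.name
pairs = load_pairs(path)
os.remove(path)
```

Each returned triple can then be fed to a text-conditioned image-to-image pipeline as (conditioning image, ground-truth image, text prompt).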