---
license: cc-by-4.0
language:
- en
tags:
- image-to-image
- shadow-removal
- synthetic-dataset
- 3dfront
- objaverse
- aaai-2025
task_categories:
- image-to-image
size_categories:
- 10K<n<100K
---

# Dataset Card for OmniSR: Shadow Removal under Direct and Indirect Lighting

## Dataset Description

- **Repository:** [Your Hugging Face Dataset Link]
- **Paper:** [Link to Paper (e.g., arXiv or AAAI proceedings)]
- **Curated by:** Jiamin Xu, Zelong Li, Yuxin Zheng, Chenyu Huang, Renshu Gu, Gang Xu (Hangzhou Dianzi University); Weiwei Xu (Zhejiang University)
- **Language(s):** N/A (Image dataset)
- **License:** cc-by-4.0

### Dataset Summary

This is the official dataset for the paper **"OmniSR: Shadow Removal under Direct and Indirect Lighting"** (AAAI 2025). It is a large-scale synthetic image dataset specifically designed for studying shadow removal under both direct and indirect illumination conditions.

The dataset contains over **30,000 pairs** of images. Each pair consists of:
1.  An image with shadows cast by both direct and indirect lighting.
2.  A corresponding shadow-free ground-truth image.

It was rendered using the **3D-Front** and **Objaverse** 3D model libraries, covering a wide variety of indoor scenes, object types, and lighting conditions.

## Uses

### Direct Use
The primary use of this dataset is for training and evaluating **image shadow removal models**, particularly those aiming to handle complex lighting scenarios involving both direct and indirect shadows.

### Out-of-Scope Use
As a purely synthetic dataset, models trained solely on it may not generalize perfectly to real-world photographs without additional fine-tuning or domain adaptation techniques.

## Dataset Structure

### Data Instances
Data is organized into folders. A typical data instance comprises three image files:
- `direct_shadow.png`
- `indirect_shadow.png`
- `shadow_free.png`

### Data Fields
- `direct_shadow`: Path to the image file containing shadows from direct light.
- `indirect_shadow`: Path to the image file containing shadows from indirect light (e.g., light bouncing off surfaces).
- `shadow_free`: Path to the corresponding ground truth image with no shadows.
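Given the per-sample folder layout described above, a sample can be loaded with a few lines of Python. This is a minimal sketch assuming each sample lives in its own directory containing exactly the three filenames listed; the released archive may organize files differently, so adjust paths accordingly.

```python
from pathlib import Path

from PIL import Image


def load_instance(sample_dir):
    """Load one sample: two shadowed inputs and the shadow-free target.

    Assumes the folder layout described in the card
    (direct_shadow.png, indirect_shadow.png, shadow_free.png);
    filenames in the actual release may differ.
    """
    sample_dir = Path(sample_dir)
    return {
        "direct_shadow": Image.open(sample_dir / "direct_shadow.png").convert("RGB"),
        "indirect_shadow": Image.open(sample_dir / "indirect_shadow.png").convert("RGB"),
        "shadow_free": Image.open(sample_dir / "shadow_free.png").convert("RGB"),
    }
```

A training loop would typically pair either shadowed image with `shadow_free` as the supervision target.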

### Data Splits
The full dataset contains over 30,000 image pairs. For specific details on training, validation, and test splits, please refer to the original paper or the split files included with the dataset release.

## Dataset Creation

### Source Data
The dataset was generated through a custom rendering pipeline using two major 3D asset sources:
- **[3D-Front](https://tianchi.aliyun.com/specials/promotion/3dfront):** Used for generating indoor scene layouts.
- **[Objaverse](https://objaverse.allenai.org/):** Used to populate scenes with a diverse set of 3D objects.

### Annotations
The data is automatically generated. Shadow states (present/absent) are controlled by the rendering engine's lighting setup, so **no manual human annotation** was involved.

### Personal and Sensitive Information
This dataset consists entirely of **synthetic, virtual scenes**. It does not contain any portraits, personal information, or real-world sensitive data.

## Bias, Risks, and Limitations

- **Synthetic Domain Gap:** The primary limitation is its synthetic nature. Models may need adaptation to perform well on real-world images.
- **Scene and Object Bias:** The dataset's content is derived from 3D-Front and Objaverse, which may not represent the full diversity of real-world objects, scenes, and lighting conditions. This potential distribution bias should be considered when using the dataset.

### Recommendations
Users are encouraged to:
1.  Be aware of the synthetic-to-real domain gap.
2.  Use the dataset in conjunction with real-world shadow removal data if the target application is real-world photos.
3.  Evaluate models on diverse test sets to understand their generalization capabilities.

## Citation

If you use this dataset in your research, please cite the following paper:

**BibTeX:**
```bibtex
@inproceedings{xu2024omnisr,
    title={OmniSR: Shadow Removal under Direct and Indirect Lighting},
    author={Xu, Jiamin and Li, Zelong and Zheng, Yuxin and Huang, Chenyu and Gu, Renshu and Xu, Weiwei and Xu, Gang},
    booktitle={Proceedings of the AAAI Conference on Artificial Intelligence},
    year={2025}
}
```