The dataset contains over **30,000 pairs** of images.

It was rendered using the **3D-Front** and **Objaverse** 3D model libraries, covering a wide variety of indoor scenes, object types, and lighting conditions.
### Method Overview

The figure below shows the architecture of our proposed OmniSR network. For each input RGB image, we first extract its DINO features, depth map, and normal map. The RGB image and depth map are then fed into the shadow removal network, which contains multiple Context-aware Swin Attention (CSA) layers, each composed of two Swin self-attention blocks. Unlike conventional self-attention, our blocks explicitly incorporate semantics-aware and geometry-aware attention weights.

 <!-- Replace with your actual image path -->

*Figure 1: Overview of the OmniSR network architecture. The inputs are RGB, depth, DINO features, and normal maps; multi-scale CSA layers generate the shadow-free image.*
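The attention computation described above can be sketched in NumPy as follows. This is a simplified single-head illustration under stated assumptions: the additive bias form and the `w_sem`/`w_geo` weights are illustrative choices, not the paper's exact formulation.

```python
import numpy as np

def csa_attention(q, k, v, sem_bias, geo_bias, w_sem=1.0, w_geo=1.0):
    """Single-head attention with additive semantics- and geometry-aware
    biases -- a simplified sketch of the CSA idea, not the official code.

    q, k, v: (n, d) token features; sem_bias, geo_bias: (n, n) pairwise
    affinities, e.g. from DINO feature similarity and depth/normal proximity.
    """
    d = q.shape[-1]
    logits = q @ k.T / np.sqrt(d) + w_sem * sem_bias + w_geo * geo_bias
    logits -= logits.max(axis=-1, keepdims=True)  # numerical stability
    attn = np.exp(logits)
    attn /= attn.sum(axis=-1, keepdims=True)      # rows sum to 1
    return attn @ v
```

Setting both biases to zero recovers plain scaled dot-product attention, which makes the semantic/geometric terms easy to ablate.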
## Uses
### Direct Use
The primary use of this dataset is for training and evaluating **image shadow removal models**, particularly those aiming to handle complex lighting scenarios involving both direct and indirect shadows.

We trained the OmniSR model on this dataset and compared it with existing methods. The figure below compares methods trained on the same data; our approach achieves better results on both direct and indirect shadow removal.

 <!-- Replace with your actual image path -->

*Figure 2: Comparison results. From top to bottom: input image, ShadowFormer (trained on ISTD+), DMTN (trained on ISTD+), ShadowFormer (trained on our dataset), DMTN (trained on our dataset), and our results (trained on our dataset).*
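For reference, a per-image error metric commonly used in such comparisons can be computed as below. This is a generic sketch: shadow-removal papers often report RMSE in the LAB color space or separately for shadow/non-shadow regions, which this simple RGB version does not reproduce.

```python
import numpy as np

def rmse(pred: np.ndarray, target: np.ndarray) -> float:
    """Root-mean-square error between a predicted shadow-free image
    and the ground truth, computed over all pixels and channels."""
    diff = pred.astype(np.float64) - target.astype(np.float64)
    return float(np.sqrt(np.mean(diff ** 2)))
```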
### Out-of-Scope Use
As a purely synthetic dataset, models trained solely on it may not generalize perfectly to real-world photographs without additional fine-tuning or domain adaptation techniques.
## Dataset Structure
### Data Instances

Data is organized into folders. A typical data instance comprises three image files:

- `direct_shadow.png`
- `indirect_shadow.png`
- `shadow_free.png`

### Data Fields

- `direct_shadow`: Path to the image file containing shadows from direct light.
- `indirect_shadow`: Path to the image file containing shadows from indirect light (e.g., light bouncing off surfaces).
- `shadow_free`: Path to the corresponding ground-truth image with no shadows.

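Assuming each instance's three files (`direct_shadow.png`, `indirect_shadow.png`, `shadow_free.png`) live together in one scene folder — a layout assumption; the actual release may organize files differently — the fields can be resolved like this:

```python
from pathlib import Path

def instance_paths(scene_dir: str) -> dict[str, Path]:
    """Map the three data fields to image paths for one instance.

    Assumes a per-scene folder holding the three PNGs; adjust the
    names if the actual release uses a different layout.
    """
    root = Path(scene_dir)
    fields = ("direct_shadow", "indirect_shadow", "shadow_free")
    return {name: root / f"{name}.png" for name in fields}
```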
### Data Splits

The full dataset contains over 30,000 image pairs. For specific details on the training, validation, and test splits, please refer to the original paper or the split files included with the dataset release.

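If you need a provisional split before consulting the official split files, a deterministic hash-based assignment keeps results reproducible across runs and machines. The fractions and scene-ID scheme here are illustrative assumptions, not the dataset's official splits.

```python
import hashlib

def assign_split(scene_id: str, val_frac: float = 0.05,
                 test_frac: float = 0.05) -> str:
    """Deterministically bucket a scene into train/val/test by
    hashing its ID, so the split is stable and reproducible."""
    bucket = int(hashlib.sha256(scene_id.encode()).hexdigest(), 16) % 10_000
    if bucket < int(val_frac * 10_000):
        return "val"
    if bucket < int((val_frac + test_frac) * 10_000):
        return "test"
    return "train"
```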
## Dataset Creation
### Source Data

The dataset was generated through a custom rendering pipeline using two major sources of 3D assets:

- **[3D-Front](https://tianchi.aliyun.com/specials/promotion/3dfront):** Used for generating indoor scene layouts.
- **[Objaverse](https://objaverse.allenai.org/):** Used to populate scenes with a diverse set of 3D objects.

### Annotations

The data is automatically generated. Shadow states (present/absent) are controlled by the rendering engine's lighting setup, so **no manual human annotation** was involved.

 <!-- Replace with your actual image path -->

*Figure 4: Shadow / shadow-free probability maps. Left: the probability distribution of shadowed regions; right: the probability distribution of shadow-free regions.*
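Because the shadowed and shadow-free renders are pixel-aligned, a binary shadow mask can be derived automatically rather than annotated by hand. The sketch below is a hypothetical post-processing step (not the official annotation pipeline) and assumes float images in [0, 1]:

```python
import numpy as np

def shadow_mask(shadowed: np.ndarray, shadow_free: np.ndarray,
                thresh: float = 0.05) -> np.ndarray:
    """Mark pixels whose brightness drops noticeably once shadows are
    rendered; both inputs are (H, W, 3) float arrays in [0, 1]."""
    drop = shadow_free.mean(axis=-1) - shadowed.mean(axis=-1)
    return drop > thresh
```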
### Personal and Sensitive Information

This dataset consists entirely of **synthetic, virtual scenes**. It does not contain any portraits, personal information, or real-world sensitive data.

## Bias, Risks, and Limitations

- **Synthetic Domain Gap:** The primary limitation is the dataset's synthetic nature. Models trained solely on it may need adaptation to perform well on real-world images.

 <!-- Replace with your actual image path -->

*Figure 5: Comparison on real photographs. From top to bottom: real captured image, ShadowFormer (trained on ISTD+), DMTN (trained on ISTD+), ShadowFormer (trained on our dataset), DMTN (trained on our dataset), and our results (trained on our dataset).*

- **Scene and Object Bias:** The dataset's scenes and objects come mainly from 3D-Front and Objaverse and may not cover the full diversity of real-world objects, scenes, and lighting conditions. This potential distribution bias should be kept in mind when using the dataset.
### Recommendations
Users are encouraged to: