The dataset contains over **30,000 pairs** of images. Each pair consists of:
It was rendered using the **3D-Front** and **Objaverse** 3D model libraries, covering a wide variety of indoor scenes, object types, and lighting conditions.

### Method Overview

The figure below shows our proposed OmniSR network architecture. For each input RGB image, we first extract its DINO features, depth map, and normal map. The RGB image and depth map are then fed into the shadow removal network. The network contains multiple Context-aware Swin Attention (CSA) layers, each consisting of two Swin self-attention blocks. Unlike conventional self-attention, our blocks explicitly incorporate semantics-aware and geometry-aware attention weights.

![Network architecture](picture/teaser.png) <!-- replace with your actual image path -->

*Figure 1: Overview of the OmniSR network architecture. Inputs include RGB, depth, DINO features, and normal maps; multi-scale CSA layers produce the shadow-free image.*

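
The context-aware attention bias described above can be sketched in a few lines. This is a conceptual NumPy illustration, not the actual OmniSR implementation: the similarity matrices `sem_sim` and `geo_sim` and the scalar bias weights are hypothetical stand-ins for quantities the method derives from DINO features and normal maps.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def csa_attention(q, k, v, sem_sim, geo_sim, w_sem=1.0, w_geo=1.0):
    """Conceptual sketch: scaled dot-product attention whose logits are
    biased by semantic and geometric similarity maps of shape
    [tokens, tokens], before the usual softmax-weighted sum over values."""
    d = q.shape[-1]
    logits = q @ k.T / np.sqrt(d)                        # standard attention logits
    logits = logits + w_sem * sem_sim + w_geo * geo_sim  # context-aware bias terms
    return softmax(logits, axis=-1) @ v

# Toy example: 4 tokens with 8-dimensional features.
rng = np.random.default_rng(0)
q, k, v = (rng.standard_normal((4, 8)) for _ in range(3))
sem = rng.random((4, 4))   # hypothetical semantic similarity
geo = rng.random((4, 4))   # hypothetical geometric similarity
out = csa_attention(q, k, v, sem, geo)
print(out.shape)  # (4, 8)
```

In this sketch the bias simply shifts attention toward token pairs that are semantically or geometrically similar; the real CSA layers operate inside shifted Swin windows.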
## Uses
### Direct Use
The primary use of this dataset is for training and evaluating **image shadow removal models**, particularly those aiming to handle complex lighting scenarios involving both direct and indirect shadows.

We trained the OmniSR model on this dataset and compared it with existing methods. The figure below compares methods trained on the same dataset; our method achieves better results on both direct and indirect shadow removal.

![Comparison results](picture/compare.png) <!-- replace with your actual image path -->

*Figure 2: Comparison results. From top to bottom: input image, ShadowFormer (trained on ISTD+), DMTN (trained on ISTD+), ShadowFormer (trained on our dataset), DMTN (trained on our dataset), our results (trained on our dataset).*

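
Shadow-removal results are typically scored against the shadow-free ground truth with full-reference metrics such as PSNR. A minimal sketch (the metric choice here is illustrative; consult the paper for the exact evaluation protocol):

```python
import numpy as np

def psnr(pred, target, max_val=255.0):
    """Peak signal-to-noise ratio between a predicted shadow-free image
    and the ground truth (higher is better)."""
    mse = np.mean((pred.astype(np.float64) - target.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(max_val ** 2 / mse)

# Toy example on random 8-bit images with small additive noise.
rng = np.random.default_rng(0)
gt = rng.integers(0, 256, (64, 64, 3), dtype=np.uint8)
noisy = np.clip(gt.astype(int) + rng.integers(-5, 6, gt.shape), 0, 255).astype(np.uint8)
print(round(psnr(noisy, gt), 1))
```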
### Out-of-Scope Use
As a purely synthetic dataset, models trained solely on it may not generalize perfectly to real-world photographs without additional fine-tuning or domain adaptation techniques.
## Dataset Structure
### Data Instances

The data are organized as image files. The figure below shows a typical instance from the dataset: a direct-light shadow image, an indirect-light shadow image, and the shadow-free ground-truth image.

![Dataset example](picture/sample.png) <!-- replace with your actual image path -->

*Figure 3: A dataset example. From left to right: direct-light shadow, indirect-light shadow, and the shadow-free ground-truth image.*

### Data Fields

- `direct_shadow`: file path of the image with direct-light shadows.
- `indirect_shadow`: file path of the image with indirect-light shadows.
- `shadow_free`: file path of the shadow-free ground-truth image.

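
A minimal sketch of assembling the three file paths for one sample. The directory layout and naming scheme below are assumptions for illustration only; adapt them to how the released files are actually organized:

```python
from pathlib import Path

def sample_paths(root, sample_id):
    """Build the three per-sample paths under a hypothetical layout where
    each field is a subdirectory and renderings of one sample share an ID."""
    root = Path(root)
    return {
        "direct_shadow": root / "direct_shadow" / f"{sample_id}.png",
        "indirect_shadow": root / "indirect_shadow" / f"{sample_id}.png",
        "shadow_free": root / "shadow_free" / f"{sample_id}.png",
    }

paths = sample_paths("dataset_root", "000123")
print(paths["shadow_free"])
```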
### Data Splits

The dataset contains over 30,000 image pairs. For the exact training, validation, and test splits, please refer to the paper or the split files released with the dataset.

## Dataset Creation
### Source Data

The dataset was generated by rendering scenes built from the following 3D asset libraries:

- **3D-Front:** provides the indoor scene layouts.
- **Objaverse:** provides diverse 3D object models to enrich scene content.

### Annotations

The data are generated automatically by a physically based rendering engine; the shadow state is controlled by the lighting configuration. The figure below shows the probability distribution of shadowed regions, validating the realism of our rendered shadows.

![Probability maps](picture/probability.png) <!-- replace with your actual image path -->

*Figure 4: Shadow/shadow-free probability maps. Left: the probability distribution of shadowed regions; right: the probability distribution of shadow-free regions.*

### Personal and Sensitive Information

The dataset consists of rendered images of synthetic virtual scenes and contains no portraits of real people or other personal information.

## Bias, Risks, and Limitations

- **Synthetic Domain Gap:** The figure below illustrates the differences between real photographs and our synthetic dataset. Although our model performs well on synthetic data, its generalization to real images requires further validation.

![Real image comparison](picture/real_compare.png) <!-- replace with your actual image path -->

*Figure 5: Real photographs versus synthetic data. From top to bottom: real photograph, ShadowFormer (trained on ISTD+), DMTN (trained on ISTD+), ShadowFormer (trained on our dataset), DMTN (trained on our dataset), our results (trained on our dataset).*

- **Scene and Object Bias:** The scenes and objects come primarily from 3D-Front and Objaverse and may not cover all object types and lighting conditions found in the real world; keep this potential distribution bias in mind when using the dataset.

### Recommendations
Users are encouraged to: