pantankda committed
Commit a4ab3f4 · verified · 1 Parent(s): ddc0a70

Update README.md

Files changed (1)
  1. README.md +34 -15
README.md CHANGED
@@ -36,47 +36,66 @@ The dataset contains over **30,000 pairs** of images. Each pair consists of:
 
 It was rendered using the **3D-Front** and **Objaverse** 3D model libraries, covering a wide variety of indoor scenes, object types, and lighting conditions.
 
 ## Uses
 
 ### Direct Use
 The primary use of this dataset is for training and evaluating **image shadow removal models**, particularly those aiming to handle complex lighting scenarios involving both direct and indirect shadows.
 
 ### Out-of-Scope Use
 As a purely synthetic dataset, models trained solely on it may not generalize perfectly to real-world photographs without additional fine-tuning or domain adaptation techniques.
 
 ## Dataset Structure
 
 ### Data Instances
- Data is organized into folders. A typical data instance comprises three image files:
- - `direct_shadow.png`
- - `indirect_shadow.png`
- - `shadow_free.png`
 
 ### Data Fields
- - `direct_shadow`: Path to the image file containing shadows from direct light.
- - `indirect_shadow`: Path to the image file containing shadows from indirect light (e.g., light bouncing off surfaces).
- - `shadow_free`: Path to the corresponding ground truth image with no shadows.
 
 ### Data Splits
- The full dataset contains over 30,000 image pairs. For specific details on training, validation, and test splits, please refer to the original paper or the split files included with the dataset release.
 
 ## Dataset Creation
 
 ### Source Data
- The dataset was generated through a custom rendering pipeline using two major 3D assets sources:
- - **[3D-Front](https://tianchi.aliyun.com/specials/promotion/3dfront):** Used for generating indoor scene layouts.
- - **[Objaverse](https://objaverse.allenai.org/):** Used to populate scenes with a diverse set of 3D objects.
 
 ### Annotations
- The data is automatically generated. Shadow states (present/absent) are controlled by the rendering engine's lighting setup, so **no manual human annotation** was involved.
 
 ### Personal and Sensitive Information
- This dataset consists entirely of **synthetic, virtual scenes**. It does not contain any portraits, personal information, or real-world sensitive data.
 
 ## Bias, Risks, and Limitations
 
- - **Synthetic Domain Gap:** The primary limitation is its synthetic nature. Models may need adaptation to perform well on real-world images.
- - **Scene and Object Bias:** The dataset's content is derived from 3D-Front and Objaverse, which may not represent the full diversity of real-world objects, scenes, and lighting conditions. This potential distribution bias should be considered when using the dataset.
 
 ### Recommendations
 Users are encouraged to:
 
 It was rendered using the **3D-Front** and **Objaverse** 3D model libraries, covering a wide variety of indoor scenes, object types, and lighting conditions.
 
+ ### Method Overview
+
+ The figure below shows our proposed OmniSR network architecture. For each input RGB image, we first extract its DINO features, depth map, and normal map. The RGB image and depth map are then fed into the shadow removal network. The network contains multiple Context-aware Swin Attention (CSA) layers, each consisting of two Swin self-attention blocks. Unlike conventional self-attention, our blocks explicitly incorporate semantics-aware and geometry-aware attention weights.
+
+ ![OmniSR Network Architecture](images/network_architecture.png) <!-- Replace with your actual image path -->
+ *Figure 1: Overview of the OmniSR network architecture. The inputs include RGB, depth, DINO features, and normal maps; multi-scale CSA layers produce the shadow-free image.*
+
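The CSA block's implementation is not included with this card. As a rough illustration only, here is a minimal NumPy sketch of single-head attention whose logits are biased by semantic (DINO-like) feature similarity and geometric (surface-normal) similarity. The additive-bias form and the `alpha`/`beta` weights are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def context_aware_attention(q, k, v, sem_feat, normals, alpha=1.0, beta=1.0):
    """Single-head attention with semantics- and geometry-aware logit biases.

    q, k, v:  (N, d) token projections
    sem_feat: (N, ds) per-token semantic features (e.g. DINO), normalized below
    normals:  (N, 3) per-token unit surface normals
    """
    d = q.shape[-1]
    logits = q @ k.T / np.sqrt(d)          # standard scaled dot-product term
    sem = sem_feat / np.linalg.norm(sem_feat, axis=1, keepdims=True)
    sem_bias = sem @ sem.T                 # semantics-aware weight (cosine sim.)
    geo_bias = normals @ normals.T         # geometry-aware weight (cosine sim.)
    attn = softmax(logits + alpha * sem_bias + beta * geo_bias)
    return attn @ v                        # (N, d) attended values
```

In this sketch, tokens with similar semantics or similar surface orientation attend to each other more strongly, which is the stated intent of the CSA design; the real network applies this inside windowed Swin blocks at multiple scales.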
  ## Uses
 
 ### Direct Use
 The primary use of this dataset is for training and evaluating **image shadow removal models**, particularly those aiming to handle complex lighting scenarios involving both direct and indirect shadows.
 
+ We trained the OmniSR model on this dataset and compared it with existing methods. The figure below compares different methods trained on the same dataset; our method achieves better results on both direct and indirect shadow removal.
+
+ ![Comparison Results](images/comparison_results.png) <!-- Replace with your actual image path -->
+ *Figure 2: Comparison results. From top to bottom: input image, ShadowFormer (trained on ISTD+), DMTN (trained on ISTD+), ShadowFormer (trained on our dataset), DMTN (trained on our dataset), our results (trained on our dataset).*
+
 ### Out-of-Scope Use
 As a purely synthetic dataset, models trained solely on it may not generalize perfectly to real-world photographs without additional fine-tuning or domain adaptation techniques.
 
 ## Dataset Structure
 
 ### Data Instances
+ The data is organized as image files. The figure below shows a typical instance from the dataset, consisting of a direct-light shadow image, an indirect-light shadow image, and a shadow-free ground-truth image.
+
+ ![Dataset Examples](images/dataset_examples.png) <!-- Replace with your actual image path -->
+ *Figure 3: Dataset examples. From left to right: direct-light shadow, indirect-light shadow, shadow-free ground-truth image.*
 
 ### Data Fields
+ - `direct_shadow`: Path to the image file with shadows from direct lighting.
+ - `indirect_shadow`: Path to the image file with shadows from indirect lighting.
+ - `shadow_free`: Path to the shadow-free ground-truth image file.
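To make the three-field layout concrete, here is a small stdlib-only sketch that gathers the files of one instance. The per-folder layout and the `.png` file names (`direct_shadow.png`, `indirect_shadow.png`, `shadow_free.png`) are taken from the previous version of this card; adjust them if the release uses a different naming scheme.

```python
from pathlib import Path

# Field names of one instance, per the dataset card.
FIELDS = ("direct_shadow", "indirect_shadow", "shadow_free")

def load_instance(folder):
    """Collect the three image paths of one data instance.

    Assumes one instance per folder with <field>.png file names
    (an assumption based on the earlier card text, not a released spec).
    """
    folder = Path(folder)
    paths = {f: folder / f"{f}.png" for f in FIELDS}
    missing = [str(p) for p in paths.values() if not p.exists()]
    if missing:
        raise FileNotFoundError(f"incomplete instance, missing: {missing}")
    return paths
```

The returned dict maps each field name to its `Path`, so a training loop can open `paths["direct_shadow"]` and `paths["shadow_free"]` as an input/target pair.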
 
 ### Data Splits
+ The dataset contains over 30,000 image pairs. For details of the training, validation, and test splits, please refer to the original paper or the split files released with the dataset.
 
 ## Dataset Creation
 
 ### Source Data
+ The dataset was generated by rendering with the following 3D asset libraries:
+ - **3D-Front:** Used to generate indoor scene layouts.
+ - **Objaverse:** Used to provide diverse 3D object models that enrich scene content.
 
 ### Annotations
+ The data itself is generated automatically by a physically based rendering engine; shadow states are controlled by the lighting configuration. The figure below shows the probability distribution of shadowed regions, validating the realism of our rendered shadows.
+
+ ![Shadow Probability](images/shadow_probability.png) <!-- Replace with your actual image path -->
+ *Figure 4: Shadow/shadow-free probability maps. Left: the probability distribution of shadowed regions; right: the probability distribution of shadow-free regions.*
 
 ### Personal and Sensitive Information
+ This dataset consists of synthetic images of virtual scenes and contains no portraits of real people or any personal information.
 
 ## Bias, Risks, and Limitations
 
+ - **Synthetic Domain Gap:** The figure below shows the differences between real photographs and our synthetic dataset. Although our model performs well on synthetic data, its generalization to real images requires further validation.
+
+ ![Real vs Synthetic](images/real_vs_synthetic.png) <!-- Replace with your actual image path -->
+ *Figure 5: Real images versus synthetic data. From top to bottom: real photographs, ShadowFormer (trained on ISTD+), DMTN (trained on ISTD+), ShadowFormer (trained on our dataset), DMTN (trained on our dataset), our results (trained on our dataset).*
+
+ - **Scene and Object Bias:** The scenes and objects in the dataset come mainly from 3D-Front and Objaverse and may not cover every object type and lighting scenario found in the real world. This potential distribution bias should be kept in mind when using the dataset.
 
 ### Recommendations
 Users are encouraged to: