diff --git "a/title_30K/test_title_long_2404.16301v1.json" "b/title_30K/test_title_long_2404.16301v1.json"
new file mode 100644
--- /dev/null
+++ "b/title_30K/test_title_long_2404.16301v1.json"
@@ -0,0 +1,99 @@
+{
+ "url": "http://arxiv.org/abs/2404.16301v1",
+ "title": "Style Adaptation for Domain-adaptive Semantic Segmentation",
+ "abstract": "Unsupervised Domain Adaptation (UDA) refers to the method that utilizes\nannotated source domain data and unlabeled target domain data to train a model\ncapable of generalizing to the target domain data. Domain discrepancy leads to\na significant decrease in the performance of general network models trained on\nthe source domain data when applied to the target domain. We introduce a\nstraightforward approach to mitigate the domain discrepancy, which necessitates\nno additional parameter calculations and seamlessly integrates with\nself-training-based UDA methods. Through the transfer of the target domain\nstyle to the source domain in the latent feature space, the model is trained to\nprioritize the target domain style during the decision-making process. We\ntackle the problem at both the image-level and shallow feature map level by\ntransferring the style information from the target domain to the source domain\ndata. As a result, we obtain a model that exhibits superior performance on the\ntarget domain. Our method yields remarkable enhancements in the\nstate-of-the-art performance for synthetic-to-real UDA tasks. For example, our\nproposed method attains a noteworthy UDA performance of 76.93 mIoU on the\nGTA->Cityscapes dataset, representing a notable improvement of +1.03 percentage\npoints over the previous state-of-the-art results.",
+ "authors": "Ting Li, Jianshu Chao, Deyu An",
+ "published": "2024-04-25",
+ "updated": "2024-04-25",
+ "primary_cat": "cs.CV",
+ "cats": [
+ "cs.CV"
+ ],
+ "label": "Original Paper",
+ "paper_cat": "Semantic AND Segmentation AND Image",
+ "gt": "Style Adaptation for Domain-adaptive Semantic Segmentation",
+ "main_content": "1. INTRODUCTION

Neural Networks [1] and Transformers [2] have achieved great success in semantic segmentation tasks, but supervised tasks typically require a large amount of annotated data. Pixel-level annotation is needed, taking at least an hour per image [3], which significantly increases the cost. One approach to address this problem is to utilize existing annotated data or easily obtainable synthetic data to train models and test them on target data. However, due to domain differences, the model's performance metrics often decline substantially when tested on target data. In order to obtain a more robust model, researchers have proposed UDA methods [4][5][6], transferring knowledge from annotated source domain data to unannotated target data. It has been proven that CNNs are sensitive to distribution shifts [7] in image classification. Recent studies [8] have shown that Transformers are more robust to such shifts. In addition, CNNs mainly focus on texture [9], while Transformers emphasize shape, which is more similar to human vision. Some research has revealed significant differences between the inductive bias of standard CNNs and human vision: humans primarily rely on object content (i.e., shape) for recognition [10], while CNNs exhibit a strong preference for style (i.e., texture) [9]. This explains why CNNs are more susceptible to changes when switching between domains, as image style is more likely to vary across different domains.
Early studies [11][12][13] have confirmed that feature distribution shifts caused by style differences mainly occur in the shallow layers of the network. This implies that the feature distribution in the network's shallow layers can reflect the style information of the input images. Therefore, following these works, we manipulate the style features of the feature maps in the shallow layers of the network. The feature extractor captures the style features of the target domain while preserving the content of the source domain. This approach weakens the style features of the source domain while enhancing the style features of the target domain, achieving style feature transfer.

2. METHOD

2.1. Image to Image Domain Adaptation

In UDA, we are given a source dataset

$\mathcal{D}_s = \{(x_i^s, y_i^s)\}_{i=1}^{N_s}$  (1)

where $N_s$ is the number of color images in the dataset, and $y^s \in \mathbb{R}^{H \times W}$ represents the associated semantic map of $x^s \in \mathbb{R}^{H \times W \times 3}$. Similarly,

$\mathcal{D}_t = \{x_i^t\}_{i=1}^{N_t}$  (2)

is the target dataset, where true semantic labels are missing. Typically, segmentation networks trained on $\mathcal{D}_s$ exhibit performance degradation when tested on $\mathcal{D}_t$. Here, we use Fourier Domain Adaptation (FDA) [14] and RGB adaptation to reduce the domain gap between the two datasets at the image level. FDA aims to minimize domain differences by replacing the low-frequency amplitude components of the source domain images with those of the target domain, since the low-frequency components can be interpreted as the domain style. FDA has achieved significant improvements in semantic segmentation. Therefore, we employ the FDA method for data augmentation, as expressed by the formula:

$x^{s \to t} = \mathcal{F}^{-1}([\beta \circ \mathcal{F}^A(x^t) + (1 - \beta) \circ \mathcal{F}^A(x^s),\; \mathcal{F}^P(x^s)])$  (3)

The operators $\mathcal{F}^A$ and $\mathcal{F}^P$ denote the amplitude and phase components of the Fourier transform, respectively. In the inverse Fourier transform $\mathcal{F}^{-1}$, the phase and amplitude components are remapped to the image space. The hyperparameter $\beta$ determines the size of the low-frequency filter.

Random RGB shift is a prevalent and widely adopted technique for data augmentation. Through our experimental observations, we fortuitously discovered that employing random RGB shift as a data augmentation technique significantly enhances the model's performance. Our hypothesis is that the image-level application of a random RGB shift enables a closer resemblance between the style of the source and target domains, thereby mitigating the domain gap. Building upon this concept, we introduce an RGB adaptation method as a solution for domain adaptation. The mean value of each channel is calculated for an RGB image $x$ as follows:

$\mu(x) = \frac{1}{HW} \sum_{h=1}^{H} \sum_{w=1}^{W} x_{hw}$  (4)

$x^{s \to t} = x^s + (\mu(x^t) - \mu(x^s))$  (5)

The terms $\mu(x^s)$ and $\mu(x^t)$ represent the mean values of the source domain image and the target domain image, respectively, along the channel dimension. By employing this method, the content of the source domain image remains unaltered, thus preserving the availability of accurate labels. Additionally, it facilitates the closer alignment of the source domain image with the target domain image within the RGB space.
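Both image-level operations are simple to implement. Below is a minimal, hypothetical PyTorch sketch of Eqs. (3)-(5), not taken from the authors' code: `rgb_adaptation` is the channel-mean shift of Eqs. (4)-(5), and `fda_transfer` follows the common FDA reading of Eq. (3) in which $\beta$ acts as a binary low-frequency amplitude mask; the NCHW float layout and all names are assumptions.

```python
# Hypothetical sketch of the image-level adaptations (Eqs. (3)-(5));
# x_s and x_t are NCHW float tensors of identical shape.
import torch

def rgb_adaptation(x_s: torch.Tensor, x_t: torch.Tensor) -> torch.Tensor:
    """Eq. (5): shift each source channel mean onto the target mean."""
    mu_s = x_s.mean(dim=(2, 3), keepdim=True)   # per-channel mean, Eq. (4)
    mu_t = x_t.mean(dim=(2, 3), keepdim=True)
    return x_s + (mu_t - mu_s)                  # content and labels unchanged

def fda_transfer(x_s: torch.Tensor, x_t: torch.Tensor, beta: float = 0.01) -> torch.Tensor:
    """Eq. (3): graft the target's low-frequency amplitude onto the source."""
    fft_s = torch.fft.fft2(x_s, dim=(-2, -1))
    fft_t = torch.fft.fft2(x_t, dim=(-2, -1))
    amp_s, phase_s = fft_s.abs(), fft_s.angle()
    amp_t = fft_t.abs()
    # In an unshifted FFT the low frequencies sit in the four corners;
    # beta controls the size of the swapped window (the filter size).
    h, w = x_s.shape[-2:]
    b = max(1, int(min(h, w) * beta))
    mask = torch.zeros_like(amp_s)
    mask[..., :b, :b] = 1
    mask[..., :b, -b:] = 1
    mask[..., -b:, :b] = 1
    mask[..., -b:, -b:] = 1
    amp = mask * amp_t + (1 - mask) * amp_s     # low band taken from the target
    out = torch.fft.ifft2(amp * torch.exp(1j * phase_s), dim=(-2, -1))
    return out.real                             # source phase (content) is kept
```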
2.2. Style Adaptive Instance Normalization

In UDA methods, the primary factor causing domain shift is the disparity in styles across domains. The presence of domain shift constrains the models' capacity for generalization in both domain adaptation and domain generalization tasks. Previous studies have demonstrated that shallow features extracted by backbone networks capture the style information of images. Established approaches typically characterize the style features of an image by computing the mean and standard deviation along the channel dimension of shallow features:

$\sigma(x) = \sqrt{\frac{1}{HW} \sum_{h=1}^{H} \sum_{w=1}^{W} (x_{hw} - \mu(x))^2 + \epsilon}$  (6)

Conventional instance normalization can eliminate specific stylistic information from an image. Directly applying this method to UDA can diminish the network's capacity to learn the style information of the source domain images. However, it also disregards the style information of the target domain, resulting in diminished performance and limited generalization ability on the target domain. To decrease the network's ability to learn style information from the source domain images while enhancing the style information of the target domain images, we apply AdaIN [12] to replace the style information of the source domain images with that of the target domain images. Meanwhile, this method retains the content information of the source domain images. We term the proposed approach Style Adaptive Instance Normalization (SAIN). The specific implementation formula is as follows:

$\mathrm{SAIN}(x^s, x^t) = \sigma(x^t) \left( \frac{x^s - \mu(x^s)}{\sigma(x^s)} \right) + \mu(x^t)$  (7)

Here, $\mu$ and $\sigma$ represent the mean and standard deviation of the feature map in the channel dimension, respectively. By transferring the style of the target domain to the source domain during the training process, the content-biased network $g_\theta$ no longer relies on the style of the source domain to make decisions but focuses more on content while also paying attention to the style of the target domain. During testing, we directly use the network $g_\theta$ without SAIN to ensure the independence of predictions and reduce the computational burden. Therefore, we replace the original loss function with a content-biased loss, shown as follows:

$L_i^S = - \sum_{j=1}^{H \times W} \sum_{c=1}^{C} y_{(i,j)}^S \log \mathrm{SAIN}\left( g_\theta(x_i^S)^{(j,c)}, g_\theta(x_i^T)^{(j,c)} \right)$  (8)

Furthermore, we follow the consistency training in DAFormer, which involves training the student network on augmented target data using DACS [15], while the teacher model generates pseudo-labels from non-augmented target images.
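For reference, a minimal, hypothetical PyTorch sketch of SAIN (Eqs. (6)-(7)) on NCHW feature maps follows; statistics are taken per sample and per channel over the spatial dimensions, as in AdaIN [12], and the function name is an assumption.

```python
# Hypothetical sketch of SAIN (Eqs. (6)-(7)) on NCHW feature maps.
import torch

def sain(f_s: torch.Tensor, f_t: torch.Tensor, eps: float = 1e-5) -> torch.Tensor:
    """Normalize away source style, then inject target style, Eq. (7)."""
    mu_s = f_s.mean(dim=(2, 3), keepdim=True)
    mu_t = f_t.mean(dim=(2, 3), keepdim=True)
    std_s = torch.sqrt(f_s.var(dim=(2, 3), unbiased=False, keepdim=True) + eps)  # Eq. (6)
    std_t = torch.sqrt(f_t.var(dim=(2, 3), unbiased=False, keepdim=True) + eps)
    return std_t * (f_s - mu_s) / std_s + mu_t
```

Consistent with the description above, this operation would only be active during training; at test time the network runs unchanged.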
3. EXPERIMENTS

3.1. Implementation Details

The proposed method is applied to two challenging unsupervised domain adaptation tasks, where there are abundant semantic segmentation labels in the synthetic domain (source domain), but not in the real domain (target domain). The two synthetic datasets used are GTA5 [16] and SYNTHIA [17], while the real domain dataset is CityScapes [3]. The proposed method is validated based on the DAFormer network and the Mix Transformer-B5 encoder [18]. All backbone networks are pretrained on ImageNet. In the default UDA setting, the MIC [6] masked image self-training strategy and training parameters are used, including the AdamW optimizer, an encoder learning rate of $6 \times 10^{-5}$, a decoder learning rate of $6 \times 10^{-4}$, 60k training iterations, a batch size of 2, linear learning rate warm-up, and DACS [15] data augmentation.

3.2. Evaluation

First, we integrate RGB adaptation with several significant UDA methods, including DAFormer [4], HRDA [5], and MIC [6], using the DAFormer framework. Table 1 demonstrates that RGB adaptation achieves notable improvements compared to the same UDA methods without RGB adaptation.

Karras et al. [19] demonstrated that styles at different levels encode distinct visual attributes. Styles from fine-grained spatial resolutions (lower levels in our network) encode low-level attributes like color and fine textures, whereas styles from coarse-grained spatial resolutions (higher levels in our network) encode high-level attributes including global structure and textures. Therefore, the application of our SAIN module at the appropriate level is necessary to mitigate adverse style-induced biases. The networks from Block 1 to Block 4 become increasingly deeper. Figure 1 illustrates that the most notable improvement is achieved when applying SAIN in Block 3. However, applying SAIN to features at excessively low levels has only a limited impact on reducing feature biases. Additionally, using SAIN on excessively high-level styles may result in the loss of essential semantic information. Through our experimental findings, we discovered that the concurrent application of SAIN to both Block 2 and Block 3 results in optimal performance; a sketch of this placement is given below.
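To make the block-level placement concrete, the following hypothetical sketch wires the `sain` helper from the previous sketch into selected stages of a generic four-block encoder; the module structure and names are illustrative and do not reproduce the DAFormer/MiT-B5 implementation.

```python
# Hypothetical sketch: apply SAIN after chosen encoder blocks during training.
import torch.nn as nn

class StyleAdaptedEncoder(nn.Module):
    def __init__(self, blocks: nn.ModuleList, sain_at=(1, 2)):
        # sain_at=(1, 2) corresponds to Block 2 and Block 3 (0-indexed),
        # the combination found to work best in the ablation above.
        super().__init__()
        self.blocks = blocks
        self.sain_at = set(sain_at)

    def forward(self, x_source, x_target):
        for i, block in enumerate(self.blocks):
            x_source, x_target = block(x_source), block(x_target)
            if self.training and i in self.sain_at:
                x_source = sain(x_source, x_target)  # restyle source features
        return x_source
```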
Visual comparisons are conducted with the second-best performer (i.e., MIC), which utilizes the same segmentation network backbone as ours. Figure 2 illustrates that our model's prediction results demonstrate higher accuracy. Additionally, our approach demonstrates strong performance on some common categories, including terrain in the first row, wall in the second row, building in the third row, and truck in the fourth row. We attribute this to the transferability of RGB adaptation and SAIN, which enables the model to learn more style information from the target domain.

3.3. Influence of Style on UDA

In the following, we analyze the underlying principles of our method on GTA→Cityscapes. Firstly, we analyze the impact of SAIN on UDA at various feature levels.

Table 1. Performance (IoU) of RGB adaptation with different UDA methods on GTA→Cityscapes.

| Network  | UDA Method | w/o RGB Adapt. | w/ RGB Adapt. |
|----------|------------|----------------|---------------|
| DAFormer | DAFormer   | 68.3           | 69.37         |
| DAFormer | HRDA       | 73.8           | 74.45         |
| DAFormer | MIC        | 75.9           | 76.64         |

Fig. 1. The effect of SAIN on different blocks.

As shown in Figure 1, as the network depth increases from Block 1 to Block 3, the performance improvement of UDA using SAIN also increases accordingly. The results in Table 2 and Table 3 demonstrate significant performance improvements across all benchmarks. In particular, our method has led to a +1.03 increase in mIoU for GTA→CS and a +1.05 increase for Synthia→CS. For most categories, such as building, fence, rider, truck, and train, there is a certain performance improvement. However, there are also some categories that show a slight performance decrease after using SAIN, such as bike. This may be due to the difference in annotation strategies for the bike category between the Cityscapes dataset and the GTA dataset.

4. CONCLUSION

We have proposed a straightforward method for reducing domain discrepancy, which requires no additional learning and can be seamlessly integrated into self-supervised UDA. By transferring the target domain style to the source domain within the latent feature space, the model is trained to prioritize the style of the target domain during its decision-making process. Our experiments validate the remarkable performance enhancements achieved by our proposed method in Transformer-based domain adaptation. Despite its simplicity, the results indicate that our method surpasses the current state-of-the-art techniques. This suggests that the distributional misalignment caused by shallow-level statistics can indeed impact cross-domain generalization, but it can be mitigated through image translation and SAIN. The issue of model robustness in machine learning remains a challenging problem, and while we do not assert that our method is optimal, its simplicity may also yield performance improvements in other domain adaptation tasks.

Table 2. Semantic segmentation performance (IoU) on GTA→Cityscapes.

| Method | Road | S.walk | Build. | Wall | Fence | Pole | Tr.light | Tr.sign | Veget. | Terrain | Sky | Person | Rider | Car | Truck | Bus | Train | M.bike | Bike | mIoU |
|--------|------|--------|--------|------|-------|------|----------|---------|--------|---------|-----|--------|-------|-----|-------|-----|-------|--------|------|------|
| ADVENT | 89.4 | 33.1 | 81.0 | 26.6 | 26.8 | 27.2 | 33.5 | 24.7 | 83.9 | 36.7 | 78.8 | 58.7 | 30.5 | 84.8 | 38.5 | 44.5 | 1.7 | 31.6 | 32.4 | 45.5 |
| DACS | 89.9 | 39.7 | 87.9 | 30.7 | 39.5 | 38.5 | 46.4 | 52.8 | 88.0 | 44.0 | 88.8 | 67.2 | 35.8 | 84.5 | 45.7 | 50.2 | 0.0 | 27.3 | 34.0 | 52.1 |
| ProDA | 87.8 | 56.0 | 79.7 | 46.3 | 44.8 | 45.6 | 53.5 | 53.5 | 88.6 | 45.2 | 82.1 | 70.7 | 39.2 | 88.8 | 45.5 | 59.4 | 1.0 | 48.9 | 56.4 | 57.5 |
| DAFormer | 95.7 | 70.2 | 89.4 | 53.5 | 48.1 | 49.6 | 55.8 | 59.4 | 89.9 | 47.9 | 92.5 | 72.2 | 44.7 | 92.3 | 74.5 | 78.2 | 65.1 | 55.9 | 61.8 | 68.3 |
| HRDA | 96.4 | 74.4 | 91.0 | 61.6 | 51.5 | 57.1 | 63.9 | 69.3 | 91.3 | 48.4 | 94.2 | 79.0 | 52.9 | 93.9 | 84.1 | 85.7 | 75.9 | 63.9 | 67.5 | 73.8 |
| MIC | 97.4 | 80.1 | 91.7 | 61.2 | 56.9 | 59.7 | 66.0 | 71.3 | 91.7 | 51.4 | 94.3 | 79.8 | 56.1 | 94.6 | 85.4 | 90.3 | 80.4 | 64.5 | 68.5 | 75.9 |
| Ours | 97.24 | 79.12 | 92.15 | 61.45 | 58.5 | 60.98 | 69.23 | 72.58 | 91.93 | 53.33 | 93.99 | 81.26 | 60.68 | 94.84 | 88.3 | 90.5 | 83.24 | 65.59 | 66.82 | 76.93 |

Table 3. Semantic segmentation performance (IoU) on Synthia→Cityscapes.

| Method | Road | S.walk | Build. | Wall | Fence | Pole | Tr.light | Tr.sign | Veget. | Terrain | Sky | Person | Rider | Car | Truck | Bus | Train | M.bike | Bike | mIoU |
|--------|------|--------|--------|------|-------|------|----------|---------|--------|---------|-----|--------|-------|-----|-------|-----|-------|--------|------|------|
| ADVENT | 85.6 | 42.2 | 79.7 | 8.7 | 0.4 | 25.9 | 5.4 | 8.1 | 80.4 | – | 84.1 | 57.9 | 23.8 | 73.3 | – | 36.4 | – | 14.2 | 33.0 | 41.2 |
| DACS | 80.6 | 25.1 | 81.9 | 21.5 | 2.9 | 37.2 | 22.7 | 24.0 | 83.7 | – | 90.8 | 67.6 | 38.3 | 82.9 | – | 38.9 | – | 28.5 | 47.6 | 48.3 |
| ProDA | 87.8 | 45.7 | 84.6 | 37.1 | 0.6 | 44.0 | 54.6 | 37.0 | 88.1 | – | 84.4 | 74.2 | 24.3 | 88.2 | – | 51.1 | – | 40.5 | 45.6 | 55.5 |
| DAFormer | 84.5 | 40.7 | 88.4 | 41.5 | 6.5 | 50.0 | 55.0 | 54.6 | 86.0 | – | 89.8 | 73.2 | 48.2 | 87.2 | – | 53.2 | – | 53.9 | 61.7 | 60.9 |
| HRDA | 85.2 | 47.7 | 88.8 | 49.5 | 4.8 | 57.2 | 65.7 | 60.9 | 85.3 | – | 92.9 | 79.4 | 52.8 | 89.0 | – | 64.7 | – | 63.9 | 64.9 | 65.8 |
| MIC | 86.6 | 50.5 | 89.3 | 47.9 | 7.8 | 59.4 | 66.7 | 63.4 | 87.1 | – | 94.6 | 81.0 | 58.9 | 90.1 | – | 61.9 | – | 67.1 | 64.3 | 67.3 |
| Ours | 89.06 | 57.39 | 90.1 | 51.37 | 7.99 | 60.53 | 69.03 | 63.44 | 86.57 | – | 94.91 | 82.33 | 61.1 | 89.4 | – | 57.28 | – | 67.92 | 65.24 | 68.35 |

Fig. 2. Qualitative comparison with the previous state-of-the-art method MIC on GTA→CS. The proposed method gets better segmentation for classes such as terrain, fence, building, and truck.

Acknowledgements: This work is supported by the STS Project of the Fujian Science and Technology Program (No. 2023T3042).",
+ "additional_info": [
+ {
+ "url": "http://arxiv.org/abs/2404.06863v1",
+ "title": "RESSCAL3D: Resolution Scalable 3D Semantic Segmentation of Point Clouds",
+ "abstract": "While deep learning-based methods have demonstrated outstanding results in\nnumerous domains, some important functionalities are missing. Resolution\nscalability is one of them. In this work, we introduce a novel architecture,\ndubbed RESSCAL3D, providing resolution-scalable 3D semantic segmentation of\npoint clouds.
In contrast to existing works, the proposed method does not\nrequire the whole point cloud to be available to start inference. Once a\nlow-resolution version of the input point cloud is available, first semantic\npredictions can be generated in an extremely fast manner. This enables early\ndecision-making in subsequent processing steps. As additional points become\navailable, these are processed in parallel. To improve performance, features\nfrom previously computed scales are employed as prior knowledge at the current\nscale. Our experiments show that RESSCAL3D is 31-62% faster than the\nnon-scalable baseline while keeping a limited impact on performance. To the\nbest of our knowledge, the proposed method is the first to propose a\nresolution-scalable approach for 3D semantic segmentation of point clouds based\non deep learning.",
+ "authors": "Remco Royen, Adrian Munteanu",
+ "published": "2024-04-10",
+ "updated": "2024-04-10",
+ "primary_cat": "cs.CV",
+ "cats": [
+ "cs.CV"
+ ],
+ "label": "Original Paper",
+ "paper_cat": "Semantic AND Segmentation AND Image",
+ "gt": "RESSCAL3D: Resolution Scalable 3D Semantic Segmentation of Point Clouds",
+ "main_content": "1. INTRODUCTION

In recent years, deep learning has shown great potential in different domains such as compression [1, 2], 6D pose estimation [3, 4] and semantic segmentation [5, 6]. While most papers focus on pure performance and are able to outperform traditional methods significantly, less attention has been given to practical features. One such feature is scalability. (This work is funded by Fonds Wetenschappelijk Onderzoek (FWO) 1S89420N and Innoviris within the research project SPECTRE. DOI: 10.1109/ICIP49359.2023.10222338)

Scalability is a broad term that can be applied to different aspects of deep learning, leading to different subdomains. In [7, 8], techniques are proposed that allow the selection of the model complexity at runtime depending on the available computing resources, thus achieving complexity scalability. In [9], a novel layer, called MaskLayer, is proposed that provides quality scalability, with applications presented in compression and semantic hashing. The domain of resolution scalability allows operating at different resolutions, depending on the application or available data. It has proven to be an important feature in traditional compression algorithms [10, 11, 12] and, more recently, in a point cloud geometry codec [13]. While all of these methods provide various scalability functionalities, full-resolution point cloud data is required to be available at the start of inference as input for these methods. Consequently, existing methods are not able to handle varying spatial resolutions of the input point cloud. In addition, existing methods are not able to progressively process additional points of the input point cloud as they become available over time. The recent advent of scalable 3D acquisition devices [14, 15] enables the acquisition of point clouds whose densities increase progressively over time.
Such resolution-scalable 3D scanning devices generate a low-spatial-resolution scan of the scene with extremely small latency and progressively increase the resolution of the acquired point cloud over time. An important advantage of this new 3D scanning paradigm is that it enables processing the sparse point cloud while higher resolutions are being captured. Once new points are captured, the results are refined. In this work, we propose a novel method, dubbed RESSCAL3D, that allows processing point cloud data in a resolution-scalable manner. It allows processing low-resolution 3D point clouds while higher resolutions are still being captured by the scanning device. When extra points become available, instead of restarting processing for all points, which would lead to large delays, the proposed method processes only the new points. To improve performance, the processing at any given spatial resolution employs the information obtained at the lower spatial resolutions as prior information. This reduces the processing time. Another advantage is that early decision-making is enabled, as intermediate predictions at the lower resolutions are retrieved very fast.

Fig. 1: The RESSCAL3D architecture. The grey circle with 'C' stands for concatenation.

Fig. 2: RESSCAL3D fusion module.

To evaluate the proposed architecture, semantic segmentation was chosen as the target application. 3D scene understanding is of critical importance for many application domains, such as virtual reality, autonomous driving, and robotics, where timing is crucial. To this end, a fundamental component is 3D semantic segmentation [6, 16, 17, 18, 5, 19]. We highlight PointTransformer [5], which yields state-of-the-art results by using the Transformer architecture for this task. Summarized, our main contributions are as follows:

• The first deep learning-based approach, to the best of our knowledge, that provides resolution-scalable 3D semantic segmentation
• A fusion module that fuses features from different resolution levels
• An experimental analysis on S3DIS. While minimizing the cost of scalability, RESSCAL3D is 31-62% faster than the non-scalable baseline at the highest spatial resolution. Additionally, intermediate results are generated, the fastest after only 6% of the total inference time of the baseline.

The paper is structured as follows: Sec. 2 introduces the proposed approach. Sec. 3 and Sec. 4 present our experimental results and ablation study, respectively. Finally, Sec. 5 concludes this work.

2. PROPOSED METHOD

Overview of the proposed method. The RESSCAL3D architecture is illustrated in Fig. 1. To retrieve the multi-resolution data, the complete input sample $X \in \mathbb{R}^{N \times C}$, with $N$ and $C$ the number of points and channels, respectively, is subsampled into $s$ different, non-overlapping partitions. We denote these partitions as $X_i \in \mathbb{R}^{N_i \times C}$ with $i \in [1, s]$ and $N_1 < \dots < N_s < N$. The employed subsampling method is described in Sec. 3. Firstly, the partition with the lowest resolution, $X_1$, is processed by a PointTransformer [5], resulting in a prediction $Y^1 \in \mathbb{R}^{N_1}$. As $N_1 \ll N$, the computational complexity of this first scale is low and a fast prediction can be obtained. The second scale receives as input $X_2$, which is processed by another PointTransformer encoder to produce the features $\alpha_2 \in \mathbb{R}^{N'_2 \times F}$, with $N'_2$ and $F$ the number of points subsampled by the encoder and the number of features, respectively.
In order to improve performance, those features are fused with the already computed features of the lower scales by a fusion module. The resulting multi-resolution features $\alpha_2^f$ are employed by the decoder to obtain $Y^2$. At higher scales, the input of the fusion module is the concatenation of the fused features of the previous scales. Once all scales are processed, $Y = \{Y^1, Y^2, \dots, Y^s\} \in \mathbb{R}^N$ is obtained.

Regarding computational complexity, the presented approach has a benefit over handling all the data at once. Since PointTransformer uses an attention mechanism that requires the computation of the K-Nearest Neighbors (KNN), the complexity of processing the input as a whole, with $N = N_1 + \dots + N_s$ and considering $s$ scales, can be expressed as $O(N^2) = O((N_1 + \dots + N_s)^2) = O(N_1^2 + \dots + N_s^2 + \sum_{k=1}^{s} \sum_{p=1, p \neq k}^{s} N_k N_p)$. With RESSCAL3D, the attention mechanism is applied in parallel on the partitions, leading to a complexity of order $O(N_1^2 + \dots + N_s^2)$. Compared to the non-scalable approach, RESSCAL3D substantially lowers the complexity by a factor proportional to:

$\sum_{k=1}^{s} \sum_{p=1, p \neq k}^{s} N_k N_p$  (1)

With a larger $s$, this effect becomes more pronounced as the partition sizes become smaller. It should be noted that sequential processing of scales also brings some computational redundancy, though the KNN is the most computationally expensive operation. For large $N$ and a large number of scales, the effect of eliminating the double products becomes significant. Experimental validation of the introduced concepts is further reported in Sec. 3.

Fusion Module. Let $\alpha_{i-1}^c \in \mathbb{R}^{N'_j \times F}$ be the concatenated features from the lower scales, with $N'_j$ the number of concatenated points in feature space. Given $\alpha_{i-1}^c$ and $\alpha_i$, the fusion module combines the multi-scale information into a single feature matrix which is used for decoding. The fusion architecture is depicted in Fig. 2. In more detail, the fusion module first retrieves the relevant features from the previous scales. This is done with a KNN algorithm on the points associated with the features in $\alpha_i$. In other words, for each feature vector in $\alpha_i$, the features of the K nearest neighbors in $\alpha_{i-1}^c$ are utilized. As these features originate from different resolution scales, the acquired feature matrices contain multi-resolution information. In a next step, these neighborhoods are processed by a Conv1D, followed by a MaxPool layer. After concatenation with the original scale features $\alpha_i$, a fully-connected layer encodes the information back to the original feature size.
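The following hypothetical PyTorch sketch illustrates this fusion step; a brute-force `torch.cdist`-based KNN stands in for an optimized neighbor search, and the layer sizes and names are assumptions rather than the released code.

```python
# Hypothetical sketch of the fusion module (Fig. 2): gather the K nearest
# previous-scale features per current point, apply Conv1D + MaxPool, then
# concatenate with the current features and project back to size F.
import torch
import torch.nn as nn

class FusionModule(nn.Module):
    def __init__(self, feat_dim: int, k: int = 8):
        super().__init__()
        self.k = k
        self.conv = nn.Conv1d(feat_dim, feat_dim, kernel_size=1)
        self.fc = nn.Linear(2 * feat_dim, feat_dim)

    def forward(self, pts_cur, feats_cur, pts_prev, feats_prev):
        # pts_*: (N, 3) point coordinates; feats_*: (N, F) features.
        idx = torch.cdist(pts_cur, pts_prev).topk(self.k, largest=False).indices
        neigh = feats_prev[idx]                   # (N_cur, k, F) neighborhoods
        neigh = self.conv(neigh.transpose(1, 2))  # (N_cur, F, k)
        pooled = neigh.max(dim=2).values          # MaxPool over the k neighbors
        fused = torch.cat([feats_cur, pooled], dim=1)
        return self.fc(fused)                     # back to the original size F
```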
Training. RESSCAL3D is trained scale by scale. All weights from previous scales are frozen while training an extra scale, and the loss function is computed only on the results from the current scale. This allows the PointTransformer backbone to achieve maximal results for each resolution.

3. EXPERIMENTS

Dataset and evaluation metrics. The Stanford 3D Indoor Scene dataset (S3DIS) [20] consists of 6 large-scale indoor areas with in total 271 rooms. Each point has been annotated with one of 13 semantic categories. Area-5 has been captured in a different building than the other areas and is therefore often selected as the test set [5, 18, 19]. As evaluation metrics, the mean intersection over union (mIoU), mean accuracy (mAcc) and overall accuracy (oAcc) are used. All presented results are averaged over the Area-5 test set.

Fig. 3: Ablation study and comparison of the scalable RESSCAL3D with the non-scalable baseline.

Implementation details. PointTransformer [5] has been selected as the backbone architecture, as it achieves state-of-the-art performance and has official, publicly available code. Each scale was trained for 34 epochs with a batch size of 4. Other training and network parameters are the same as in [5]. Each input point is represented by a 6-dimensional vector: xyzrgb. To obtain the multi-resolution data, X is voxelized s times with s different voxel sizes. Subsequently, one point per voxel is randomly selected while making sure a point is not present in multiple partitions. More specifically, we have opted to employ 4 scales with voxel sizes [0.16, 0.12, 0.08, 0.06].
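A hypothetical NumPy sketch of this non-overlapping voxel subsampling is given below; the array layout and the random tie-breaking are assumptions.

```python
# Hypothetical sketch of the multi-resolution partitioning: at each scale,
# voxelize the not-yet-assigned points and keep one random point per voxel.
import numpy as np

def partition_scales(points, voxel_sizes=(0.16, 0.12, 0.08, 0.06), seed=0):
    """points: (N, 6) xyzrgb array; returns one index array per scale."""
    rng = np.random.default_rng(seed)
    taken = np.zeros(len(points), dtype=bool)
    partitions = []
    for vs in voxel_sizes:
        avail = np.flatnonzero(~taken)          # points not used by earlier scales
        voxels = np.floor(points[avail, :3] / vs).astype(np.int64)
        _, inverse = np.unique(voxels, axis=0, return_inverse=True)
        chosen, seen = [], set()
        for j in rng.permutation(len(avail)):   # one random point per voxel
            if inverse[j] not in seen:
                seen.add(inverse[j])
                chosen.append(avail[j])
        chosen = np.asarray(chosen)
        taken[chosen] = True                    # exclude from later partitions
        partitions.append(chosen)
    return partitions
```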
S3DIS semantic segmentation. RESSCAL3D is, to the best of our knowledge, the first method that performs resolution-scalable 3D semantic segmentation of point clouds. Consequently, no quantitative or qualitative comparison with existing methods can be made. Nevertheless, in order to characterize its performance and inference time, we compare the proposed method with the non-scalable baseline, which employs the same semantic segmentation backbone for the different scales. While our scalable approach processes only the additional points at each scale, using side information from the previous scales, the baseline processes the whole point cloud at that scale. Thus, the latter can only be launched when all data is available and does not process data in a progressive manner. Also, no intermediate results are obtained.

Fig. 4: Comparison of RESSCAL3D with the non-scalable baseline in inference time. The actual inference latency is bounded to the yellow zone. The displayed non-scalable baseline timing results are not cumulative.

Fig. 5: Visualization of S3DIS results for RESSCAL3D: (a) ground truth, (b) scale 0 at 31.1 ms, (c) scale 1 at 42.9 ms, (d) scale 3 at 201 ms. The input data and semantic prediction are visualized on the top and bottom row, respectively. Non-cumulative time is used.

The results in terms of mIoU of our scalable approach, with and without the fusion module, and of the non-scalable baseline are shown in Fig. 3. At the first scale, all methods operate in an identical manner and thus achieve equal performance. At higher scales, using the proposed fusion module reduces the performance gap between the scalable approach and the non-scalable baseline. At the highest scale, the performance gap in mIoU is only 2.1% of the total performance. Although the proposed method is not able to achieve the same performance as the baseline at the highest scale, the resulting difference is deemed small.

On the other hand, the scalable approach presents an important advantage in inference time, which can be compared in Fig. 4. Important to note is that the latency invoked by RESSCAL3D depends on data availability. Since the scalable approach can start processing lower resolutions while higher resolutions are being acquired, it utilizes the otherwise lost acquisition time, while the non-scalable baseline can only start once all point cloud data is available. Therefore, we present the upper and lower bounds of the latency induced by RESSCAL3D. When operating at the upper bound, all data is available at the start and all inference timings are cumulated. For the lower bound, the processing of the previous scale is finished before the point cloud data for the current resolution becomes available. Therefore, the latency introduced by RESSCAL3D will lie in the yellow zone (see Fig. 4) and is mostly lower than that of the baseline. Important to note is that even when operating at the upper bound, RESSCAL3D is able to attain a 31% decrease in inference time at the highest scale with respect to the baseline. The main reason is the reduced complexity of the attention modules, as explained in Sec. 2. In the lower-bound case, RESSCAL3D achieves an impressive 61% decrease in inference time. When operating with a higher number of points, the gain will become even more pronounced (Eq. (1)). Qualitative results are presented in Fig. 5. Overall, one can notice very accurate segmentation, with some errors at the lower scales corrected at the higher scales. An example is the erroneous dark segmentation on the bookcase on the right.

Table 1: Ablation of fusion module. Cumulative time is used.

| Scale | Method | oAcc | mAcc | mIoU | Time (ms) |
|-------|----------------|------|------|------|-----------|
| 0 | Without Fusion | 85.7 | 67.9 | 59.8 | 31.1 |
| 0 | Fusion | 85.7 | 67.9 | 59.8 | 31.1 |
| 1 | Without Fusion | 86.5 | 69.6 | 61.8 | 73.8 |
| 1 | Fusion | 87.0 | 70.0 | 62.5 | 73.9 |
| 3 | Without Fusion | 87.0 | 70.5 | 62.7 | 167 |
| 3 | Fusion | 87.6 | 71.2 | 64.1 | 167 |
| 4 | Without Fusion | 87.8 | 72.4 | 64.8 | 368 |
| 4 | Fusion | 88.5 | 73.0 | 66.0 | 368 |

4. ABLATION STUDY

In this section, the effect and value of our fusion module are analysed. Removing the fusion module leads to the loss of multi-resolution processing, and the scales are processed independently. Fig. 3 and Tab. 1 show that employing the fusion module consistently leads to better results. The added inference time is negligible.

5. CONCLUSION

In this paper, we propose RESSCAL3D, a novel architecture allowing resolution-scalable 3D semantic segmentation of point clouds. The experiments show that our scale-by-scale approach allows significantly faster inference while maintaining a limited impact on performance relative to the non-scalable baseline."
+ },
+ {
+ "url": "http://arxiv.org/abs/2312.09256v1",
+ "title": "LIME: Localized Image Editing via Attention Regularization in Diffusion Models",
+ "abstract": "Diffusion models (DMs) have gained prominence due to their ability to\ngenerate high-quality, varied images, with recent advancements in text-to-image\ngeneration. The research focus is now shifting towards the controllability of\nDMs. A significant challenge within this domain is localized editing, where\nspecific areas of an image are modified without affecting the rest of the\ncontent. This paper introduces LIME for localized image editing in diffusion\nmodels that do not require user-specified regions of interest (RoI) or\nadditional text input. Our method employs features from pre-trained methods and\na simple clustering technique to obtain precise semantic segmentation maps.\nThen, by leveraging cross-attention maps, it refines these segments for\nlocalized edits. Finally, we propose a novel cross-attention regularization\ntechnique that penalizes unrelated cross-attention scores in the RoI during the\ndenoising steps, ensuring localized edits.
Our approach, without re-training\nand fine-tuning, consistently improves the performance of existing methods in\nvarious editing benchmarks.",
+ "authors": "Enis Simsar, Alessio Tonioni, Yongqin Xian, Thomas Hofmann, Federico Tombari",
+ "published": "2023-12-14",
+ "updated": "2023-12-14",
+ "primary_cat": "cs.CV",
+ "cats": [
+ "cs.CV"
+ ],
+ "label": "Original Paper",
+ "paper_cat": "Semantic AND Segmentation AND Image",
+ "gt": "LIME: Localized Image Editing via Attention Regularization in Diffusion Models",
+ "main_content": "1. Introduction

Diffusion models (DMs) have recently achieved remarkable success in generating images that are not only high-quality but also richly varied, thanks to advancements in text-to-image conversion [19, 36, 38, 40]. Beyond their generative capabilities, there is a growing research interest in the controllability aspect of these models [2, 6, 8, 17, 33, 53]. This has led to the exploration of a variety of editing techniques leveraging the power of DMs for tasks such as personalized image creation [14, 39, 47], context-aware inpainting [26, 31, 50], and image transformation in response to textual edits [2, 6, 8, 17, 21, 27]. These developments underscore the versatility of DMs and their potential to serve as foundational tools for various image editing applications.

In this paper, we address the task of text-guided image editing, explicitly focusing on localized editing, which refers to identifying and modifying any region of interest in an image. This is done regardless of its size and based on textual instructions, while preserving the context of the surrounding regions. The difficulty arises from the intertwined nature of image representations within these models, where changes intended for one area can inadvertently affect others [6, 17, 27, 53]. Existing methods often depend on additional user input, such as masking the target area, i.e., the Region of Interest (RoI), or providing additional text information, e.g., objects of interest, to pinpoint the editing region [2, 8]. However, these approaches introduce complexity and do not guarantee the precision necessary for seamless editing. Figure 1 highlights localized edits without altering the overall image, a balance that current methods have not yet struck. Advancing localized editing to be more intuitive and effective remains a pivotal direction.

We address the challenge of localized image editing by introducing LIME, which leverages pre-trained InstructPix2Pix [6] without the need for additional supervision, user inputs, or model re-training/fine-tuning. Recent studies [34, 44, 49] have demonstrated that diffusion models are capable of encoding semantics within their intermediate features. LIME utilizes those features to identify segments, then extracts the RoI by harnessing attention scores derived from the instructions. Other research [1, 7] has shown the significant impact of attention-based guidance on the composition of an image. Accordingly, LIME aims to restrict the scope of edits by regularizing attention scores to enable disentangled and localized edits. By improving on these two lines of work, LIME not only offers more effective localized editing, as shown in Fig. 1, but also demonstrates a notable advancement by quantitatively outperforming current state-of-the-art methods on four different benchmark datasets.

Our pipeline contains two steps. It first finds semantic segments of the input image.
This is achieved based on semantic information encoded in intermediate features. Then, we identify the area to be edited by combining the segments with large cross-attention scores toward the edit instruction. Once we isolate the area to be edited, i.e., the RoI, the proposed attention regularization technique is applied to the text tokens to selectively target the RoI, ensuring that subsequent editing is accurately focused and avoiding unintended changes to other parts of the image. This two-step approach, first refining targeted areas and then editing within the RoI, ensures that our modifications are accurate and contextually coherent, simplifying the editing process while avoiding unintended alterations to the rest of the image.

The core contributions of this study are:

• We introduce a localized image editing technique that eliminates the need for fine-tuning or re-training, ensuring efficient and precise localized edits.
• Our approach leverages the pre-trained model's intermediate features to segment the image and to identify the regions where modifications will be applied.
• An attention regularization strategy is proposed, which is employed to achieve disentangled and localized edits within the RoI, ensuring contextually coherent edits.

The experimental evaluation demonstrates that our approach outperforms existing methods in localized editing both qualitatively and quantitatively on four benchmark datasets [5, 6, 20, 52].

2. Related Work

Text-guided image generation. Text-to-image synthesis advanced significantly thanks to diffusion models that surpassed prior generative adversarial networks (GANs) [16, 37, 51]. Key developments [10, 19, 43] have resulted in diffusion models that generate highly realistic images from textual inputs [31, 36, 40]. Notably, the introduction of latent diffusion models has significantly increased the computational efficiency of previous methods [38].

Image editing with Diffusion Models. One direction for image editing is to utilize pre-trained diffusion models by first inverting the input image into the latent space and then applying the desired edit by altering the text prompt [8, 17, 20, 27, 30, 32, 45, 46, 48]. For instance, DirectInversion [20] inverts the input image and then applies Prompt2Prompt [17] to obtain the desired edit, but it may lose details of the input image during inversion. DiffEdit [8], on the other hand, matches the differences in predictions for input and output captions to localize the edit, yet struggles with complex instructions; it performs the edit in the noise space. Another direction for instruction-based image editing is to train diffusion models on triplet data containing an input image, an instruction, and the desired image [6, 13, 52, 53]. The latest approach, InstructPix2Pix (IP2P) [6], uses a triplet dataset to train a model for editing images by using instructions. It performs better than previous methods but sometimes generates entangled edits. To tackle this problem, HIVE [53] relies on human feedback on edited images to learn what users generally prefer and uses this information to fine-tune IP2P, aiming to align more closely with human expectations. Alternatively, our method leverages the pre-trained IP2P to localize the edit instruction. Then, instead of manipulating the noise space [2, 8, 29], our method employs attention regularization to achieve localized editing, ensuring the edits are restricted to the RoI. The entire process is done without needing additional data, re-training, or fine-tuning.
Semantics in Diffusion Models. Intermediate features of diffusion models, as explored in studies like [33, 34, 44, 49], have been shown to encode semantic information. Recent research such as LD-ZNet [34] and ODISE [49] leverages intermediate features of these models for training networks for semantic segmentation. Localizing Prompt Mixing (LPM) [33], on the other hand, utilizes clustering on self-attention outputs for segment identification. Motivated by this success, our method leverages pre-trained intermediate features to achieve semantic segmentation and applies localized edits using edit instructions.

3. Background

Latent Diffusion Models. Stable Diffusion (SD) [38] is a Latent Diffusion Model (LDM) designed to operate in a compressed latent space. This space is defined at the bottleneck of a pre-trained variational autoencoder (VAE) to enhance computational efficiency. Gaussian noise is introduced into the latent space, generating samples from a latent distribution $z_t$. A U-Net-based denoising architecture [10] is then employed for image reconstruction, conditioned on the noise input ($z_t$) and the text conditioning ($c_T$). This reconstruction is iteratively applied over multiple time steps, each involving a sequence of self-attention and cross-attention layers. Self-attention layers transform the current noised image representation, while cross-attention layers integrate text conditioning. Every attention layer comprises three components: Queries ($Q$), Keys ($K$), and Values ($V$). For cross-attention layers, the $Q$s are obtained by applying a linear transformation $f_Q$ to the result of the self-attention layer preceding the cross-attention layer (i.e., the image features). Similarly, the $K$s and $V$s are derived from the text conditioning $c_T$ using linear transformations $f_K$ and $f_V$. Equation (1) shows the mathematical formulation of an attention layer, where $P$ denotes the attention maps, obtained as the softmax of the dot product of $K$ and $Q$ normalized by the square root of the dimension $d$ of the $K$s and $Q$s:

$\mathrm{Attention}(Q, K, V) = P \cdot V, \quad \text{where } P = \mathrm{Softmax}\left(\frac{QK^T}{\sqrt{d}}\right)$  (1)

Intuitively, $P$ denotes which areas of the input features will be modified in the attention layer. For cross-attention, this is the area of the image that is affected by one of the conditioning text tokens that define $c_T$. Beyond these attention maps, our approach also leverages the output of the transformer layers, denoted as the intermediate features $\phi(z_t)$, which contain rich semantic content, as highlighted in recent studies [34, 44, 49]. In this work, we modify the cross-attention's $P$ and leverage the intermediate features $\phi(z_t)$ to localize edits in pre-trained LDMs.

InstructPix2Pix. Our method relies on InstructPix2Pix (IP2P) [6], an image-to-image transformation network trained for text-conditioned editing. IP2P builds on top of Stable Diffusion and incorporates a bi-conditional framework, which simultaneously leverages an input image $I$ and an accompanying text-based instruction $T$ to steer the synthesis of the image, with the conditioning features being $c_I$ for the image and $c_T$ for the text. The image generation workflow is modulated through a classifier-free guidance (CFG) strategy [18] that employs two separate coefficients, $s_T$ for the text condition and $s_I$ for the image condition. The noise vectors predicted by the learned network $e_\theta$, which corresponds to the individual U-Net step, with different sets of inputs, are linearly combined as represented in Eq. (2) to achieve the score estimate $\tilde{e}_\theta$:

$\tilde{e}_\theta(z_t, c_I, c_T) = e_\theta(z_t, \emptyset, \emptyset) + s_I \cdot (e_\theta(z_t, c_I, \emptyset) - e_\theta(z_t, \emptyset, \emptyset)) + s_T \cdot (e_\theta(z_t, c_I, c_T) - e_\theta(z_t, c_I, \emptyset))$  (2)
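As a reference, the following minimal sketch evaluates Eq. (2); `eps` is assumed to be the noise-prediction U-Net, `None` stands in for the null conditioning ∅, and the default scales mirror typical IP2P settings.

```python
# Hypothetical sketch of IP2P's two-coefficient classifier-free guidance, Eq. (2).
def guided_score(eps, z_t, c_img, c_txt, s_img: float = 1.5, s_txt: float = 7.5):
    e_uncond = eps(z_t, None, None)        # e_theta(z_t, ∅, ∅)
    e_img = eps(z_t, c_img, None)          # e_theta(z_t, c_I, ∅)
    e_full = eps(z_t, c_img, c_txt)        # e_theta(z_t, c_I, c_T)
    return (e_uncond
            + s_img * (e_img - e_uncond)   # image-conditioning guidance
            + s_txt * (e_full - e_img))    # text-conditioning guidance
```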
Our method utilizes and modifies the processes for the terms with $c_I$ in Eq. (2) to apply localized image editing.

4. Method

We aim to develop a localized editing method for a pre-trained IP2P without re-training or fine-tuning. The proposed method contains two components: (i) edit localization finds the RoI by incorporating the input image and the edit instruction, and (ii) edit application applies the instruction to the RoI in a disentangled and localized manner.

4.1. Edit Localization

Segmentation: Our study extends the established understanding that intermediate features of diffusion models encode essential semantic information. In contrast to previous methods that build upon Stable Diffusion [34, 44, 49], our approach works on IP2P and focuses on the features conditioned on the original image ($z_t$, $c_I$, and $\emptyset$) for segmentation, as indicated in Eq. (2). Through experimental observation, we show that these features align well with segmentation objectives for editing purposes. To obtain segmentation maps, we extract features from multiple layers of the U-Net architecture, including both down- and up-blocks, to encompass a variety of resolutions and enhance the semantic understanding of the image. Our preference for intermediate features over attention maps is based on their superior capability to encode richer semantic information, as verified by studies such as [34, 44, 49]. We implement a multi-resolution fusion strategy to refine the feature representations within our proposed model; a code sketch is given below. This involves (i) resizing feature maps from various resolutions to a common resolution by applying bi-linear interpolation, (ii) concatenating and normalizing them along the channel dimension, and (iii) finally, applying a clustering method, such as the K-means algorithm, on the fused features. We aim to retain each feature set's rich, descriptive qualities by following these steps. Moreover, each resolution in the U-Net keeps a different granularity of the regions in terms of semantics and sizes. Figure 2 demonstrates segmentation maps from different resolutions and from our proposed fused features. Each resolution captures different semantic components of the image, e.g., field, racket, hat, dress, etc. Although Resolution 64 can distinguish objects, e.g., skin and outfit, it does not provide consistent segment areas, e.g., two distinct clusters for the lines in the field. On the other hand, the lower resolutions, Resolution 16 and 32, can capture coarse segments like the lines in the field and the racket. Fusing the features from different resolutions yields more robust feature representations, enhancing the segmentation; see Fig. 2, Ours. For the extraction of intermediate features, we use time steps between 30 and 50 out of 100 steps, as recommended by LD-ZNet [34].

Figure 2. Segmentation and RoI finding (columns: input, Resolution 16, Resolution 32, Resolution 64, Ours, Attention, RoI; instruction: 'Make her outfit black'; # of clusters: 8). Resolution X denotes segmentation maps from different resolutions, while Ours shows the segmentation map from our method. For the cross-attention map, the color yellow indicates high probability, and blue dots mark the 100 pixels with the highest probability. The last image shows the extracted RoI using the blue dots and Ours.
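A hypothetical sketch of steps (i)-(iii), using bilinear resizing in PyTorch and scikit-learn's K-means, follows; the shapes, the common resolution, and the per-pixel L2 normalization are assumptions.

```python
# Hypothetical sketch of the multi-resolution feature fusion and clustering.
import torch
import torch.nn.functional as F
from sklearn.cluster import KMeans

def segment_from_features(feature_maps, size=(256, 256), n_clusters=8):
    """feature_maps: list of (1, C_i, H_i, W_i) tensors from U-Net blocks."""
    resized = [F.interpolate(f, size=size, mode='bilinear', align_corners=False)
               for f in feature_maps]              # (i) common resolution
    fused = torch.cat(resized, dim=1)              # (ii) concat along channels
    fused = F.normalize(fused, dim=1)              # (ii) channel-wise normalization
    pixels = fused[0].flatten(1).T.cpu().numpy()   # (H*W, total channels)
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(pixels)  # (iii)
    return labels.reshape(size)                    # per-pixel segment ids
```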
Localization: Upon identifying the segments within the input image, the proposed method identifies the RoI for the edit using cross-attention maps conditioned on the input image and the instruction ($z_t$, $c_I$, and $c_T$), as indicated in Eq. (2). These maps have dimensions of $H_b \times W_b \times D$, where $H_b$ and $W_b$ represent the height and width of the features of the $b$-th block (up- and down-blocks), respectively, and $D$ denotes the number of text tokens. Following our segmentation strategy, the cross-attention maps are resized to a common resolution, combined along the spatial dimensions, namely $H$ and $W$, and normalized along the token dimension, $D$. After merging the attention maps from different resolutions, the method ignores the start-of-text, stop-word, and padding tokens to suppress noisy attention values from unrelated parts of the conditioning text, and focuses on the remaining tokens to identify the area related to the edit instruction. We then take the mean attention score over these tokens to generate a final attention map; see Fig. 2, Attention. Subsequently, the top 100 pixels (ablated in Tab. 4) with the highest probability scores are identified. Then, all segments that overlap with at least one of those pixels are combined to obtain the RoI; see Fig. 2, Ours, Attention, and RoI.

4.2. Edit Application

Leveraging the strength of the pre-trained models, we introduce a novel localized editing technique within IP2P. This module manipulates the attention scores corresponding to the RoI while ensuring the rest of the image remains the same, thus preventing any unintended alterations outside the RoI. Specifically, this procedure uses the terms with $z_t$, $c_I$, and $c_T$, in the notation of Eq. (2).

Figure 3. Attention Regularization (token-based cross-attention probabilities, before and after). Our method selectively regularizes unrelated tokens within the RoI, ensuring precise, context-aware edits without the need for additional model training or extra data. After attention regularization, the probabilities for the related tokens attend to the RoI, as illustrated in the second row.

Attention Regularization: Previous methods [2, 8, 29] use the noise space instead of the attention scores. In contrast, our method introduces targeted attention regularization for selectively reducing the influence of unrelated tokens within the RoI during editing. This approach regularizes the attention scores of tokens that are unrelated to the editing task, such as the start-of-text token, padding, and stop words (denoted as $S$). By adjusting the attention scores ($QK^T$) within the RoI, we aim to minimize the impact of these unrelated tokens during the softmax normalization process. As a result, the softmax function is more likely to assign higher attention probabilities within the RoI to tokens that align with the editing instructions. This targeted approach ensures that edits are precisely focused on the desired areas, enhancing the accuracy and effectiveness of the edits while preserving the rest. Given the binary mask $M$ for the RoI, we modify the result of the dot product $QK^T$ of the cross-attention layers for unrelated tokens to a regularized version $R(QK^T, M)$ as follows:

$R(QK^T, M)_{ijt} = \begin{cases} (QK^T)_{ijt} - \alpha, & \text{if } M_{ij} = 1 \text{ and } t \in S \\ (QK^T)_{ijt}, & \text{otherwise} \end{cases}$  (3)

where $\alpha$ is a large value. Intuitively, we prevent unrelated tokens from attending to the RoI, as shown in Fig. 3. In contrast, related tokens will be more likely to be selected in the RoI, leading to more accurate, localized, and focused edits. This method achieves an optimal balance between targeted editing within the intended areas and preserving the surrounding context, thus enhancing the overall effectiveness of the instruction.
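The following hypothetical sketch applies Eq. (3) to the pre-softmax cross-attention scores of a single layer; the flattened spatial layout, tensor shapes, and the value of alpha are assumptions.

```python
# Hypothetical sketch of the attention regularization in Eq. (3).
import torch

def regularize_attention(scores: torch.Tensor,
                         roi_mask: torch.Tensor,
                         unrelated: torch.Tensor,
                         alpha: float = 1e4) -> torch.Tensor:
    """
    scores:    (HW, D) pre-softmax cross-attention scores (QK^T)
    roi_mask:  (HW,)   bool, True inside the RoI (M_ij = 1)
    unrelated: (D,)    bool, True for tokens in S (start-of-text,
                       padding, stop words)
    """
    penalize = roi_mask[:, None] & unrelated[None, :]   # positions to suppress
    return torch.where(penalize, scores - alpha, scores)
```

Taking the softmax over the token dimension of the returned scores then drives the probabilities of unrelated tokens toward zero inside the RoI, while all scores outside the RoI remain untouched.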
We test our method on different tasks: (a) editing a large segment, (b) altering texture, (c) editing multiple segments, (d) adding, (e) replacing, and (f) removing objects. Examples are taken from established papers [20, 52, 53]. The integration of LIME enhances the performance of all models, enabling localized edits while maintaining the integrity of the remaining image areas. R(QKT , M) = ( QKT ijt \u2212\u03b1, if Mij = 1 and t \u2208S QKT ijt, otherwise, (3) where \u03b1 is a large value. Intuitively, we prevent unrelated tokens from attending to the RoI, as shown in Fig. 3. In contrast, related tokens will be more likely to be selected in the RoI, leading to more accurate, localized, and focused edits. This method achieves an optimal balance between targeted editing within the intended areas and preserving the surrounding context, thus enhancing the overall effectiveness of the instruction. By employing this precise regularization technique within the RoI, our method significantly enhances IP2P. It elevates the degree of disentanglement and improves the localization of edits by tapping into the already-learned features of the model. This targeted approach circumvents the need for re-training or fine-tuning, preserving computational resources and time. It harnesses the inherent strength of the pre-trained IP2P features, deploying them in a focused and effective manner. This precision ensures that edits are contained within the intended areas, underpinning the model\u2019s improved capability to execute complex instructions in a localized and controlled way without the necessity for additional rounds of training or fine-tuning. 5. Experiments 5.1. Evaluation Datasets and Metrics Combining diverse datasets and metrics ensures a thorough evaluation of our proposed method. For each dataset, we report the metrics proposed in the corresponding work. MagicBrush [52]. The test split offers a comprehensive evaluation pipeline with 535 sessions and 1053 turns. Sessions refer to the source images used for iterative editing instructions, and turns denote the individual editing steps within each session. It employs L1 and L2 norms to measure pixel accuracy, CLIP-I, and DINO embeddings for assessing image quality via cosine similarity, and CLIP-T to ensure that the generated images align accurately with local textual descriptions. InstructPix2Pix [6]. We evaluate our method on InstructPix2Pix test split with 5K image-instruction pairs. Metrics include CLIP image similarity for visual fidelity and CLIP text-image direction similarity to measure adherence to the editing instructions. PIE-Bench [20]. The benchmark includes 700 images in 10 editing categories with input/output captions, editing instructions, input images, and RoI annotations. Metrics for structural integrity and background preservation are derived from cosine similarity measures and image metrics like PSNR, LPIPS, MSE, and SSIM, while text-image consistency is evaluated via CLIP Similarity. 5 \fMethods Single-turn Multi-turn MB L1 \u2193 L2 \u2193 CLIP-I \u2191 DINO \u2191 CLIP-T \u2191 L1 \u2193 L2 \u2193 CLIP-I \u2191 DINO \u2191 CLIP-T \u2191 Open-Edit [25] \u2717 0.143 0.043 0.838 0.763 0.261 0.166 0.055 0.804 0.684 0.253 VQGAN-CLIP [9] \u2717 0.220 0.083 0.675 0.495 0.388 0.247 0.103 0.661 0.459 0.385 SDEdit [27] \u2717 0.101 0.028 0.853 0.773 0.278 0.162 0.060 0.793 0.621 0.269 Text2LIVE [4] \u2717 0.064 0.017 0.924 0.881 0.242 0.099 0.028 0.880 0.793 0.272 Null-Text Inv. 
Table 1. Evaluation on MagicBrush Dataset [52]. Results are shown for the single-turn (S) and multi-turn (M) settings; MB indicates models fine-tuned on MagicBrush. The benchmark values for the other approaches are sourced from [52], while the values for our proposed method are computed following the same protocol. Across both settings, our method surpasses the performance of the compared base models.

| Method | MB | L1 ↓ (S) | L2 ↓ (S) | CLIP-I ↑ (S) | DINO ↑ (S) | CLIP-T ↑ (S) | L1 ↓ (M) | L2 ↓ (M) | CLIP-I ↑ (M) | DINO ↑ (M) | CLIP-T ↑ (M) |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Open-Edit [25] | ✗ | 0.143 | 0.043 | 0.838 | 0.763 | 0.261 | 0.166 | 0.055 | 0.804 | 0.684 | 0.253 |
| VQGAN-CLIP [9] | ✗ | 0.220 | 0.083 | 0.675 | 0.495 | 0.388 | 0.247 | 0.103 | 0.661 | 0.459 | 0.385 |
| SDEdit [27] | ✗ | 0.101 | 0.028 | 0.853 | 0.773 | 0.278 | 0.162 | 0.060 | 0.793 | 0.621 | 0.269 |
| Text2LIVE [4] | ✗ | 0.064 | 0.017 | 0.924 | 0.881 | 0.242 | 0.099 | 0.028 | 0.880 | 0.793 | 0.272 |
| Null-Text Inv. [30] | ✗ | 0.075 | 0.020 | 0.883 | 0.821 | 0.274 | 0.106 | 0.034 | 0.847 | 0.753 | 0.271 |
| HIVE [53] | ✗ | 0.109 | 0.034 | 0.852 | 0.750 | 0.275 | 0.152 | 0.056 | 0.800 | 0.646 | 0.267 |
| HIVE [53] + LIME | ✗ | 0.051 | 0.016 | 0.940 | 0.909 | 0.293 | 0.080 | 0.029 | 0.894 | 0.829 | 0.283 |
| HIVE [53] | ✓ | 0.066 | 0.022 | 0.919 | 0.866 | 0.281 | 0.097 | 0.037 | 0.879 | 0.789 | 0.280 |
| HIVE [53] + LIME | ✓ | 0.053 | 0.016 | 0.939 | 0.906 | 0.300 | 0.080 | 0.028 | 0.899 | 0.829 | 0.295 |
| IP2P [6] | ✗ | 0.112 | 0.037 | 0.852 | 0.743 | 0.276 | 0.158 | 0.060 | 0.792 | 0.618 | 0.273 |
| IP2P [6] + LIME | ✗ | 0.058 | 0.017 | 0.935 | 0.906 | 0.293 | 0.094 | 0.033 | 0.883 | 0.817 | 0.284 |
| IP2P [6] | ✓ | 0.063 | 0.020 | 0.933 | 0.899 | 0.278 | 0.096 | 0.035 | 0.892 | 0.827 | 0.275 |
| IP2P [6] + LIME | ✓ | 0.056 | 0.017 | 0.939 | 0.911 | 0.297 | 0.088 | 0.030 | 0.894 | 0.835 | 0.294 |

EditVal [5]. The benchmark offers 648 image editing operations spanning 19 classes from the MS-COCO dataset [24]. It assesses the success of each edit with a binary score that indicates whether the edit type was successfully applied. The OwL-ViT [28] model is utilized to detect the object of interest, and the detection is used to assess the correctness of the modifications.

5.2. Implementation Details

Our method adopts InstructPix2Pix [6] as its base model and runs the model for 100 steps for each of the stages explained in Secs. 4.1 and 4.2. Specifically, during Edit Localization, intermediate representations are extracted between steps 30 and 50 out of 100, as suggested in LD-ZNet [34]. Those intermediate features are resized to 256 × 256. The number of clusters for segmentation is 8 across all experiments, motivated by an ablation study. Concurrently, we gather features from steps 1 to 75 for the cross-attention maps and retain only the related tokens. We extract the 100 pixels with the highest probabilities from the attention maps to identify the RoI and determine the overlapping segments. For Edit Localization, the image scale $s_I$ and the text scale $s_T$ are set to 1.5 and 7.5, respectively. During Edit Application, the attention regularization is employed between steps 1 and 75, targeting only unrelated tokens. Throughout the editing process, the image scale $s_I$ and the text scale $s_T$ are set to 1.5 and 3.5, respectively.

5.3. Qualitative Results

Figure 4 presents qualitative examples for various editing tasks. These tasks include editing large segments, altering textures, editing multiple small segments simultaneously, and adding, replacing, or removing objects. The first column displays the input images, with the corresponding edit instructions below each image. The second column illustrates the results generated by the base models without our proposed method. The third and fourth columns report the RoI identified by our method and the edited output produced by the base models when our regularization method is applied to these RoIs. As shown in Fig. 4, our method effectively implements the edit instructions while preserving the overall scene context.
In all presented results, our method surpasses current state-of-the-art models, including their versions fine-tuned on manually annotated datasets, e.g., MagicBrush [52]. Furthermore, as also claimed and reported in HIVE [53], without additional training, IP2P cannot perform a successful edit for (d) in Fig. 4. However, our proposed method achieves the desired edit without any additional training of the base model, as shown in Fig. 4 (d).

5.4. Quantitative Results

Results on MagicBrush. Our method outperforms all other methods on both the single- and multi-turn editing tasks on the MagicBrush (MB) [52] benchmark, as seen in Tab. 1. Compared to the base models, our approach provides significant improvements and the best results in terms of L1, L2, CLIP-I, and DINO. For the CLIP-T metric, which compares the edited image and caption to the ground truth, our method comes very close to the oracle scores of 0.309 for multi-turn and 0.307 for single-turn. This indicates that our edits accurately reflect the ground truth modifications. VQGAN-CLIP [9] achieves the highest CLIP-T by directly using CLIP [35] for fine-tuning during inference. However, this can excessively alter images, leading to poorer performance on the other metrics. Overall, the performance across metrics shows that our approach generates high-quality and localized image edits based on instructions, outperforming prior state-of-the-art methods.

Table 2. Evaluation on PIE-Bench Dataset [20]. Comparison across ten edit types shows our method outperforming the base text-guided image editing models. The numbers for the first block are taken from the benchmark paper [20]. The top-performing method is highlighted in bold and the second-best is underlined in each block. GT Mask stands for ground-truth masks as regions of interest.

Methods | GT Mask | Structure: Distance\u00d710^3 \u2193 | Background Preservation: PSNR \u2191 / LPIPS\u00d710^3 \u2193 / MSE\u00d710^4 \u2193 / SSIM\u00d710^2 \u2191 | CLIP Similarity: Whole \u2191 / Edited \u2191
InstructDiffusion [15] | \u2717 | 75.44 | 20.28 / 155.66 / 349.66 / 75.53 | 23.26 / 21.34
BlendedDiffusion [3] | \u2713 | 81.42 | 29.13 / 36.61 / 19.16 / 86.96 | 25.72 / 23.56
DirectInversion + P2P [20] | \u2717 | 11.65 | 27.22 / 54.55 / 32.86 / 84.76 | 25.02 / 22.10
IP2P [6] | \u2717 | 57.91 | 20.82 / 158.63 / 227.78 / 76.26 | 23.61 / 21.64
IP2P [6] + LIME | \u2717 | 32.80 | 21.36 / 110.69 / 159.93 / 80.20 | 23.73 / 21.11
IP2P [6] + LIME | \u2713 | 26.33 | 24.78 / 89.90 / 105.19 / 82.26 | 23.81 / 21.10
IP2P [6] w/MB [52] | \u2717 | 22.25 | 27.68 / 47.61 / 40.03 / 85.82 | 23.83 / 21.26
IP2P [6] w/MB [52] + LIME | \u2717 | 10.81 | 28.80 / 41.08 / 27.80 / 86.51 | 23.54 / 20.90
IP2P [6] w/MB [52] + LIME | \u2713 | 10.23 | 28.96 / 39.85 / 27.11 / 86.72 | 24.02 / 21.09
HIVE [53] | \u2717 | 56.37 | 21.76 / 142.97 / 159.10 / 76.73 | 23.30 / 21.52
HIVE [53] + LIME | \u2717 | 37.05 | 22.90 / 112.99 / 107.17 / 78.67 | 23.41 / 21.12
HIVE [53] + LIME | \u2713 | 33.76 | 24.14 / 103.63 / 94.01 / 81.18 | 23.62 / 21.21
HIVE [53] w/MB [52] | \u2717 | 34.91 | 20.85 / 158.12 / 227.18 / 76.47 | 23.90 / 21.75
HIVE [53] w/MB [52] + LIME | \u2717 | 26.98 | 26.09 / 68.28 / 63.70 / 84.58 | 23.96 / 21.36
HIVE [53] w/MB [52] + LIME | \u2713 | 25.86 | 28.43 / 50.33 / 43.25 / 86.67 | 24.23 / 21.43
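For illustration, the background-preservation columns of Tab. 2 can be approximated with scikit-image as below. This is a simplified sketch under our own conventions (masked PSNR/MSE, whole-image SSIM); the exact PIE-Bench protocol is defined in [20].

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def background_preservation(src, edited, roi_mask):
    # src, edited: float images in [0, 1], shape (H, W, 3)
    # roi_mask: bool (H, W), True inside the edited region
    bg = ~roi_mask
    mse = float(np.mean((src[bg] - edited[bg]) ** 2))          # MSE outside the RoI
    psnr = peak_signal_noise_ratio(src[bg], edited[bg], data_range=1.0)
    ssim = structural_similarity(src, edited, channel_axis=-1, data_range=1.0)
    return {"MSE": mse, "PSNR": psnr, "SSIM": ssim}
```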
Results on PIE-Bench. Our quantitative analysis on PIE-Bench [20] demonstrates the effectiveness of our proposed method. Compared to baseline models like InstructPix2Pix [6], its versions fine-tuned on MagicBrush [52], and HIVE [53], our method achieves significantly better performance on the metrics measuring structure and background preservation. This indicates that our approach makes localized edits according to the instructions while avoiding unintended changes to unaffected regions. At the same time, our method obtains results comparable to the base models on the CLIP similarity score, showing that it applies edits faithfully based on the textual instruction. A comprehensive comparison is presented in Tab. 2. Overall, the quantitative results validate that our method enables text-guided image editing by making precise edits solely based on the given instruction, without altering unrelated parts.

Results on EditVal. In the evaluation using the EditVal benchmark dataset [5], our method exhibits superior performance across various edit types, particularly excelling in Object Addition (O.A.), Position Replacement (P.R.), and Positional Addition (P.A.), while achieving second-best in Object Replacement (O.R.). In particular, it performs comparably to other methods for edits involving Size (S.) and Alter Parts (A.P.). A comprehensive comparison is presented in Tab. 3. Overall, the method advances the state-of-the-art by improving the average benchmark results by a margin of 5% over the previous best model.

Table 3. Evaluation on EditVal Dataset [5]. Comparison across six edit types shows our method outperforming eight state-of-the-art text-guided image editing models. The numbers for other methods are directly taken from the benchmark paper [5], and the same evaluation setup is applied to our method. The top-performing method is highlighted in bold and the second-best is underlined in each block.

Method | O.A. | O.R. | P.R. | P.A. | S. | A.P. | Avg.
SINE [54] | 0.47 | 0.59 | 0.02 | 0.16 | 0.46 | 0.30 | 0.33
NText. [30] | 0.35 | 0.48 | 0.00 | 0.20 | 0.52 | 0.34 | 0.32
IP2P [6] | 0.38 | 0.39 | 0.07 | 0.25 | 0.51 | 0.25 | 0.31
Imagic [21] | 0.36 | 0.49 | 0.03 | 0.08 | 0.49 | 0.21 | 0.28
SDEdit [27] | 0.35 | 0.06 | 0.04 | 0.18 | 0.47 | 0.33 | 0.24
DBooth [39] | 0.39 | 0.32 | 0.11 | 0.08 | 0.28 | 0.22 | 0.24
TInv. [14] | 0.43 | 0.19 | 0.00 | 0.00 | 0.00 | 0.21 | 0.14
DiffEdit [8] | 0.34 | 0.26 | 0.00 | 0.00 | 0.00 | 0.07 | 0.11
IP2P + LIME | 0.48 | 0.49 | 0.21 | 0.34 | 0.49 | 0.28 | 0.38

Results on InstructPix2Pix. We evaluate our method using the same setup as InstructPix2Pix, presenting results on a synthetic evaluation dataset [6], as shown in Fig. 5. Our approach notably improves the base model, IP2P, optimizing the trade-off between the input image and the instruction-based edit. Additionally, while an increase in the text scale s_T enhances the CLIP Text-Image Direction Similarity, it adversely impacts the CLIP Image Similarity. For both metrics, higher is better. The black arrow indicates the configuration selected for the results in this paper.

Figure 5. InstructPix2Pix Test. The trade-off between input image (Y-axis: CLIP Image Similarity) and edit (X-axis: CLIP Text-Image Direction Similarity) is shown, comparing our method with (T, C) in {(5.5, 8), (5.5, 16), (3.5, 8), (3.5, 16)} against InstructPix2Pix. T and C denote s_T and the number of clusters, respectively. For all experiments, s_I \u2208 [1.0, 2.2] is fixed. The arrow points to the chosen configuration for our results.

5.5. Ablation Study

Ablation studies analyze the impact of three key components: the RoI finding method, the number of points from attention maps, and the number of clusters. InstructPix2Pix is the base architecture, and evaluation is on the MagicBrush dataset. Each parameter is modified separately, while the other parameters are kept fixed to isolate its impact.
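Before walking through the individual ablations, the RoI-finding step they probe can be sketched as follows: cluster the mid-denoising features with k-means and keep the segments hit by the top-100 cross-attention pixels. Shapes and names here are assumptions on our part, not the official code.

```python
import numpy as np
from sklearn.cluster import KMeans

def find_roi(features, attn, n_clusters=8, top_k=100):
    # features: (H, W, C) intermediate diffusion features (resized, concatenated)
    # attn: (H, W) cross-attention map averaged over the related tokens
    h, w, c = features.shape
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(
        features.reshape(-1, c)).reshape(h, w)
    # Indices of the top-k attention pixels.
    top = np.argsort(attn.ravel())[-top_k:]
    hit = np.zeros(h * w, dtype=bool)
    hit[top] = True
    hit = hit.reshape(h, w)
    # RoI = union of the segments containing at least one top-k pixel.
    roi_segments = np.unique(labels[hit])
    return np.isin(labels, roi_segments)
```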
RoI finding methods. The ground truth masks of MagicBrush [52] are not very tight around the edit area; see the Supplementary Material for visualizations. For this reason, our method with predicted masks achieves the best performance for the L1, L2, CLIP-I, and DINO metrics while having on-par results with CLIP-T compared to the use of ground truth masks, as shown in Tab. 4. We also compare against the segmentation predicted by adapting the state-of-the-art LPM [33] to IP2P using the official code base (https://github.com/orpatashnik/local-prompt-mixing). Even in this case, our method achieves better results.

Number of points from attention maps. Using only 25 points worsens performance, as it cannot capture multiple distinct segments within the RoI. However, having more points includes excessive noise, causing more segments to improperly merge and expanding the RoI area. 100 points provide a better RoI, as shown in Tab. 4.

Number of clusters. A small number of clusters, like 4, leads to large segments and an expanded RoI, preventing localized edits. Increasing the number of clusters, like 16 or 32, causes the separation of a single RoI into multiple clusters. As shown in Tab. 4, 8 achieves the best results.

Edit Application. Instead of attention regularization, editing can also be performed in noise space [2, 8, 29]. This corresponds to a linear blending of the input image and a reference image derived from the edit text in noise space, according to the RoI. However, alignment between the reference and input images in the edited area is crucial for targeting the RoI effectively. As shown in the Edit block of Tab. 4, our method enhances editing precision by employing attention regularization.

Table 4. Ablation Study. For a fair comparison, all parameters are the same for all settings except the ablated parameter. The top-performing method is highlighted in bold and the second-best is underlined in each block.

Method | L1 \u2193 | L2 \u2193 | CLIP-I \u2191 | DINO \u2191 | CLIP-T \u2191
IP2P [6] | 0.112 | 0.037 | 0.852 | 0.743 | 0.276
Mask: GT | 0.063 | 0.017 | 0.935 | 0.902 | 0.297
Mask: LPM [33] | 0.072 | 0.019 | 0.924 | 0.886 | 0.291
Mask: Ours | 0.058 | 0.017 | 0.935 | 0.906 | 0.293
# Points: 25 | 0.079 | 0.023 | 0.917 | 0.874 | 0.290
# Points: 100 | 0.058 | 0.017 | 0.935 | 0.906 | 0.293
# Points: 225 | 0.065 | 0.018 | 0.932 | 0.901 | 0.295
# Points: 400 | 0.070 | 0.020 | 0.925 | 0.889 | 0.295
# Clusters: 4 | 0.080 | 0.022 | 0.923 | 0.885 | 0.295
# Clusters: 8 | 0.058 | 0.017 | 0.935 | 0.906 | 0.293
# Clusters: 16 | 0.062 | 0.018 | 0.933 | 0.903 | 0.294
# Clusters: 32 | 0.064 | 0.018 | 0.932 | 0.901 | 0.291
Edit: Noise | 0.076 | 0.022 | 0.914 | 0.864 | 0.291
Edit: Ours | 0.058 | 0.017 | 0.935 | 0.906 | 0.293

6. Conclusion

In this paper, we introduce LIME, a novel localized image editing technique that uses IP2P modified with explicit segmentation of the edit area and attention regularization. This approach effectively addresses the challenges of precision and context preservation in localized editing, eliminating the need for user input or model fine-tuning/retraining. The attention regularization step of our method can also be utilized with a user-specified mask, offering additional flexibility. Our method's robustness and effectiveness are validated through empirical evaluations, outperforming existing state-of-the-art methods. This advancement contributes to the continuous evolution of LDMs in image editing, pointing toward exciting possibilities for future research.

Figure 6. Failure Cases & Limitations. Left: base model entanglement ("Color the tie blue."). Right: feature mixing issue ("Make \u27e8word\u27e9 drink from a bucket.").
Limitations. Figure 6 shows the limitations of our method: (i) shows the limitation due to the pre-trained base model's capabilities. Our method can focus on the RoI and successfully apply edits, but it may alter the scene's style, particularly in color, due to the base model's entanglement. Nevertheless, our proposal significantly improves the edit compared to IP2P. (ii) illustrates how prompt content impacts edit quality. During editing, all tokens except \u27e8word\u27e9, stop words, and padding affect the RoI, leading to feature mixing." + }, + { + "url": "http://arxiv.org/abs/2403.00175v2", + "title": "FusionVision: A comprehensive approach of 3D object reconstruction and segmentation from RGB-D cameras using YOLO and fast segment anything", + "abstract": "In the realm of computer vision, the integration of advanced techniques into\nthe processing of RGB-D camera inputs poses a significant challenge, given the\ninherent complexities arising from diverse environmental conditions and varying\nobject appearances. Therefore, this paper introduces FusionVision, an\nexhaustive pipeline adapted for the robust 3D segmentation of objects in RGB-D\nimagery. Traditional computer vision systems face limitations in simultaneously\ncapturing precise object boundaries and achieving high-precision object\ndetection on depth map as they are mainly proposed for RGB cameras. To address\nthis challenge, FusionVision adopts an integrated approach by merging\nstate-of-the-art object detection techniques, with advanced instance\nsegmentation methods. The integration of these components enables a holistic\n(unified analysis of information obtained from both color \\textit{RGB} and\ndepth \\textit{D} channels) interpretation of RGB-D data, facilitating the\nextraction of comprehensive and accurate object information. The proposed\nFusionVision pipeline employs YOLO for identifying objects within the RGB image\ndomain. Subsequently, FastSAM, an innovative semantic segmentation model, is\napplied to delineate object boundaries, yielding refined segmentation masks.\nThe synergy between these components and their integration into 3D scene\nunderstanding ensures a cohesive fusion of object detection and segmentation,\nenhancing overall precision in 3D object segmentation. The code and pre-trained\nmodels are publicly available at https://github.com/safouaneelg/FusionVision/.", + "authors": "Safouane El Ghazouali, Youssef Mhirit, Ali Oukhrid, Umberto Michelucci, Hichem Nouira", + "published": "2024-02-29", + "updated": "2024-05-01", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.AI" + ], + "label": "Original Paper", + "paper_cat": "Semantic AND Segmentation AND Image", + "gt": "FusionVision: A comprehensive approach of 3D object reconstruction and segmentation from RGB-D cameras using YOLO and fast segment anything", + "main_content": "Introduction The significance of point-cloud processing has surged across various domains such as robotics [1, 2], the medical field [3, 4], autonomous driving [5, 6], metrology [7, 8, 9], etc. Over the past few years, advancements in vision sensors have led to remarkable improvements, enabling these sensors to provide real-time 3D measurements of the surroundings while maintaining decent accuracy [10, 11]. Consequently, point-cloud processing forms an essential pivot of numerous applications by facilitating robust object detection, segmentation and classification operations.
Within the field of computer vision, two extensively researched pillars stand prominent: object detection and object segmentation. These sub-fields have captivated the research community for the past decades, helping computers understand and interact with visual data [12, 13, 14]. Object detection involves identifying and localizing one or multiple objects in an image or a video stream, often employing advanced deep learning techniques such as Convolutional Neural Networks (CNNs) [15] and Region-based CNNs (R-CNNs) [16]. The pursuit of real-time performance has led to the development of more efficient models such as the Single Shot MultiBox Detector (SSD) [17] and You Only Look Once (YOLO) [18], which demonstrate a balanced trade-off between accuracy and speed. Object segmentation, on the other hand, goes beyond detection by delineating the precise boundaries of each identified object [19]. The segmentation process enables a finer understanding of the visual scene and precise object localization in the given image. In the literature, two segmentation types are differentiated: semantic segmentation assigns a class label to each pixel [20], while instance segmentation distinguishes between individual instances of the same class [21]. One of the most popular object detection models is YOLO. The latest known version of YOLO is YOLOv8, a real-time object detection system that uses a single neural network to predict bounding boxes and class probabilities simultaneously [22, 23]. It is designed to be fast and accurate, making it suitable for applications such as autonomous vehicles and security systems. YOLO works by dividing the input image into a grid of cells, where each cell predicts a fixed number of bounding boxes, which are then filtered using a defined confidence threshold. The remaining bounding boxes are then resized and repositioned to fit the objects they are predicting. The final step is to perform non-maximum suppression [24] on the remaining bounding boxes to remove overlapping predictions. The loss function used by YOLO is a combination of two terms: the localization loss and the confidence loss. The localization loss measures the difference between the predicted bounding box coordinates and the ground truth coordinates, while the confidence loss measures the difference between the predicted class probability and the ground truth class. SAM [25], on the other hand, is a recent popular deep learning model for image segmentation tasks. It is based on the U-Net architecture commonly selected for medical applications [26, 27, 28]. U-Net is a CNN that is specifically designed for image segmentation; it consists of an encoder and a decoder, which are connected by skip connections [29]. The encoder is responsible for extracting features from the input image, while the decoder handles the generation of the segmentation mask. The skip connections allow the model to use the features learned by the encoder at different levels of abstraction, which helps in generating more accurate segmentation masks. SAM gained its popularity because it achieves state-of-the-art performance on various image segmentation benchmarks in many fields, such as the medical domain [30], as well as on additional well-known datasets such as PASCAL VOC 2012 [31]. It is particularly effective in segmenting complex objects, such as buildings, roads, and vehicles, which are common in urban environments. The model's ability to generalize across different datasets and tasks has highly contributed to its popularity.
The use of YOLO and SAM is still extensively studied and field-applied by the scientific community for 2D computer vision tasks [32, 33, 34]. In this paper, however, we study how both state-of-the-art algorithms can be applied to RGB-D images. RGB-D cameras are depth-sensing cameras that capture both an RGB channel (Red, Green, Blue) and a D-map (depth information) of a scene (an example is shown in Figure 1). These cameras use infrared (IR) projectors and sensors to measure the distance of objects from the camera, providing an additional depth dimension to the RGB image with sufficient accuracy. For example, F. Pan et al. [35] reported an accuracy of 0.61\u00b10.42 mm for an RGB-D camera used for facial scanning. Compared to traditional RGB cameras, RGB-D cameras offer several advantages, including: (1) Improved object detection and tracking [36]: The depth information provided by RGB-D cameras allows for more accurate object detection and tracking, even in complex environments with occlusions and varying lighting conditions. (2) 3D reconstruction [37, 38]: RGB-D cameras can be used to create 3D models of objects and environments, enabling applications such as augmented reality (AR) and virtual reality (VR). (3) Human-computer interaction [39, 40]: The depth information provided by RGB-D cameras can be used to detect and track human movements, allowing more natural and intuitive human-computer interaction. RGB-D cameras have a wide range of applications, including robotics, computer vision, gaming, and healthcare. In robotics, RGB-D cameras are used for object manipulation [41], navigation [42], and mapping [43]. In computer vision, they are used for 3D reconstruction [37] and object recognition and tracking [44, 45]. All these algorithms take advantage of the depth information to work with 3D data instead of images. Point-cloud processing allows additional accuracy for object tracking, leading to improved knowledge about an object's position, orientation, and dimensions in 3D space. This offers distinct advantages compared to traditional image-based systems. Furthermore, RGB-D technologies are also able to cope with diverse lighting conditions [46] due to the use of IR lighting. Figure 1: Example of RGB-D camera scene capturing and 3D reconstruction: (a) scene 3D reconstruction from the RGB-D depth channel. (b) RGB stream capture from the RGB sensor. (c) Visual estimation of depth with the JET colormap (closer objects are shown in green and farther ones in dark blue). This paper presents a contribution in the fields of RGB-D imaging and object detection and segmentation. The primary contribution lies in the development and application of FusionVision, a method that links models originally proposed for 2D images with RGB-D data. Specifically, two known models have been implemented, validated and adjusted to work with RGB-D data through the use of both the depth and RGB channels of an Intel RealSense camera. This combination leads to enhanced scene understanding, resulting in 3D object isolation and reconstruction without distortions or noise. Moreover, point-cloud post-processing techniques, including denoising and downsampling, have been integrated to remove anomalies and distortions caused by reflectivity or inaccurate depth measurements, and to improve the real-time performance of the proposed FusionVision pipeline.
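For reference, an aligned RGB-D pair like the one in Figure 1 can be grabbed with the Intel RealSense SDK (pyrealsense2) roughly as follows. This is a generic capture sketch rather than the authors' code, using the 640 \u00d7 480 resolution reported later in the paper.

```python
import numpy as np
import pyrealsense2 as rs

# Start the RGB and depth streams.
pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
config.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)
profile = pipeline.start(config)

# Scale that converts raw uint16 depth values to meters.
depth_scale = profile.get_device().first_depth_sensor().get_depth_scale()

# Align depth onto the color sensor so pixel (u, v) matches in both frames.
align = rs.align(rs.stream.color)
try:
    frames = align.process(pipeline.wait_for_frames())
    depth = np.asanyarray(frames.get_depth_frame().get_data())  # uint16
    color = np.asanyarray(frames.get_color_frame().get_data())  # uint8 BGR
finally:
    pipeline.stop()
```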
The rest of the paper is organized as follows. Despite the uniqueness of the proposed pipeline and the scarcity of similar methods, a few related works are discussed in Section 2. A detailed and comprehensive description of the FusionVision pipeline is given in Section 3, where the processes are discussed step by step. Following this, the implementation of the framework and the results are presented and discussed in Section 4. Finally, the paper's findings are summarized in Section 5. 2 Related work The aforementioned YOLO and SAM models have been mainly proposed for 2D computer vision operations and lack adaptability to RGB-D images. The 3D detection and segmentation of objects is therefore beyond their capabilities, leading to a need for 3D object detection methods. Within this context, a few methods have been studied for 3D object detection and segmentation from RGB-D cameras. Tan Z. et al. [47] proposed an improved YOLO (version 3) for 3D object localization. The method aims to achieve real-time, high-accuracy 3D object detection from point-clouds using a single RGB-D camera. The authors propose a network system that combines both 2D and 3D object detection algorithms to improve real-time object detection results and increase speed. The combination consists of two state-of-the-art object detection methods: [48], performing object detection from the RGB sensor, and Frustum PointNet [49], a real-time method that uses frustum constraints to predict a 3D bounding box of an object. The method framework can be summarized as follows (Figure 2): Figure 2: Complex YOLO framework for 3D object reconstruction and localization [47]. (1) The system starts by obtaining 3D point-clouds from a single RGB-D camera along with the RGB stream. (2) The 2D object detection algorithm is used to detect and localize objects in the RGB images. This provides useful prior information about the objects, including their position, width, and height. (3) The information from the 2D object detection is then used to generate 3D frustums. A frustum is a pyramid-shaped volume that represents the possible location of an object in 3D space based on its 2D bounding box. (4) The generated frustums are fed into the PointNet algorithm, which performs instance segmentation and predicts the 3D bounding box of each object within the frustum. By combining the results from both the 2D and 3D object detection algorithms, the system achieves real-time object detection performance, both indoors and outdoors. For the method evaluation, the authors reported achieving real-time 3D object detection using an Intel RealSense D435i RGB-D camera with the algorithm running on a GTX 1080 Ti GPU-based system. However, this method has limitations and is subject to noise, usually due to poor depth estimation and object reflectivity. 3 FusionVision Pipeline The implemented FusionVision pipeline can be summarized in the following six steps, starting with data acquisition (Figure 3): (1) Data acquisition & Annotation: This initial phase involves obtaining images suitable for training the object detection model. This image collection can include single- or multi-class scenarios. As part of preparing the acquired data, splitting into separate subsets designated for training and testing purposes is required.
If the object of interest is among the 80 classes of the Microsoft COCO (Common Objects in Context) dataset [50], this step may be optional, allowing the utilization of existing pre-trained models. Otherwise, if a custom object is to be detected, or if the object's shape is uncommon or differs from those in the dataset, this step is required. (2) YOLO model training: Following data acquisition, the YOLO model undergoes training to enhance its ability to detect specific objects. This process involves optimizing the model's parameters based on the acquired dataset. Figure 3: Proposed Pipeline for Real-Time 3D Object Segmentation Using Fused YOLO and FastSAM Applied on RGB-D Sensor. (3) Apply model inference: Upon successful training, the YOLO model is deployed on the live stream of the RGB sensor from the RGB-D camera to detect objects in real-time. This step involves applying the trained model to identify objects within the camera's field of view. (4) FastSAM application: If any object is detected in the RGB stream, the estimated bounding boxes serve as input for the FastSAM algorithm, facilitating the extraction of object masks. This step refines the object segmentation process by leveraging FastSAM's capabilities (a sketch of steps (3) and (4) is given below). (5) RGB and Depth matching: The estimated mask generated from the RGB sensor is aligned with the depth map of the RGB-D camera. This alignment is achieved through the utilization of known intrinsic and extrinsic matrices, enhancing the accuracy of subsequent 3D object localization. (6) Application of 3D reconstruction from the depth map: Leveraging the aligned mask and depth information, a 3D point-cloud is generated to facilitate the real-time localization and reconstruction of the detected object in three dimensions. This final step results in an isolated representation of the object in 3D space.
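A minimal sketch of steps (3) and (4), assuming the ultralytics wrappers for YOLO and FastSAM (prompt-argument names differ between library versions, and the weight files are placeholders):

```python
import cv2
from ultralytics import YOLO, FastSAM

detector = YOLO("yolov8n.pt")      # or the custom weights trained in step (2)
segmenter = FastSAM("FastSAM-s.pt")

frame = cv2.imread("frame.png")    # stand-in for one live RGB frame

# Step (3): YOLO inference; keep detections above a confidence threshold.
det = detector(frame, conf=0.5)[0]
boxes = det.boxes.xyxy.cpu().numpy().tolist()

# Step (4): prompt FastSAM with the detected boxes to get per-object masks.
if boxes:
    seg = segmenter(frame, bboxes=boxes)[0]
    masks = seg.masks.data.cpu().numpy()  # (n_objects, H, W) binary masks
```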
3.1 Data Acquisition For applications requiring the detection of specific objects, data acquisition consists of collecting a number of images of the specific object with the camera at different angles, positions and varying lighting conditions. The images then need to be annotated with the bounding boxes corresponding to the location of the object within the image. Several annotators can be used for this step, such as Roboflow [51], LabelImg [52] or the VGG Image Annotator [53]. 3.2 YOLO training Training the YOLO model for robust object detection forms a strong backbone of the FusionVision pipeline. The acquired data is split into 80% for training and 20% for validation. To further enhance the model's generalization capabilities, data augmentation techniques were employed by horizontally and vertically flipping images, as well as applying slight angle tilts [54]. In the context of object detection using YOLO, several key loss functions are used to train the model to accurately localize and classify objects within an image. The Objectness Loss (OL), defined by Eq. (1), employs binary cross-entropy to assess the model's ability to predict the presence or absence of an object in a given grid cell, where $y_i$ represents the ground truth objectness label for a given grid cell in the image. The Classification Loss (CLSL), as outlined in Eq. (2), utilizes cross-entropy to penalize errors in predicting the class labels of detected objects across all classes ($C$ being the number of classes). To refine the localization accuracy, the Bounding Box Loss (BboxL), described in Eq. (3), leverages the mean squared error to measure the disparity between the predicted $\hat{y}_i$ and ground truth $y_i$ bounding box coordinates, where $c_x, c_y$ refer to the center coordinates of the bounding box and $w, h$ are its width and height. Additionally, the Center Coordinates Loss (CL), detailed in Eq. (4), incorporates a focal loss, with parameters $\alpha$ and $\gamma$, to address the imbalance in predicting the center coordinates of objects. These loss functions collectively guide the optimization process during training, steering the YOLOv8 model towards robust and precise object detection performance across diverse scenarios.

$\mathrm{OL} = -\left( y_i \cdot \log(\hat{y}_i) + (1 - y_i) \cdot \log(1 - \hat{y}_i) \right)$ (1)

$\mathrm{CLSL} = -\sum_{c=1}^{C} y_{i,c} \cdot \log(\hat{y}_{i,c})$ (2)

$\mathrm{BboxL} = \sum_{p \in \{c_x,\, c_y,\, w,\, h\}} \left( y_{i,p} - \hat{y}_{i,p} \right)^2$ (3)

$\mathrm{CL} = -\alpha \cdot \left( 1 - \hat{y}_{i,\mathrm{center}} \right)^{\gamma} \cdot y_{i,\mathrm{center}} \cdot \log(\hat{y}_{i,\mathrm{center}})$ (4)

Throughout the training process, images and their corresponding annotations are fed into the YOLO network [22]. The network, in turn, generates predictions for bounding boxes, class probabilities, and confidence scores. These predictions are then compared to the ground-truth data using the aforementioned loss functions. This iterative process progressively improves the model's object detection accuracy until a minimal value of the total loss is reached. 3.3 FastSAM deployment Once the YOLO model is trained, its bounding boxes serve as input for the subsequent step involving the FastSAM model. When processing the complete image, FastSAM estimates instance segmentation masks for all visible objects. Therefore, instead of processing the entire image, the YOLO-estimated bounding boxes are used to focus the attention on the relevant region containing the object, significantly reducing computational overhead. Its Transformer-based architecture then delves into this cropped image patch to generate a pixel-wise mask. 3.4 RGB and Depth matching RGB-D imaging devices typically incorporate an RGB sensor, responsible for capturing traditional 2D color images, and a depth sensor (DS) integrating left and right cameras alongside an infrared (IR) projector positioned in the middle. The IR patterns projected onto the physical object are distorted by its shape and then captured by the left and right cameras. Afterwards, the disparity information between corresponding points in the two images is used to estimate the depth of each pixel in the scene. The segments extracted as the output of FastSAM are represented as binary masks in the RGB channel of the camera. The identification of the physical object in the DS is carried out by aligning the binary masks with the depth frames (Figure 4). Within this alignment process, the transformation between the coordinate systems of the RGB camera and the depth sensor needs to be estimated, either through a calibration process or based on the default factory values. A few calibration techniques, such as [55, 56], can be used to improve the estimation of these matrices. This transformation is represented mathematically in Eq. (5):

$Z_0 \begin{bmatrix} u_0 \\ v_0 \\ 1 \end{bmatrix} = K_c \, T_{cd} \, K_d^{-1} \, Z \begin{bmatrix} u \\ v \\ 1 \end{bmatrix}$ (5)

where: \u2022 $Z_0, u_0, v_0$ are the depth value and pixel coordinates in the aligned depth image, \u2022 $Z, u, v$ are the depth value and pixel coordinates in the original depth image, \u2022 $K_c$ is the RGB camera intrinsic matrix, \u2022 $K_d$ is the DS intrinsic matrix, \u2022 $T_{cd}$ represents the rigid transformation between the RGB camera and the DS. Figure 4: Visual representation of RGB camera alignment with the depth sensor.
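A dense NumPy sketch of Eq. (5) is shown below; the function and variable names are our own, and in practice the RealSense SDK performs this alignment (including occlusion handling) via rs.align.

```python
import numpy as np

def align_depth_to_color(depth, K_d, K_c, T_cd):
    # depth: (H, W) depth map in meters; K_d, K_c: 3x3 intrinsics;
    # T_cd: 4x4 rigid transform from the depth sensor to the RGB camera.
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    pix = np.stack([u, v, np.ones_like(u)], 0).reshape(3, -1).astype(float)
    # Back-project: X = Z * K_d^{-1} [u, v, 1]^T (right-hand side of Eq. (5)).
    pts = np.linalg.inv(K_d) @ pix * depth.reshape(1, -1)
    # Rigid motion into the RGB camera frame: X' = R X + t.
    pts = T_cd[:3, :3] @ pts + T_cd[:3, 3:4]
    # Project with the color intrinsics: Z0 [u0, v0, 1]^T = K_c X'.
    proj = K_c @ pts
    z0 = proj[2]
    ok = (z0 > 0) & (depth.reshape(-1) > 0)
    u0 = np.round(proj[0, ok] / z0[ok]).astype(int)
    v0 = np.round(proj[1, ok] / z0[ok]).astype(int)
    aligned = np.zeros_like(depth)
    inb = (u0 >= 0) & (u0 < w) & (v0 >= 0) & (v0 < h)
    aligned[v0[inb], u0[inb]] = z0[ok][inb]
    return aligned
```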
3.5 3D Reconstruction of the Physical Object Once the FastSAM mask is aligned with the depth map, the identified physical objects can be reconstructed in 3D coordinates, taking into account only the region of interest (ROI). This process involves several key steps: (1) downsampling, (2) denoising, and (3) generating the 3D bounding boxes for each identified object in the point-cloud. The downsampling process, applied to the original point-cloud data, reduces computational complexity while retaining essential object information. The selected downsampling technique involves voxelization, where the point-cloud is divided into regular voxel grids and only one point per voxel is retained [57]. Following downsampling, a denoising procedure based on statistical outlier removal [58] is implemented to enhance the quality of the generated point-cloud. Outliers, which may arise from sensor noise, are identified and removed from the point-cloud. Finally, for each physical object detected in the aligned FastSAM mask, a 3D bounding box is generated within the denoised point-cloud. The bounding box generation involves creating a set of lines connecting the minimum and maximum coordinates along each axis. This set of lines is aligned with the object's position in the denoised point-cloud. The resulting bounding box provides a spatial representation of the detected object in 3D. 4 Results and discussion 4.1 Setup configuration For the experimental study, the proposed framework is tested on the detection of three commonly used physical objects: cup, computer and bottle. The setup configuration is summarized in Table 1.

Table 1: Setup configuration for the real-time FusionVision pipeline.

Name | Version | Description
Linux | 22.04 LTS | Operating system
Python | 3.10 | Baseline programming language
Camera | D435i | Intel RealSense RGB-D camera
GPU | RTX 2080 Ti | GPU for data parallelization
OpenCV | 3.10 | Open-source framework for computer vision operations
CUDA | 11.2 | Platform for GPU-based processing

4.2 Data acquisition and annotation For the data acquisition step, a total of 100 images featuring common objects, namely a cup, computer, and bottle, were captured using the RGB stream of a RealSense camera. The recorded images include several poses of the selected 3D physical objects and varying lighting conditions, to ensure a robust and comprehensive dataset for model training. The images were annotated using the Roboflow annotator for the YOLO object detection model. Additionally, data augmentation techniques were then applied to enrich the dataset, involving horizontal and vertical flipping, as well as angle tilting (Figure 5).
Figure 5: Example of acquired images for YOLO training: the top two images are original, the bottom ones are augmented images. 4.3 YOLO training and FastSAM deployment 4.3.1 Quantitative analysis A comprehensive evaluation of the trained YOLO object detection model has been conducted to assess its robustness and generalization capabilities across diverse environmental conditions. The evaluation process involves three distinct sets of images. Each set contains between 20 and 30 images, designed to represent different scenarios encountered in real-world deployment. (1) The first set of images comprises environmental and lighting conditions similar to those used during model training. These images serve as a baseline for assessing the model's performance in familiar settings, providing insights into its ability to handle variations within its training domain. (2) The second set of images introduces variability in object positions, orientations and lighting conditions compared to the training data. By capturing a broader range of scenarios, this set enables evaluating the model's adaptability to changes in object positions, orientations and lighting, while simulating real-world challenges such as occlusions and shadows. (3) The third set of images presents a more significant departure from the training data by incorporating entirely different backgrounds, surfaces and lighting conditions. This set aims to test the model's generalization capabilities beyond its training domain and to assess its ability to detect objects accurately in novel environments with diverse visual characteristics. Table 2 presents a comprehensive analysis of the YOLO model's performance in terms of Intersection over Union (IoU) and precision across the different test subsets.

Table 2: Summary of YOLO's performance in bounding box estimation compared to ground truth annotations on the 3 test subsets.

Metric | Test set | cup | bottle | computer | overall
IoU | 1 | 0.96 | 0.96 | 0.95 | 0.95
IoU | 2 | 0.93 | 0.90 | 0.91 | 0.92
IoU | 3 | 0.83 | 0.52 | 0.72 | 0.70
Precision | 1 | 0.99 | 0.96 | 0.98 | 0.98
Precision | 2 | 0.91 | 0.77 | 0.85 | 0.87
Precision | 3 | 0.60 | 0.31 | 0.54 | 0.49

Across these scenarios, the \"cup\" class consistently demonstrates superior performance, achieving high IoU and Precision scores across all test sets (0.96 and 0.99 for IoU and Precision, respectively). This suggests robustness in the model's ability to accurately localize and classify instances of cups, regardless of environmental factors or object configurations. Conversely, the \"bottle\" class exhibits the lowest IoU and Precision scores, particularly for test set (3), with respective values of 0.52 and 0.31. This indicates additional challenges in accurately localizing and classifying bottle instances under more complex environmental conditions or object orientations. Figure 6: Overall evaluation metrics of FastSAM applied on extracted YOLO bounding boxes and compared to ground truth annotation. The blue points refer to the values of the metrics and the black segments are standard deviations. In addition to the YOLO evaluation, FastSAM has been analysed by annotating one subset to create a set of ground truth instance segmentation masks. These masks have been overlapped and grouped into a single array, followed by a conversion to a binary image, which allows an overall assessment of mask prediction quality.
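The comparison just described reduces to pixel-wise confusion counts between the two binary images. A minimal sketch of the metrics discussed next (AUC is omitted because it needs soft scores rather than binary masks; names are our own):

```python
import numpy as np

def mask_metrics(pred, gt):
    # pred, gt: boolean arrays of identical shape (predicted / ground-truth mask)
    tp = np.logical_and(pred, gt).sum()
    fp = np.logical_and(pred, ~gt).sum()
    fn = np.logical_and(~pred, gt).sum()
    tn = np.logical_and(~pred, ~gt).sum()
    iou = tp / (tp + fp + fn + 1e-9)            # Jaccard index
    dice = 2 * tp / (2 * tp + fp + fn + 1e-9)   # Dice coefficient
    precision = tp / (tp + fp + 1e-9)
    recall = tp / (tp + fn + 1e-9)
    f1 = 2 * precision * recall / (precision + recall + 1e-9)
    pixel_acc = (tp + tn) / pred.size
    return dict(IoU=iou, Dice=dice, Precision=precision,
                Recall=recall, F1=f1, PixelAccuracy=pixel_acc)
```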
Afterwards, FastSAM has been applied to predict the objects' masks by considering the predicted YOLO bounding boxes as inputs. The resulting mask is also converted to a binary image and then compared to the ground truth. The evaluation of segmentation algorithms involves assessing various metrics to gauge their performance. The Jaccard Index (also known as Intersection over Union) and the Dice Coefficient [59] are key measures that evaluate the overlap between the predicted and ground truth masks, with higher values indicating better agreement. Precision quantifies the accuracy of positive predictions, while recall measures the ability to identify all relevant instances of the object [60]. The F1 Score balances precision and recall, offering a single metric that considers both false positives and false negatives. The Area under the ROC curve (AUC) assesses segmentation performance across different threshold settings by plotting the true positive rate against the false positive rate [61]. Pixel-wise Accuracy (PA) provides an overall measure of segmentation accuracy at the pixel level [62]. The obtained results are summarized in Figure 6. The mean metrics demonstrate high values across the evaluation criteria: Jaccard Index (IoU) at 0.94, Dice Coefficient at 0.92, AUC at 0.95, Precision at 0.93, Recall at 0.94, F1 Score at 0.92, and Pixel Accuracy at 0.96. However, considering the standard deviation of the metrics helps in understanding the variability of the results. Despite generally favorable mean metrics, the standard deviations show some variability across evaluations (ranging from 0.12 for Pixel-wise Accuracy to 0.20) and indicate areas for potential improvement or optimization in the algorithm. Since standard deviation analysis assumes a Gaussian distribution of the data, any disturbance (outliers due to inaccurate FastSAM mask estimation at certain sensor poses) can cause a mis-estimation (example in Figure 7). In such cases, the median absolute deviation values, ranging from 0.0029 to 0.0097, provide further insight into the spread of the data and complement the standard deviation analysis. Figure 7: Example of FastSAM mis-estimation of a segmentation mask: (a) original image, (b) ground truth annotation mask, (c) FastSAM estimated mask. 4.4 3D object reconstruction and discussion The resulting mask is then aligned with the depth frame using the default RealSense parameters (rs.align_to) and the K matrices [63]. The selected native resolution for both RGB and depth images is 640 \u00d7 480, which results in approximately 300k 3D points in the full-view reconstructed point-cloud. When applying the FusionVision pipeline, the background is removed, decreasing the number of points to around 32k and focusing the detection on the region of interest only, which leads to more accurate object identification. Before performing 3D object reconstruction, the point-cloud undergoes downsampling and denoising procedures for enhanced visualization and accuracy. The downsampling is achieved using Open3D's voxel-downsampling method with a voxel size of 5 units. Subsequently, statistical outlier removal is applied to the downsampled point-cloud with the parameters: neighbors = 300 and standard deviation ratio = 2.0.
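With Open3D, this post-processing chain and the 3D bounding-box extraction of Sec. 3.5 look roughly as follows, using the parameters stated above; the input array is a placeholder for one object's masked point-cloud.

```python
import numpy as np
import open3d as o3d

points = np.random.rand(3000, 3) * 100  # stand-in for a masked object cloud

pcd = o3d.geometry.PointCloud()
pcd.points = o3d.utility.Vector3dVector(points)

# (1) Voxel downsampling: keep one representative point per 5-unit voxel.
pcd = pcd.voxel_down_sample(voxel_size=5.0)

# (2) Statistical outlier removal: drop points far from their neighborhood mean.
pcd, kept = pcd.remove_statistical_outlier(nb_neighbors=300, std_ratio=2.0)

# (3) Axis-aligned 3D bounding box from the min/max coordinates.
bbox = pcd.get_axis_aligned_bounding_box()
print(bbox.get_min_bound(), bbox.get_max_bound())
```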
These processes result in a refined and denoised point-cloud, addressing common issues such as noise and redundant data points. This refined point-cloud serves as the basis for precise 3D object reconstruction. The real-time performance of YOLO and FastSAM has been approximated as 1/30.6 ms \u2248 32.68 fps, as the image processing involves three main components: preprocessing (1 ms), running the inference (27.3 ms), and post-processing the results (2.3 ms). When incorporating 3D processing and visualization of the raw, non-processed 3D object point-clouds, the real-time performance decreases to 5 fps, hence the need for additional point-cloud post-processing, including downsampling and denoising. The results are presented in Figure 8. Figure 8: 3D object reconstruction from the aligned FastSAM mask: (a) raw point-cloud, showing noise and a non-identified section on the computer's 3D point-cloud; (b) post-processed point-cloud obtained by voxel downsampling and statistical denoising, with accurate 3D bounding box identification. The left images visualize the YOLO detection, FastSAM mask extraction and binary mask estimation at specific positions of the physical objects within the frame. In Figure 8-(a), we can distinguish the presence of noise and wrong depth estimates, mainly due to object reflectance and inaccurate calculation of disparity. The post-processing therefore increases the accuracy of 3D bounding box detection, as shown in Figure 8-(b), while maintaining an accurate representation of the 3D object. Figure 9: Post-processing impact on 3D object reconstruction: (a) raw point-clouds (raw: 50.4%, computer: 29.8%, cup: 2.5%, bottle: 17.3%), (b) downsampled point-clouds (raw: 92.4%, computer: 4.7%, cup: 0.5%, bottle: 2.3%), and (c) downsampled + denoised point-clouds (raw: 93.5%, computer: 4.3%, cup: 0.4%, bottle: 1.8%). The impact of the different processing techniques on the distribution of points and the object reconstructions derived from a raw point-cloud is illustrated in Figure 9: \u2022 In 9-(a), the raw point-cloud displays a relatively balanced distribution among the different object categories. Notably, the computer and bottle categories contribute significantly, comprising 29.8% and 17.3% of the points, respectively, while the cup and other objects make up smaller proportions. This point-cloud presented substantial noise and inaccurate 3D estimates. \u2022 In 9-(b), where the raw point-cloud undergoes downsampling with voxel = 5 without denoising, a substantial reduction in the points assigned to the computer and bottle categories (4.7% and 2.3%, respectively) is observed, which improves the real-time performance while maintaining a good estimation of the object's 3D structure. \u2022 In 9-(c), the downsampled point-cloud is further subjected to denoising. The distribution remains similar to 9-(b), with a minor decrease in the computer and bottle categories (4.3% and 1.8%, respectively), while the point-cloud noise for each detected object is eliminated. Table 3 summarizes the frame rate evolution when applying the FusionVision pipeline step by step.
Table 3: Summary of the frame rate improvement when applying the FusionVision pipeline for 3D object isolation and reconstruction.

Process | Processing time (ms) | Frame rate (fps) | Point-cloud density
Raw point-cloud visualization | \u223c16 | up to 60 | \u223c302.8 k
RGB + depth map (without point-cloud visualization) | \u223c11 | up to 90 | n/a
+ YOLO | \u223c31.7 | \u223c34 | n/a
+ FastSAM | \u223c29.7 | \u223c33.7 | n/a
+ Raw 3D object visualization | \u223c189 | \u223c5 | \u223c158.4 k
Complete FusionVision pipeline | \u223c30.6 | \u223c27.3 | \u223c20.8 k

The fusion of 2D image processing and 3D point-cloud data has led to a significant improvement in object detection and segmentation. By combining these two disparate sources of information, we have been able to eliminate over 85% of the combined uninteresting and noisy point-cloud, resulting in a highly accurate and focused representation of the objects within the scene. This enhances scene understanding and enables reliable localization of individual objects, which can then be used as input for 6D object pose identification, 3D tracking, shape and volume estimation, and 3D object recognition. The accuracy and efficiency of the FusionVision pipeline make it particularly well-suited for real-time applications such as autonomous driving, robotics, and augmented reality. 5 Conclusion FusionVision stands as a comprehensive approach in the realm of 3D object detection, segmentation, and reconstruction. The outlined FusionVision pipeline encompasses a multi-step process involving YOLO-based object detection, FastSAM model execution, and subsequent integration into three-dimensional space using point-cloud processing techniques. This holistic approach not only amplifies the accuracy of object recognition but also enriches the spatial understanding of the environment. The results obtained through experimentation and evaluation underscore the efficiency of the FusionVision framework. First, the YOLO model was trained on a custom-created dataset and then deployed on real-time RGB frames. The FastSAM model was subsequently applied to the frames, using the detected objects' bounding boxes to estimate their masks. Finally, point-cloud processing techniques were added to the pipeline to enhance the 3D segmentation and scene understanding. This led to the elimination of over 85% of the unnecessary point-cloud for the 3D reconstruction of specific physical objects. The estimated 3D bounding boxes of the objects define well the shape of the 3D objects in space. The proposed FusionVision method shows high real-time performance, particularly in indoor scenarios, and could be adopted in several applications, including robotics, augmented reality and autonomous navigation. Deployed on an NVIDIA RTX 2080 Ti GPU with 11 GB of memory, FusionVision reaches a real-time performance of about 27.3 fps (frames per second) while accurately reconstructing the objects in 3D from the RGB-D view. Such performance underscores the scalability and versatility of the proposed framework for real-world deployment. As perspectives, the continuous evolution of FusionVision could involve leveraging the latest zero-shot detectors to enhance its object recognition capabilities. Additionally, the investigation of Large Language Model (LLM) integration for operations such as prompt-based specific object identification and real-time 3D reconstruction stands as a promising avenue for future enhancements.
Acknowledgments This work has received funding from the EURAMET programme (22DIT01-ViDiT and 23IND08-DiVision) co-financed by the Participating States and from the European Union\u2019s Horizon 2020 research and innovation program." + }, + { + "url": "http://arxiv.org/abs/2311.11592v1", + "title": "Predicting urban tree cover from incomplete point labels and limited background information", + "abstract": "Trees inside cities are important for the urban microclimate, contributing\npositively to the physical and mental health of the urban dwellers. Despite\ntheir importance, often only limited information about city trees is available.\nTherefore in this paper, we propose a method for mapping urban trees in\nhigh-resolution aerial imagery using limited datasets and deep learning. Deep\nlearning has become best-practice for this task, however, existing approaches\nrely on large and accurately labelled training datasets, which can be difficult\nand expensive to obtain. However, often noisy and incomplete data may be\navailable that can be combined and utilized to solve more difficult tasks than\nthose datasets were intended for. This paper studies how to combine accurate\npoint labels of urban trees along streets with crowd-sourced annotations from\nan open geographic database to delineate city trees in remote sensing images, a\ntask which is challenging even for humans. To that end, we perform semantic\nsegmentation of very high resolution aerial imagery using a fully convolutional\nneural network. The main challenge is that our segmentation maps are sparsely\nannotated and incomplete. Small areas around the point labels of the street\ntrees coming from official and crowd-sourced data are marked as foreground\nclass. Crowd-sourced annotations of streets, buildings, etc. define the\nbackground class. Since the tree data is incomplete, we introduce a masking to\navoid class confusion. Our experiments in Hamburg, Germany, showed that the\nsystem is able to produce tree cover maps, not limited to trees along streets,\nwithout providing tree delineations. We evaluated the method on manually\nlabelled trees and show that performance drastically deteriorates if the open\ngeographic database is not used.", + "authors": "Hui Zhang, Ankit Kariryaa, Venkanna Babu Guthula, Christian Igel, Stefan Oehmcke", + "published": "2023-11-20", + "updated": "2023-11-20", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "label": "Original Paper", + "paper_cat": "Semantic AND Segmentation AND Image", + "gt": "Predicting urban tree cover from incomplete point labels and limited background information", + "main_content": "INTRODUCTION Trees are a vital component of our ecosystems. They are vital for sustaining the biodiversity of various lifeforms and provide important services such as food, shelter, and shade [1]. In an urban setting, trees also offer benefits for physical and mental health [32]. However, trees are also highly vulnerable to change in climatic conditions [2]. Increase in global temperatures is associated with an increased global tree mortality rate, which reduces the ecosystem functioning and impacts their role in carbon storage [24]. Monitoring the amount of trees is therefore vital to devise mitigation and adaptation measures against climate change. In this paper, we propose a method to train a deep learning model to predict tree cover in an urban setting with sparse and incomplete labels. This work differs from existing studies in several key aspects. 
First, our work is unique in combining several different open data sources. To the best of our knowledge, no previous study has evaluated the potential of authority-managed tree records and crowd-sourced annotations from an open geographic database for tree mapping. Second, we focus on urban areas, which are relatively under-explored in other work [16, 29], although many free data sources exist. Third, existing work relies on strong preprocessing and fully annotated data in which the object has either been accurately delineated [7, 20] or been annotated by a bounding box or at least a point label. An example of point labels is given by Ventura et al. [30], who manually annotated 100 000 trees from eight cities in the USA and collected multiple years of imagery. Also, Beery et al. [4] incorporated different sources of public data sets, but required multiple steps of data cleaning, resulting in nearly half of the tree records being removed. In contrast, we exclusively utilize freely available data, both for input imagery and labels, which requires no annotation effort for training. It is important to acknowledge that combining different sources of public data presents unique challenges, such as imbalanced classes and noisy labels, given that these data are not originally designed to be used together (see Fig. 1). To make full use of the incomplete and sparsely labeled tree data, as well as to reduce the uncertainty of the background class, we propose a mask regime that carefully selects pixels of trees and background with a high probability of being that class (see the sketch below). With this mask regime, we show that our approach is able to utilize this newly combined dataset to predict urban trees with a balanced accuracy of 82% on sparsely labeled data and 84% on fully annotated data. Figure 1: Aerial image from Hamburg, Germany with street trees overlaid in red. The street trees dataset is incomplete since it does not contain any information about the trees on private land, public parks, forests, or farms. We also introduce an objectness prior in the loss function, inspired by the weak supervision literature. As originally proposed in [3], pseudo-labels are derived from the predictions of a model pretrained on another dataset with the same task. We instead derive pseudo-labels from an adapted watershed algorithm [15] to increase the extent of the object sensed by the model under point-level supervision, without requiring a model pretrained on the same task. Unlike common weak supervision scenarios that assume sparse but fully annotated data, our incomplete annotations can lead to an incorrect objectness prior. Consequently, we also applied our mask regime to the objectness prior and restricted learning of the target class to the area close to our tree labels. Our ablation study showed that the masking regime is always beneficial, while the inclusion of an objectness prior is highly dependent on its quality. To summarize, our main contributions are as follows: \u2022 A novel masking approach for combining noisy crowd-sourced data with precise point labels. \u2022 A dataset created from publicly available data, bringing forward the challenge of incomplete and sparse labels, as well as a hand-delineated test set. \u2022 An evaluation and comparison of different techniques to include the novel masking scheme.
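A minimal sketch of the masking regime as we would implement it (all names are our own assumptions): labels exist only inside small disks around tree points and inside the trusted background polygons, and every other pixel is ignored by the loss, so unannotated trees cannot be punished as background.

```python
import torch
import torch.nn.functional as F

def masked_loss(logits, tree_disks, background):
    # logits: (B, 2, H, W) network output.
    # tree_disks: (B, H, W) bool, True inside small disks around tree points.
    # background: (B, H, W) bool, True inside buffered OSM polygons.
    target = torch.full(tree_disks.shape, -100, dtype=torch.long,
                        device=logits.device)   # -100: ignored by the loss
    target[background] = 0                      # trusted non-tree pixels
    target[tree_disks] = 1                      # trusted tree pixels (override)
    return F.cross_entropy(logits, target, ignore_index=-100)
```

All pixels outside the two trusted regions, in particular unannotated trees, contribute nothing to the gradient.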
2 RELATED WORK As in many other fields, deep learning models have become the state-of-the-art method for mapping trees and tree cover in aerial, satellite and LiDAR imagery. However, training these models in a supervised learning setting requires large volumes of manually annotated data, which is often tedious and expensive to create and requires domain expertise. Typically, these methods are trained with dense labels, such as full delineations of trees. For example, [7] manually annotated 89 899 trees on very high-resolution satellite imagery for training their deep learning models. Recent research shows that semi- and weakly supervised learning have made great progress in the semantic segmentation of images [35]. Weakly supervised learning aims to learn from a limited amount of labels in comparison to the entire image [31, 33, 35]. Other works distinguish between different levels of weakly supervised annotations, such as bounding boxes [9], scribbles [17], points [15, 34], image labels [23], pixel-level pseudo-labels generated with class activation maps [12, 25, 27], and also text-driven semantic segmentation [18]. While fully labelled data is limited, point labels are also used in instance segmentation methods: [13] introduced a novel learning scheme for instance segmentation with point labels, and [14] proposed point-level instance segmentation with a two-branch network consisting of a localisation branch and an embedding branch. Interactive segmentation with point labels started a few decades back and is still an active research topic [6]. These segmentation models started training with point labels that annotate entire objects. Lin et al. [17] proposed ScribbleSup, based on a graphical model that jointly propagates information from scribbles and points to unmarked pixels and learns network parameters without a well-defined shape. Maninis et al. [19] proposed a framework with point-level annotations that follow specific labeling instructions, such as the left-most, right-most, top, and bottom pixels of segments. Bearman et al. [3] proposed a methodology incorporating an objectness potential in the training loss function of segmentation models with image- and point-level annotations. Li et al. [15] utilised an objectness prior similar to [3], but instead of a convolutional neural network (CNN) output they utilize distances in pixel and colour space, meaning that the further away in the image and the more different the colour, the more the objectness decreases. Zhang et al. [34] proposed a contrast-based variational model [22] for semantic segmentation that supports reliable complementary supervision to train a model for histopathology images. Their proposed method incorporates the correlation between each location in the image and annotations of in-target and out-of-target classes. The weak supervision part of our research is inspired by [3, 15, 34]: as we have only a single point for each tree, we use point labels in combination with denser background information while considering an objectness prior. In contrast to these scenarios, we consider the added challenge of incomplete annotations, meaning some relevant objects in an image might not be annotated at all. 3 OPENCITYTREES DATASET Public agencies often maintain valuable records of trees and other public attributes such as roads, parking areas, and buildings. These datasets are, however, often noisy due to differences in collection techniques, lack of common data collection standards, noisy sensors, and lack of records of temporal changes.
Moreover, they are mostly not developed for the goal of training supervised deep learning models or for use in conjunction with other modalities such as aerial or satellite imagery. As such, they are potentially underutilized in research. To demonstrate their usefulness, we created a new dataset for weakly supervised segmentation from such records (https://doi.org/10.17894/ucph.b1aa4ca2-9a4b-40d0-aa87-c760e69bf703).

3.1 Input images
To demonstrate the usefulness of public but incomplete datasets, we use aerial images from Hamburg, Germany as input for our models. These images contain 3 channels (RGB) at a 0.2 m/pixel resolution and were downloaded from the data portal of the Spatial Data Infrastructure Germany (SDI Germany, https://www.geoportal.de/portal/main/). The images were captured in May 2016. As seen in Fig. 1, individual features such as trees, buildings, and cars on the streets are visible to the human eye. We downloaded 27 image tiles of 5000 × 5000 pixels (i.e., each covering a 1 km × 1 km area) within the bounds (9.9479070E, 53.4161744N, 9.9684731E, 53.6589539N). These tiles extend from the north to the south border of Hamburg but are limited to a 1 km strip close to the city center. Hamburg is situated on the banks of the Elbe river with a densely populated city center. Along its border (i.e., away from the city center), the city-state also contains suburbs, farms, and forested areas. The chosen images capture all these different characteristics of the city along the north-south gradient. Since the images were captured in early spring, many of the trees are without leaves, making certain trees more challenging to identify. When designing the dataset, we considered that additional height data derived from LiDAR could potentially enhance results. However, we decided against including it because of practical constraints associated with LiDAR data availability and collection. High-resolution LiDAR data (e.g., sub-meter, similar to our RGB source) often remains inaccessible due to regulatory limitations, especially concerning drone or plane flights over urban areas. Additionally, acquiring LiDAR measurements is more costly than RGB measurements, which could make frequent temporal analyses of urban tree cover infeasible. For instance, the open data portal we used does not have sub-meter height measurements available for Hamburg.

3.2 Label data
Two sources of labeled data are combined.
Ground truth for trees. The Authority for Environment and Energy of the city of Hamburg maintains a list of all street trees (http://suche.transparenz.hamburg.de/dataset/strassenbaumkataster-hamburg7), as recorded on the 6th of January 2017. The dataset contains various attributes of individual trees, such as location, height, width, species, age, and condition. However, as the name suggests, this information is limited to the trees along the streets of Hamburg and does not include information about trees on private land, in public parks, or in forested areas. Unlike other data usually used in point supervision, where each object is assumed to be annotated with at least one point, we have incomplete annotations, increasing the ambiguity of the background class. In the area of interest, the dataset contains information about 11 366 trees from 136 unique species. In Fig. 1, trees in the street trees dataset are overlaid as red circles. Each tree is provided as a point referenced in a local reference system (EPSG:32632 WGS 84).
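As a concrete illustration of the preprocessing this implies, the sketch below aligns such point records with an aerial tile using GeoPandas and rasterio. It is not the authors' pipeline: the file names, the cadastre's export format, and the tile georeferencing are hypothetical.

```python
import geopandas as gpd
import rasterio
from rasterio.transform import rowcol

# Hypothetical exports of the street-tree cadastre and one aerial tile.
trees = gpd.read_file("strassenbaumkataster_hamburg.gpkg")

with rasterio.open("hamburg_tile.tif") as tile:   # 5000 x 5000 px at 0.2 m/px
    trees = trees.to_crs(tile.crs)                # reproject points into the image CRS
    rows, cols = rowcol(tile.transform,
                        trees.geometry.x.values,
                        trees.geometry.y.values)

# Keep only tree centroids that fall inside this tile.
points = [(r, c) for r, c in zip(rows, cols) if 0 <= r < 5000 and 0 <= c < 5000]
```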
However, the point location of a tree label can be inaccurate: it might not overlap with the center of the tree or, in the worst case, with any part of the tree, due to geo-location errors. Another challenge with the dataset is that the distribution of species among street trees may differ significantly from the distribution of tree species in forests, parks, farms, or gardens. As a second source of ground truth data, we use OpenStreetMap (OSM) [10]. Within these bounds and with the tag 'natural':'tree', OSM contains the locations of 6375 trees. Out of these, 145 trees contain species information (24 unique species). These OSM data offer information about trees in private and public areas in addition to street trees.

Ground truth for non-tree classes. Table 1 provides an overview of the objects that we use to define the non-tree class. The non-tree classes are dominated by buildings, which provide relevant information about different construction materials and roof types. While the area contains abundant roads, they are a tricky class to use for true negatives, since trees are often planted next to roads and large parts of tree canopies overlap with them. We therefore only used road data with an associated area (i.e., stored as a polygon or multi-polygon). We used the OSMNX library to download data from OSM [5]. Sports pitches, which include grassed surfaces such as soccer pitches, are limited to 135 instances; they are the only class that provides information on grass, which is easy to confuse with trees.

Table 1: Description of objects defining the non-tree class. The buffer distance is in meters; negative buffers shrink the object. Only vectors with non-negative area were chosen.
Type | OSM tag | Count | Buffer | Comments
Buildings | 'building':True | 23 075 | -5 m | Buildings of all types
Roads | 'highway':True | 111 | -7 m | Mostly around parking areas or bus terminals
Sports pitches | 'leisure':'pitch' | 135 | -7 m | Soccer pitches and similar types of grass surfaces
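To make the buffering in Table 1 concrete, a rough sketch of how the shrunken non-tree polygons could be queried and negatively buffered is shown below. The bounding box, the metric CRS (EPSG:25832), and the exact OSMNX call signature, which changes between library versions, are assumptions.

```python
import osmnx as ox

queries = [                       # (OSM tags, buffer in metres) as in Table 1
    ({"building": True}, -5.0),
    ({"highway": True}, -7.0),
    ({"leisure": "pitch"}, -7.0),
]

non_tree = []
for tags, buffer_m in queries:
    gdf = ox.features_from_bbox(bbox=(9.948, 53.416, 9.968, 53.659), tags=tags)
    gdf = gdf[gdf.geom_type.isin(["Polygon", "MultiPolygon"])]  # area objects only
    gdf = gdf.to_crs(epsg=25832)            # metric CRS so the buffer is in metres
    shrunk = gdf.geometry.buffer(buffer_m)  # negative buffer shrinks the polygon
    non_tree.append(shrunk[shrunk.area > 0])
```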
3.3 Challenging aspects of the data
By combining tree inventories and geographic data from existing public records, we create a rich dataset without the need for additional, labor-intensive annotation. Public records maintain valuable information about trees and other public attributes. However, using incomplete public records for tree prediction also introduces a number of challenges:
Sparse labels: The ground truth for trees is given as point labels that cover most public streets, some public parks, and a few private places. These annotations are incomplete and represent only a small portion of urban trees. In addition, the street trees are sparsely distributed.
Presence of noise: Although the tree census data and aerial images were obtained at relatively close points in time, changes in the tree population might have occurred during the time gap: trees could have been removed or died, and new trees might have been planted. Moreover, due to the aforementioned geo-location errors, a pixel merely near a tree could be labeled as the tree centroid.
Image quality: The quality of the aerial imagery can vary for different tree species. The images were captured in early spring, when deciduous trees have not yet grown leaves. In addition, the renewal of growth in trees near streets may be influenced by extended periods of illumination and emissions from the streets [21]. As a result, these trees may not be well represented in the aerial image.
Invisible trees: Some trees are located within the shadows of nearby tall buildings, darkening the image and increasing potential class confusion between trees and shadows.

4 LEARNING FROM INCOMPLETE & SPARSE LABELS
Our main challenge in training a tree segmentation model is obtaining accurate and effective labels. Coming from open-data sources, however, the labels are incomplete, meaning that not all trees or non-tree objects in an area are annotated. This incomplete labeling deviates from the typical definition of weakly supervised learning [3, 15], where sparse labels (e.g., points, scribbles, bounding boxes, ...) are assumed to be available for every relevant object. In addition, tree labels are only available at the point level, meaning a single point represents a tree although the tree canopy encompasses a larger area. The non-tree labels are taken from OSM and thus describe only parts of the image; moreover, we chose to shrink their shapes to avoid overlap with potential trees that are not covered by our dataset (e.g., a tree reaching over a building), see Table 1. We frame the learning task as binary semantic segmentation of trees and introduce concepts to deal with the incomplete, sparse labels. To that end, we consider a training set $T = \{(x_1, y_1), \ldots, (x_n, y_n)\} \subset X \times Y$ with images $x_i \in X = \mathbb{R}^{w \times h \times c}$, segmentation masks $y_i \in Y = \{-1, 1\}^{w \times h}$, and number of samples $n$. Further, $w$, $h$, and $c$ correspond to the width, height, and number of input channels, respectively. The pixels containing the non-tree objects are considered negative class samples. In our training dataset, we treat the pixels within a 60 cm radius (7 × 7 pixels) around the point coordinate of a tree as positive class labels, which increases the number of positive training labels substantially. Training in such a setting is non-trivial. For example, learning semantic segmentation given only point labels is challenging because information about the spatial extent of the objects in question is limited. Previous research introduces this spatial-extent information by means of an objectness prior. The objectness prior gives an estimate of the class likelihood per pixel. As shown in Figure 2, given the locations of the trees, the algorithm estimates the potential spatial extent of each tree. This prior can come from models pretrained on similar tasks [3], but also from classic algorithms, e.g., those inspired by watershed segmentation [15].

Figure 2: Objectness prior maps (col 2) and instance areas (col 4) were generated using input images (col 1) and locations of tree centroids (col 5) according to [15]. The boundaries (col 3) are derived where instance areas are touching.
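A simplified sketch of such a prior is given below. It replaces the adapted watershed distances of [15] with a plain Euclidean distance transform and an assumed normalization, so it is illustrative rather than a reproduction of the computation used in the paper.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def objectness_prior(point_mask: np.ndarray, alpha: float = 10.0) -> np.ndarray:
    """point_mask: (H, W) boolean array that is True at tree centroids."""
    d = distance_transform_edt(~point_mask)  # distance to the nearest centroid
    d = d / max(d.max(), 1.0)                # scaling is our assumption, not [15]'s
    return np.exp(-alpha * d**2)             # pseudo-probability with fast decay
```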
Our approach uses the following two loss functions in conjunction:
$$\mathcal{L} = \mathcal{L}_{\mathrm{sup}}(f(x) \odot m,\, y \odot m) + \mathcal{L}_{\mathrm{obj}}(f(x) \odot r,\, o \odot r,\, \hat{r} \odot r) \cdot \beta, \qquad (1)$$
where $\mathcal{L}_{\mathrm{sup}}$ is the supervised loss (e.g., binary cross-entropy (BCE)) that learns from the labeled data, and $\mathcal{L}_{\mathrm{obj}}$ is the objectness loss in which the prior information is utilized. Here, $\odot$ denotes the selection operator that chooses the elements where the learning mask $m$ is set to 1 and returns them as a flattened vector. The arguments of $\mathcal{L}_{\mathrm{obj}}$ are the predictions $f(x)$, the objectness prior $o$, and the instance regions $\hat{r}$. Note that, since no CNN pretrained on tree segmentation [3] was easily available, we utilize the method described by [15] to calculate $o$ and $\hat{r}$. To obtain $o$, we calculate the distance matrix $\Delta \in \mathbb{R}^{w \times h}$ by applying the adjusted watershed algorithm [28] as in [15], with the point labels used as markers, and then transform these distances into a pseudo-probability distribution $o = e^{-\alpha \Delta^2}$, with $\alpha = 10$ to create a fast decay of values farther away from an actual label. The current settings for these pseudo-probabilities were explored in a preliminary study on the training set, but the ones provided by [15] turned out to perform best. From the same adjusted watershed output, we use the watershed instance assignments as $\hat{r}$. The factor $\beta$ is a trade-off parameter that changes the influence of the objectness loss. See Figure 2 for an exemplary input and the objectness-related attributes. In our incomplete-label setting, the generated objectness can only capture the trees indicated by point labels. Therefore, to represent where labels are available, we declare two learning masks $m \in \{0, 1\}^{w \times h}$ and $r \in \{0, 1\}^{w \times h}$, where 1 means a label is present and 0 corresponds to missing label information. These masks can be defined in several ways, as we explore in the experimental section, and can be considered one of the main contributions of this paper. For the objectness loss, we extend the binary cross-entropy similarly to [3, 15]:
$$\mathcal{L}_{\mathrm{obj}} = -\frac{1}{|o|} \sum_{i=1}^{|o|} \mathrm{BCE}\big(f(x) \odot \hat{r}_i,\; o \odot \hat{r}_i\big), \qquad (2)$$
where each tree instance contributes its own loss value depending on the instance region $\hat{r} \in \{0, 1\}^{|o| \times w \times h}$. The number of tree instances $|\hat{r}|$ changes for each sample, as does the number of pixels in each region. Averaging inside the instance sum effectively weights each instance the same, regardless of its size.
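A condensed PyTorch sketch of this masked objective follows. Tensor layouts are assumptions, predictions are taken to be sigmoid probabilities, targets in {-1, 1} are mapped to {0, 1}, and the sign convention of Eq. (2) is folded into the standard (positive) BCE.

```python
import torch
import torch.nn.functional as F

def masked_loss(pred, y, m, o, r, r_hat, beta=1.0):
    """pred, o: (H, W) probabilities; y: (H, W) in {-1, 1};
    m, r: (H, W) {0, 1} learning masks; r_hat: (N, H, W) instance regions."""
    target = y.clamp(min=0).float()                 # map {-1, 1} -> {0, 1}
    sup = F.binary_cross_entropy(pred[m.bool()], target[m.bool()])  # Eq. (1), left term
    terms = []
    for inst in r_hat.bool():                       # one BCE term per tree instance
        sel = inst & r.bool()
        if sel.any():
            terms.append(F.binary_cross_entropy(pred[sel], o[sel]))
    obj = torch.stack(terms).mean() if terms else pred.new_zeros(())  # Eq. (2)
    return sup + beta * obj
```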
5 EXPERIMENTAL EVALUATION IN HAMBURG
5.1 Ablation study
The choice of the masks $m$ and $r$ is crucial in our sparse and incomplete label setting. To that end, we compare five different training scenarios, as shown in Table 2. The baseline scenario only uses the supervised loss without any masking; the public authority and OSM tree labels are expanded from a point to a disk $m_{\mathrm{disk}}$ of radius 1.5 m, indicated by $m_{\mathrm{disk}} = 1$. The second scenario, called Obj, uses the objectness loss over the entire image in combination with the supervised loss; we mask out all pixels with negative labels except those on the boundaries of the instance regions $\hat{r}$ (see Figure 2). Obj is a reimplementation of [15]. The third scenario uses the supervised loss along with our proposed masking scheme and is termed Mask. Here we do not consider the objectness loss and only evaluate the supervised loss where we have positive labels (indicated by $y = 1$) and where we have information about the shrunken OSM non-tree objects $m_{\mathrm{OSM}}$ (indicated by $m_{\mathrm{OSM}} = 1$). In addition, within the shrunken OSM non-tree objects, we remove negative pixels that are within 1.5 m of a positive label. In the fourth scenario, MaskObj, we combine our masking approach with objectness by employing the objectness loss but restricting it to the 1.5 m radius around the positive labels. Lastly, MaskObjThresh adds an additional constraint to the objectness by ignoring all pseudo-probabilities below 0.2, which reduces the learning about the negative class in $\mathcal{L}_{\mathrm{obj}}$.

5.2 Network architecture, loss function, and hyperparameters
To address the tree segmentation task, we employ a fully convolutional network based on the U-Net architecture [26].
Experimental settings. U-Nets have previously been used, among other things, for semantic segmentation of trees in satellite imagery [7]. We adapted the U-Net architecture by applying batch normalization [11] instead of dropout layers and by replacing ReLU with ELU [8] as the activation function. We use binary cross-entropy (BCE) as our supervised error measure. The aerial images were split into 300 × 300 patches, and a batch consists of 36 patches. For training and hyperparameter optimization, the dataset was split into an 80% training set (3566 patches) and a 20% validation set (788 patches). To improve training stability, we accumulated gradients over 14 batches (i.e., 504 images) before each optimizer step. The model is trained for 500 epochs, and the final weights were chosen w.r.t. the best recall score on the validation set.
Evaluation on sparse and dense labels. The model's performance was evaluated with two types of data annotations: point annotations and dense object annotations. These two datasets are spatially independent. First, we evaluated the model on the point-annotated data from 28 tiles (4169 patches) within the bounds (9.962748E, 53.407065N, 9.83603E, 53.658832N), a 1 km × 28 km stripe adjacent to the training data stripe. None of these tiles were used for training or intra-model validation, and the ground truth for them was created in exactly the same way as described in Section 3. None of the pseudo-labeling (e.g., extending point labels to 4 pixels or to a disk) was utilized during evaluation, meaning that for point labels only the corresponding pixel is considered, and for the background class only the negatively buffered area.
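The gradient accumulation described in Section 5.2 could be implemented as in the following sketch; the model, data loader, and loss function are placeholders rather than the authors' code.

```python
def train_epoch(model, loader, optimizer, loss_fn, accum_steps=14):
    """One optimizer step per `accum_steps` batches (14 x 36 patches = 504 images)."""
    model.train()
    optimizer.zero_grad()
    for step, (images, targets, masks) in enumerate(loader):
        loss = loss_fn(model(images), targets, masks)
        (loss / accum_steps).backward()  # scale so gradients match one large batch
        if (step + 1) % accum_steps == 0:
            optimizer.step()
            optimizer.zero_grad()
```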
The sparse street tree dataset and OSM had information on 14 137 trees within these bounds. To evaluate our model's performance on dense object prediction, we manually annotated a tile covering a 1 km² area, 3 km to the east of our training data. The delineation work was done using QGIS and is mainly based on the input image, cross-referenced with Bing and Google satellite maps. The annotation was then verified within the authors' group, which eliminates some bias. To utilize this dataset for an unbiased tree cover estimate, we split it further into a model selection set and a test set. Only the best model from the model selection set, in terms of IoU, was applied to the test set.

5.3 Sparse Label Results
The results on the sparse label test set are given in Table 2. It is crucial to acknowledge the highly imbalanced nature of the dataset when evaluating with sparse labels. Due to this significant imbalance, the number of false positives can be far greater than the number of true positives, leading to a substantially low precision value. Specifically, due to the class imbalance, with 2448 times more negative class pixels than positive class pixels, the precision of our models was only around 3%. Therefore, we focus on the recall (sensitivity) of the target class and the balanced accuracy (BA) to evaluate model performance on sparse labels. The baseline model performed worst and appears to mainly predict the background class. Performing best were the Mask model w.r.t. recall, with 90%, and MaskObj w.r.t. BA, with 84%. Even though the BA of Obj is close to that of the mask models, at 78%, its recall is comparatively low at 59%. In Figure 3, exemplary target and prediction segmentation masks are shown.

Figure 3: Three examples of target (top) and possible predicted segmentation (bottom). The predicted positive class is overlaid in green. In the target examples, a red overlay indicates the negative class, and transparency means the learning mask is 0. For the predicted segmentation, only the positive class is shown and the negative class is transparent.
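For reference, the sparse-label scores reported here reduce to class-wise recalls evaluated only at pixels where sparse labels exist; a minimal sketch is shown below.

```python
import numpy as np

def sparse_scores(pred, target, label_mask):
    """pred, target: (H, W) boolean maps; label_mask: True where labels exist."""
    p, t = pred[label_mask], target[label_mask]
    recall_tree = (p & t).sum() / max(t.sum(), 1)        # sensitivity on trees
    recall_bg = (~p & ~t).sum() / max((~t).sum(), 1)     # specificity on background
    return recall_tree, 0.5 * (recall_tree + recall_bg)  # (recall, balanced accuracy)
```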
Table 2: Results on sparse and delineated data across different ablation settings. For the model selection set of the delineated labels, we compare the intersection over union of the tree class (IoU), F1, and balanced accuracy (BA) scores. The sparse labels are compared w.r.t. their recall and BA scores. Additional masks are the shrunken OSM non-tree objects $m_{\mathrm{OSM}} \in \{0, 1\}^{w \times h}$, a 1.5 m disk $m_{\mathrm{disk}} \in \{0, 1\}^{w \times h}$ around each positive value in $y$, the bounds $b$ between instances derived from the instance region map $r$, and a mask of ones $1^{w \times h} \in \{1\}^{w \times h}$. Results of the baseline model were not calculated for the delineated data set, since the model was discarded due to its sparse label performance.
Name | Description | Setting | IoU_d | F1_d | BA_d | Recall_s | BA_s
baseline | Neither masking nor objectness; positive labels enlarged to a circle with 1.5 m radius. | $\beta = 0$, $m = 1^{w \times h}$, $y = m_{\mathrm{disk}}$ | n/a | n/a | n/a | 0.0005 | 0.5002
Obj | Reimplementation of [15]. | $\beta = 1$, $m = y \cup b$, $\hat{r} = 1^{w \times h}$ | 0.1191 | 0.5479 | 0.5575 | 0.5908 | 0.7844
Mask (ours) | Only supervised loss and masking out unknown areas. | $\beta = 0$, $m = y \cup (m_{\mathrm{OSM}} \setminus m_{\mathrm{disk}})$ | 0.4839 | 0.7551 | 0.8119 | 0.8994 | 0.8205
MaskObj (ours) | As [15] but restricting the objectness loss to a 1.5 m radius around points. | $\beta = 1$, $\hat{r} = m_{\mathrm{disk}}$, $m = y \cup (m_{\mathrm{OSM}} \setminus m_{\mathrm{disk}})$ | 0.3364 | 0.7289 | 0.6978 | 0.7771 | 0.8399
MaskObjThresh (ours) | As MaskObj but removing objectness smaller than the specified threshold ($t = 0.2$). | $\beta = 1$, $\hat{r} = m_{\mathrm{disk}} \cap (o \geq t)$, $m = y \cup (m_{\mathrm{OSM}} \setminus m_{\mathrm{disk}})$ | 0.4805 | 0.7660 | 0.7870 | 0.8345 | 0.8135

5.4 Delineation Results
The intersection over union (IoU), F1, and BA scores on the model selection set can be seen in Table 2. We omitted the baseline because of its subpar results on the sparse test set. The class imbalance changes since these annotations are fully delineated; in particular, we now have 3.42 negative pixels for each positive pixel. This change makes the use of precision viable, which is why we consider the F1 score. The original masking model Mask performed best in IoU and BA, even though the difference in IoU compared to MaskObjThresh is marginal. Obj only shows a BA of 56%, which is much smaller than the 78% on the sparse data, revealing a lack of generalization for this approach. Using any kind of masking scheme seems to improve the results. In Figure 4, we show the normalized and unnormalized confusion matrices on the model selection set. Note that Obj and MaskObj underpredict the positive class, which corresponds to a low accuracy and a high false-negative rate. For MaskObjThresh and Mask, the accuracy of the positive class is considerably higher, even though the false positive rate increases. Interestingly, thresholding the objectness prior improves performance compared to the model without thresholding, indicating that the prior holds misleading information regarding the background class. Based on the results on the model selection set, we decided that the best model is the one without any objectness loss but with masking (Mask). This decision is based on the better metrics, but also on the simpler learning setup: our masking regime only requires a sensible choice of the excluded areas, while the objectness prior additionally requires a choice of how to calculate it. On the delineated test set, the Mask model achieved an IoU of 0.4253, an F1 score of 0.7393, and a BA of 83.63%. These results are comparable to the other sets, indicating good generalization performance. Finally, the predictions on the dense label set are shown in Figure 5. Within the bounds of the test area (591 609 m²), we detected a total tree cover of 177 142 m² (29.9%), compared to the annotated 96 946 m² (16.4%). This shows an overestimation, but as seen in the map, some of the trees were missed in the annotated dataset or were difficult to delineate without ambiguity.

5.5 Discussion of Ablation results
Baseline. The baseline segmentation model, trained on incompletely labeled data with ambiguous information about the target and background classes, is not able to learn tree features and thus predicts all pixels as background. As shown in the first row of Table 2, the recall of the target class is close to 0 when evaluated on the sparse labels. The model mainly predicts the background class, which aligns with our assumption that the class imbalance is challenging to overcome without our masking.
Effect of Objectness Prior. By introducing the objectness prior in the Obj model, pixels spatially and chromatically close to a tree centroid are given a higher probability of belonging to that tree. Effectively, this expands learning from only the known tree pixels to possible tree pixels as well. However, our task differs from [15] in that many objects are unlabeled, which makes the boundaries generated from instance areas less representative of the background class.

Figure 4: Confusion matrices across the different ablation settings, created from the delineated model selection dataset. The normalized confusion matrix is shown in bold font (top number) and through color, while the lower number represents the absolute number of samples. (a) Mask: our initial method that utilizes masking; (b) Obj: the model as presented in [15]; (c) MaskObj: combining masking and objectness; (d) MaskObjThresh: restricting learning to positive labels for the objectness loss. We do not show the baseline model here, because it mainly predicted the negative class (e.g., in the first column both values are close to 1).

Our results show that objectness helps when training in a weakly supervised setting with imbalanced data, but the incomplete labeling is not accounted for in this case, and the models including our masking regime (MaskObj and MaskObjThresh) outperform the Obj model.
Effect of Mask. The masking regime is crucial in an imbalanced and sparsely labeled data setting. Without masking, the baseline model failed to predict the target class completely. As shown in Table 2, the mask regime introduced larger performance improvements than the objectness prior, with a significantly improved IoU score for the models with masking. Among those models, the Mask model achieved the highest IoU and BA on the delineated data and the best recall score on the sparse data. In summary, the mask regime is comparatively simple to implement, costs fewer computational resources, and requires less fine-tuning than using the objectness prior.
Effect of combining Objectness Prior and Mask. The mask regime consistently improved the performance of the models learning from the objectness prior, but there was no improvement compared to the model applying masking without the objectness prior. A possible reason is that in our complicated urban environment the spatial context of trees can vary considerably (e.g., small and large trees), and the objectness map may highlight an object around a tree that is actually not a tree (see Figure 2). Since the extent and cutoff of the probabilities depend on hyperparameters chosen when creating the prior, a given setting may perform well in one scenario but not in all of them. This brings us to the conclusion that the objectness prior, in its current form, does not yield any benefits compared to simply applying the mask regime on its own.
6 CONCLUSION
In this paper, we create a new tree segmentation dataset from public data to train a deep learning model for semantic segmentation of urban trees. This dataset is challenging because it consists of incomplete and sparse point labels for trees and carefully selected background objects from OSM. We address this label incompleteness and sparsity by incorporating a loss-masking regime, informed by domain knowledge, into our model design. Further, we expand on the weakly supervised technique of learning from objectness priors by utilizing the same masking regime. Our evaluation shows that the best performance was achieved when only using our masking regime, with a test performance on a fully delineated set of 0.43 IoU and 84% balanced accuracy. This indicates that the chosen objectness prior was not helpful for this task, while our mask regime is beneficial when dealing with incomplete and sparsely annotated data; this even holds when combining it with other approaches, such as the Obj model. Moreover, our mask regime is simple to implement, has lower computational resource requirements, and requires less fine-tuning of hyperparameters, whereas including Obj or its variants requires identifying and computing an appropriate objectness prior $o$ for each new task and training sample; this overhead is an inherent aspect of objectness methods. Our results on both point labels and a manually delineated evaluation set demonstrate the hidden potential of public datasets for mapping urban trees. In the future, we will investigate the usefulness of self-supervised networks pretrained on large datasets in conjunction with weak labels to further improve the mapping performance. Evaluation in other urban areas would also be of interest to validate the generalization performance. Since our current masking prior is calibrated to our Hamburg dataset, it may not transfer well to new areas; we would therefore like to investigate whether the masking prior could be learned or adjusted using unsupervised or semi-supervised networks.

ACKNOWLEDGMENTS
This work was supported by the research grant DeReEco from VILLUM FONDEN (grant number 34306), the PerformLCA project (UCPH Strategic plan 2023 Data+ Pool), and the grant "Risk-assessment of Vector-borne Diseases Based on Deep Learning and Remote Sensing" (grant number NNF21OC0069116) by the Novo Nordisk Foundation." + }, + { + "url": "http://arxiv.org/abs/2404.10322v1", + "title": "Domain-Rectifying Adapter for Cross-Domain Few-Shot Segmentation", + "abstract": "Few-shot semantic segmentation (FSS) has achieved great success on segmenting\nobjects of novel classes, supported by only a few annotated samples. However,\nexisting FSS methods often underperform in the presence of domain shifts,\nespecially when encountering new domain styles that are unseen during training.\nIt is suboptimal to directly adapt or generalize the entire model to new\ndomains in the few-shot scenario. Instead, our key idea is to adapt a small\nadapter for rectifying diverse target domain styles to the source domain.\nConsequently, the rectified target domain features can fittingly benefit from\nthe well-optimized source domain segmentation model, which is intently trained\non sufficient source domain data. Training the domain-rectifying adapter requires\nsufficiently diverse target domains.
We thus propose a novel local-global style\nperturbation method to simulate diverse potential target domains by\nperturbing the feature channel statistics of the individual images and the\ncollective statistics of the entire source domain, respectively. Additionally,\nwe propose a cyclic domain alignment module to facilitate the adapter in\neffectively rectifying domains using a reverse domain rectification\nsupervision. The adapter is trained to rectify the image features from diverse\nsynthesized target domains to align with the source domain. During testing on\ntarget domains, we start by rectifying the image features and then conduct\nfew-shot segmentation on the domain-rectified features. Extensive experiments\ndemonstrate the effectiveness of our method, achieving promising results on\ncross-domain few-shot semantic segmentation tasks. Our code is available at\nhttps://github.com/Matt-Su/DR-Adapter.", + "authors": "Jiapeng Su, Qi Fan, Guangming Lu, Fanglin Chen, Wenjie Pei", + "published": "2024-04-16", + "updated": "2024-04-16", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "label": "Original Paper", + "paper_cat": "Semantic AND Segmentation AND Image", + "gt": "Domain-Rectifying Adapter for Cross-Domain Few-Shot Segmentation", + "main_content": "1. Introduction Benefiting from well-established large-scale datasets [1, 24], numerous semantic segmentation methods [5, 30, 36] have undergone rapid development in recent years. (*Both authors contributed equally. †Corresponding author.)

Figure 1. The comparison of our method with other approaches. (a) Traditional few-shot segmentation (FSS) methods train and test the model on the same domain. (b) Most domain generalization (DG) methods leverage multiple source domains to train and adapt the large-parameter model simultaneously. (c) In contrast to conventional DG methods, we propose using a lightweight adapter as a substitute; this adapter is designed to adapt to various domain data, thereby decoupling domain adaptation from the source domain training process.

However, obtaining enough labeled data is still a challenging and resource-intensive process, particularly for tasks like instance and semantic segmentation. Unlike machine learning approaches, the human capacity to recognize novel concepts from limited examples fuels considerable research interest. Hence, few-shot segmentation (FSS) is proposed to meet this challenge, developing a network that generalizes to new domains with limited annotated data. Nonetheless, most existing few-shot segmentation methods [12, 21, 31, 32, 34, 41, 51, 53, 54] often exhibit subpar performance when confronted with domain shifts [25, 37, 50]. Cross-domain few-shot segmentation (CD-FSS) is thus proposed for generalizing few-shot segmentation models from the source domain to other domains [20, 47]. CD-FSS trains the model solely on the source domain and generalizes the trained model to segment objects of novel classes in a separate target domain, supported by few-shot samples. Domain adaptation (DA) and domain generalization (DG) are closely related to cross-domain few-shot segmentation.
However, DA methods require unlabeled training data from the target domain, and DG aims to generalize models trained on the source domain to various unseen domains, often requiring extensive training data from multiple source domains. Consequently, DA/DG methods typically adapt the entire model to new domains, leveraging substantial domain-specific data. Similarly, most existing CD-FSS methods adapt the entire model to target domains. However, in few-shot learning, the scarcity of training data can lead to overfitting when directly adapting the entire model. Rather than generalizing the entire model, our approach focuses on adapting a compact adapter to rectify diverse target domain features to align with the source domain. Once rectified to the source domain, target domain features can effectively utilize the well-trained source domain segmentation model, which is intently optimized using extensive source domain data. Figure 1 shows the differences between our method and conventional FSS and domain generalization methods.

Training a domain-rectifying adapter requires extensive data from diverse target domains. A straightforward feature-level domain synthesis method can effectively generate diverse potential target domains by randomly perturbing feature channel statistics, and we can diversify the synthesized domain styles by increasing the magnitude of the perturbation noise. However, as shown in Figure 2, some feature channels in individual images exhibit very low activation values. These small feature channel statistics leave the corresponding channels with limited style synthesis. Merely increasing the perturbation noise may lead to model collapse, where highly activated channels are excessively perturbed.

Figure 2. The feature channel statistics of an individual sample and the average statistics across the dataset, computed on the pretrained backbone at stage 1. The average statistics exhibit a smoother profile than those of an individual sample, allowing more substantial noise to be applied to features with smoother statistics.

Consequently, we propose a novel local-global style perturbation method to generate diverse potential target domain styles. Our local style perturbation module generates new domains by perturbing the feature channel statistics of individual images, similar to DG methods. Our global style perturbation module effectively diversifies the synthesized styles by leveraging the collective feature statistics of the entire source domain; these dataset-level feature statistics are estimated through momentum updating over the entire source domain dataset. The local and global style perturbation modules collaboratively generate diverse and meaningful domain styles. The perturbed feature channel statistics represent diverse potential styles and are then input into the adapter to train the domain-rectifying adapter. The adapter predicts two rectification vectors to rectify the perturbed feature channel statistics to their original values. Additionally, we propose a cyclic domain alignment module to assist the adapter in learning to effectively rectify diverse domain styles to align with the source domain. Once rectified, the feature channel statistics collaborate with the normalized feature map to train the segmentation model.
During inference, we can directly use the domain-rectifying adapter to align the image features with the source domain and then input them into the well-trained source domain model for segmentation. In summary, our contributions are:
• We introduce a novel domain-rectifying method for cross-domain few-shot segmentation, employing a compact adapter to align diverse target domain features with the source domain, mitigating overfitting in limited-training-data scenarios.
• We propose a unique local-global style perturbation module that generates diverse target domain styles by perturbing feature channel statistics at both local and global scales, enhancing model adaptability to various target domains.
• To enhance domain adaptation, we introduce a cyclic domain alignment loss that helps the domain-rectifying adapter align diverse domain styles with the source domain.

2. Related Work
2.1. Few-Shot Segmentation
Few-shot semantic segmentation [26-29, 41, 46, 49, 53, 57] predicts dense masks for query images using a limited number of labeled support images. Previous methods primarily adopted a metric-based paradigm [8], improved in various ways, and fall into two main categories: prototype-based and matching-based approaches. Motivated by PrototypicalNet [40], prototype-based methods extract prototypes from support images to guide query object segmentation. Most studies concentrate on effectively utilizing the limited support images to obtain more representative prototypes. Recent studies [52, 54] emphasize that a single prototype often fails to represent an entire object adequately; to address this, methods such as ASGNet [21] and PRMMs [41] explore using multiple prototypes to represent the overall target. On the other hand, matching-based methods [31, 32, 43] concatenate support and query features and input the concatenated feature map into CNN or transformer networks, exploring the dense correspondence between query images and support prototypes. Recently, research [35, 39] has focused on leveraging pixel-to-pixel similarity maps for effective support prototype generation and query feature enhancement.

2.2. Domain Generalization
Domain generalization (DG) aims at generalizing models to diverse target domains, particularly when target domain data is inaccessible during training. Existing domain generalization methods fall into two categories: learning domain-invariant feature representations from multiple source domains [9, 14, 33, 45] and generating diverse samples via data or feature augmentation [4, 38, 44, 58]. The core idea of learning domain-invariant features is to leverage various source domains to learn a robust feature representation. Data or feature augmentation aims to increase the diversity of training samples to simulate diverse new domains. Domain generalization is particularly challenging in few-shot settings, as the target domain substantially differs from the source domains in both domain style and class content. Unlike popular DG methods that generalize the entire model, we train a small adapter to rectify the target domain data into the source domain style for model generalization.

2.3. Cross-Domain Few-Shot Segmentation
Recently, cross-domain few-shot segmentation has received increasing attention. PATNet [20] proposes a feature transformation layer to map query and support features from any domain into a domain-agnostic feature space. RestNet [17] addresses the intra-domain knowledge preservation problem in CD-FSS.
RD [47] employs a memory bank to restore the meta-knowledge of the source domain to augment the target domain data. Unlike previous CD-FSS methods, our method directly learns two rectification parameters for effective domain adaptation, eliminating the need to restore source domain styles.

3. Methodology
Problem Setting. Cross-domain few-shot segmentation (CD-FSS) aims to apply few-shot segmentation models trained on the source domain to diverse target domains. The CD-FSS model is typically trained using the episode-based meta-learning paradigm [11, 43]. The training and testing data both consist of thousands of randomly sampled episodes, each including K support samples and one query image. The model first extracts the support prototype and the query feature from each training episode, and then performs pixel-wise feature matching between the support prototype and the query feature to predict the query mask. The support prototype is typically a feature vector aggregating the object features of all support images. Once trained, the model is directly applied to various target domains.
Method Overview. Our key idea is to train an adapter to rectify diverse target domain styles to the source domain, and to leverage the well-trained source domain segmentation model to process the rectified target domain features for accurate few-shot segmentation. The crux is to align diverse potential target domain distributions with the source domain distribution. To train the domain-rectifying adapter, we thus synthesize various target domain styles by perturbing the feature channel statistics of the source domain training images, and the adapter is trained to rectify the synthesized feature styles back to the source domain style. During inference, the adapter can be directly applied to the target domain features to rectify their domain styles, and the subsequent segmentation model can process the rectified support and query features for few-shot segmentation. The overall framework of our approach is illustrated in Figure 3.

3.1. Local Domain Perturbation
Previous works [13, 59] show that perturbing feature channel statistics can effectively synthesize diverse domain styles while preserving the image content. We thus synthesize various domain styles by injecting Gaussian noise into the feature channel statistics of source domain images. Given a feature map $F_o \in \mathbb{R}^{B \times C \times H \times W}$, we first compute the feature channel statistics, i.e., the mean $\mu_o$ and standard deviation $\sigma_o$, along each channel dimension:
$$\mu_o(F_o) = \frac{1}{HW} \sum_{h=1}^{H} \sum_{w=1}^{W} F_o, \qquad (1)$$
$$\sigma_o(F_o) = \sqrt{\frac{1}{HW} \sum_{h=1}^{H} \sum_{w=1}^{W} \big(F_o - \mu_o(F_o)\big)^2 + \epsilon}, \qquad (2)$$
where $\mu_o, \sigma_o \in \mathbb{R}^{B \times C}$, $\epsilon$ is a small constant for numerical stability, and $B$, $C$, $H$, and $W$ represent the batch size, channel dimension, height, and width of the feature map. Then, we leverage two perturbation factors $\alpha$ and $\beta$ to control the Gaussian noise injection for $\mu_o$ and $\sigma_o$. The noise vectors, sharing the same dimensions as $\mu_o$ and $\sigma_o$, are used to compute the perturbed mean $\mu_p$ and standard deviation $\sigma_p$:
$$\mu_p = (1 + \alpha)\,\mu_o, \qquad \sigma_p = (1 + \beta)\,\sigma_o. \qquad (3)$$
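A PyTorch sketch of Eqs. (1)-(3) is given below; aside from the Gaussian noise scale reported in the implementation details, the code is our own plain re-implementation, not the released one.

```python
import torch

def channel_stats(f, eps=1e-6):
    """f: (B, C, H, W) feature map -> per-channel mean and std, each (B, C)."""
    mu = f.mean(dim=(2, 3))                                    # Eq. (1)
    sigma = (f.var(dim=(2, 3), unbiased=False) + eps).sqrt()   # Eq. (2)
    return mu, sigma

def perturbed_stats(mu, sigma, noise_std=0.75):
    alpha = torch.randn_like(mu) * noise_std     # zero-mean Gaussian factors
    beta = torch.randn_like(sigma) * noise_std
    return (1 + alpha) * mu, (1 + beta) * sigma  # Eq. (3)
```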
Figure 3. Overview of our cross-domain few-shot segmentation approach. Our method consists of two modules: a feature perturbation module and a feature rectification module. The former is used to generate simulated domain features, while the latter trains the adapter by restoring the features to their original states. During the perturbation process, we employ both local and global perturbations, controlled by two different probabilities P that decide whether a feature is perturbed. Note that when both probabilities exceed 0.5, the entire backbone undergoes standard training. During testing, we treat target domain features as perturbed features and directly rectify them using the adapter.

We can obtain the perturbed feature map $F_p$ by replacing the feature channel statistics $\{\mu_o, \sigma_o\}$ of the original feature map $F_o$ with the perturbed channel statistics $\{\mu_p, \sigma_p\}$ using the Adaptive Instance Normalization formula [16]:
$$F_p = \sigma_p \, \frac{F_o - \mu_o}{\sigma_o} + \mu_p. \qquad (4)$$
Within each episode, the support and query features share the same perturbation factors. The above equations can be further simplified to the following expression:
$$F_p = (1 + \beta)\,F_o + (\alpha - \beta)\,\mu_o. \qquad (5)$$
We call this feature channel statistic perturbation method local domain perturbation, as it is enabled on individual images with probability $P_{\mathrm{local}}$.

3.2. Global Domain Perturbation
We need to bound the local domain perturbation to prevent potential training collapse caused by aggressive perturbation noise. However, insufficient domain perturbation may lead the domain-rectifying adapter to underperform when encountering new domain styles. The local domain perturbation method is thus trapped in a stability-performance dilemma. We therefore propose a novel global domain perturbation that leverages the global style statistics of the entire dataset to facilitate domain style synthesis. The dataset's global style statistics exhibit better perturbation stability when aggressive perturbation noise is used to synthesize meaningful target domain styles with sufficient diversity. We first compute the feature channel statistics $\mu_o$ for individual images and then progressively update the global style statistics through momentum updating:
$$\mu_{\mathrm{datum}} = \lambda\,\mu_{\mathrm{datum}} + (1 - \lambda)\,\mu_o, \qquad (6)$$
where $\lambda$ is the momentum updating factor. We can then perform the global domain perturbation by replacing the image feature channel statistics $\mu_o$ in Equation (5) with the global style statistics $\mu_{\mathrm{datum}}$. This global domain perturbation is randomly enabled with probability $P_{\mathrm{global}}$.

3.3. Domain Rectification Module
The domain rectification module leverages a domain-rectifying adapter to rectify the target domain feature channel statistics to the source domain. The adapter takes as input the perturbed features and predicts two rectification vectors $\{\alpha_{\mathrm{rect}}, \beta_{\mathrm{rect}}\}$ to rectify the feature channel statistics of the perturbed feature map $F_p$ into the rectified feature channel statistics $\{\mu_{\mathrm{rect}}, \sigma_{\mathrm{rect}}\}$:
$$\mu_{\mathrm{rect}} = (1 + \alpha_{\mathrm{rect}})\,\mu_p, \qquad \sigma_{\mathrm{rect}} = (1 + \beta_{\mathrm{rect}})\,\sigma_p. \qquad (7)$$
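The following sketch covers Eqs. (5)-(7): applying a perturbation with either local or global statistics, the momentum update of the dataset-level mean, and the adapter's statistic rectification. The momentum value and the adapter interface are assumptions, as the paper does not fix them here.

```python
import torch

def apply_perturbation(f_o, alpha, beta, mu):
    # Eq. (5): F_p = (1 + beta) F_o + (alpha - beta) mu, where mu is the per-image
    # mean (local) or the dataset-level mean mu_datum (global); all stats are (B, C).
    a, b = alpha[..., None, None], beta[..., None, None]
    return (1 + b) * f_o + (a - b) * mu[..., None, None]

def update_global_mean(mu_datum, mu_batch, lam=0.999):   # lam value is our guess
    # Eq. (6): momentum update of the dataset-level channel mean.
    return lam * mu_datum + (1 - lam) * mu_batch.mean(dim=0)

def rectified_stats(adapter, f_p, mu_p, sigma_p):
    alpha_r, beta_r = adapter(f_p)                       # two (B, C) vectors
    return (1 + alpha_r) * mu_p, (1 + beta_r) * sigma_p  # Eq. (7)
```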
Then we leverage the AdaIN function to generate the rectified feature map $F_{\mathrm{rect}}$ based on the perturbed feature map $F_p$ and the rectified feature channel statistics $\{\mu_{\mathrm{rect}}, \sigma_{\mathrm{rect}}\}$:
$$F_{\mathrm{rect}} = (1 + \beta_{\mathrm{rect}})\,\sigma_p\,\frac{F_p - \mu_p}{\sigma_p} + (1 + \alpha_{\mathrm{rect}})\,\mu_p, \qquad (8)$$
which can be further simplified as:
$$F_{\mathrm{rect}} = (1 + \beta_{\mathrm{rect}})\,F_p + (\alpha_{\mathrm{rect}} - \beta_{\mathrm{rect}})\,\mu_p. \qquad (9)$$
We expect the adapter to adaptively predict the rectification factors $\{\alpha_{\mathrm{rect}}, \beta_{\mathrm{rect}}\}$ that rectify the perturbed features corresponding to diverse potential target domains. Consequently, during inference, we can leverage the adapter to rectify the target domain features to the source domain, and the rectified features can fittingly benefit from the well-trained source domain model for satisfactory few-shot segmentation results.

3.4. Cyclic Domain Alignment
Our goal is to enable the adapter to rectify the perturbed features back to the source domain space. Insufficient supervision during this process may lead the adapter to rectify the features into an unknown space. Therefore, in addition to utilizing the standard binary cross-entropy (BCE) loss for supervision, we propose incorporating a cyclic alignment loss to constrain the adapter. After obtaining the rectified feature $F_{\mathrm{rect}}$, we further perturb $F_{\mathrm{rect}}$ with the same noise $\alpha$ and $\beta$ to get a new perturbed feature $F_{\mathrm{rect}}^{p}$. This perturbed feature $F_{\mathrm{rect}}^{p}$ is then input into the adapter for a reverse rectification, resulting in $F'_{\mathrm{rect}}$. If the adapter can map features back to the source domain space, the style of $F'_{\mathrm{rect}}$ should closely match that of $F_o$. The cycle process is shown in Figure 4.

Figure 4. The process of cycle alignment, where 'P' denotes perturbation and 'R' stands for rectification.

Consequently, we align the statistics of the original feature and the cyclically rectified feature:
$$\mathcal{L}_{\mathrm{cyc}} = \frac{1}{C} \sum_{c} \Big( \big|\mu(F_o) - \mu(F'_{\mathrm{rect}})\big| + \big|\sigma(F_o) - \sigma(F'_{\mathrm{rect}})\big| \Big). \qquad (10)$$
We additionally constrain the statistics of $F_o$ and $F_{\mathrm{rect}}$:
$$\mathcal{L}_{\mathrm{align}} = \frac{1}{C} \sum_{c} \Big( \big|\mu(F_o) - \mu(F_{\mathrm{rect}})\big| + \big|\sigma(F_o) - \sigma(F_{\mathrm{rect}})\big| \Big). \qquad (11)$$
We optimize the model with the final loss $\mathcal{L}$:
$$\mathcal{L} = \mathcal{L}_{\mathrm{BCE}} + \mathcal{L}_{\mathrm{cyc}} + \mathcal{L}_{\mathrm{align}}. \qquad (12)$$
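Continuing the sketch, the rectified feature map of Eq. (9) and the statistic-alignment losses of Eqs. (10)-(12) could look as follows; `channel_stats` is the helper sketched after Eq. (3), and everything else is again an assumption-laden re-implementation.

```python
import torch

def rectified_feature(f_p, mu_p, alpha_r, beta_r):
    # Eq. (9): F_rect = (1 + beta_rect) F_p + (alpha_rect - beta_rect) mu_p
    a, b = alpha_r[..., None, None], beta_r[..., None, None]
    return (1 + b) * f_p + (a - b) * mu_p[..., None, None]

def stats_gap(f_a, f_b):
    # Channel-wise L1 gap between feature statistics, as in Eqs. (10)-(11).
    mu_a, sig_a = channel_stats(f_a)
    mu_b, sig_b = channel_stats(f_b)
    return (mu_a - mu_b).abs().mean() + (sig_a - sig_b).abs().mean()

# Eq. (12): total = bce + stats_gap(f_o, f_cyc_rect) + stats_gap(f_o, f_rect)
```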
4. Experiments
4.1. Datasets
Following [20], we validate our method on the cross-domain few-shot segmentation (CD-FSS) benchmark. This benchmark includes images and pixel-level annotations from the FSS-1000 [22], DeepGlobe [7], ISIC2018 [6, 42], and Chest X-ray [3, 18] datasets. These datasets range from natural to medical images, providing sufficient domain diversity. We train models on the natural image dataset PASCAL VOC 2012 [10] with SBD [15] augmentation and evaluate them on the CD-FSS benchmark.
FSS-1000 [22] is a dataset designed for few-shot segmentation, containing 1,000 different categories of natural objects and scenes, with each category comprising 10 annotated images. We evaluate models on the official test set of 2,400 images.
Deepglobe [7] is a complex Geographic Information System (GIS) dataset containing satellite images with the categories urban, agriculture, rangeland, forest, water, barren, and unknown. For testing, we follow the processing in [20], dividing each image into six patches and filtering out the 'unknown' category, and evaluate models on the resulting 5,666 test images and their corresponding masks.
ISIC2018 [6, 42] is used for skin lesion analysis and contains numerous skin images with associated segmentation labels. We evaluate models on the official training set following common practice [6], using a uniform resolution of 512 × 512 pixels, comprising a total of 2,596 test images.
Chest X-ray [3, 18] is an X-ray image dataset for tuberculosis detection, containing X-ray images of tuberculosis cases as well as images from normal cases. We downsample the original image resolution to 1024 × 1024 pixels for testing.

4.2. Implementation Details
We utilize the train set of the PASCAL VOC dataset as the source domain training set. During training, we employ SSP [12] with a ResNet-50 backbone as the baseline model. We first train the baseline model on the whole training set and then train our method for an additional 5 epochs with a batch size of 8. We use SGD to optimize our model, with a momentum of 0.9 and an initial learning rate of 1e-3. To reduce memory consumption and accelerate the training process, we resize both query and support images to 400 × 400. We apply our two domain perturbation modules to the first three layers of the ResNet. For local perturbations, we use Gaussian noise with a mean of zero and a standard deviation of 0.75, while for global perturbations, we use Gaussian noise with a mean of zero and a standard deviation of one. All models are evaluated using the mean Intersection over Union (mIoU).

4.3. Comparison Experiments
In Table 1, we present a comparison between our method and other approaches, including traditional few-shot segmentation methods and existing cross-domain few-shot segmentation methods.

Table 1. Mean-IoU of 1-way 1-shot and 5-shot results of traditional few-shot approaches and cross-domain few-shot methods on the four CD-FSS benchmarks (1-shot / 5-shot). Bold denotes the best performance among all methods.
Method | Deepglobe | ISIC | Chest X-ray | FSS-1000 | Average
Few-shot segmentation methods:
PGNet [52] | 10.73 / 12.36 | 21.86 / 21.25 | 33.95 / 27.96 | 62.42 / 62.74 | 32.24 / 31.08
PANet [46] | 36.55 / 45.43 | 25.29 / 33.99 | 57.75 / 69.31 | 69.15 / 71.68 | 47.19 / 55.10
CaNet [53] | 22.32 / 23.07 | 25.16 / 28.22 | 28.35 / 28.62 | 70.67 / 72.03 | 36.63 / 37.99
RPMMs [48] | 12.99 / 13.47 | 18.02 / 20.04 | 30.11 / 30.82 | 65.12 / 67.06 | 31.56 / 32.85
PFENet [41] | 16.88 / 18.01 | 23.50 / 23.83 | 27.22 / 27.57 | 70.87 / 70.52 | 34.62 / 34.98
RePRI [2] | 25.03 / 27.41 | 23.27 / 26.23 | 65.08 / 65.48 | 70.96 / 74.23 | 46.09 / 48.34
HSNet [32] | 29.65 / 35.08 | 31.20 / 35.10 | 51.88 / 54.36 | 77.53 / 80.99 | 47.57 / 51.38
SSP [12] | 40.48 / 49.66 | 35.09 / 44.96 | 74.23 / 80.51 | 79.03 / 80.56 | 57.20 / 63.92
Cross-domain few-shot segmentation methods:
PATNet [20] | 37.89 / 42.97 | 41.16 / 53.58 | 66.61 / 70.20 | 78.59 / 81.23 | 56.06 / 61.99
Ours | 41.29 / 50.12 | 40.77 / 48.87 | 82.35 / 82.31 | 79.05 / 80.40 | 60.86 / 65.42

Traditional few-shot segmentation methods usually underperform in cross-domain scenarios due to the large domain gap between the training and test data, while our approach effectively reduces the domain gap and improves segmentation performance. This performance improvement is particularly notable on Chest X-ray, where our 1-shot and 5-shot performance surpasses PATNet [20] by 15.74% and 12.11%, respectively. On Deepglobe, the improvement is 3.4% (1-shot) and 7.15% (5-shot). For FSS-1000, our model achieves performance comparable to PATNet, because the domain gap is small. We also follow the same setting as RD [47], training our model on VOC and evaluating on SUIM; Table 2 shows that our method performs much better than RD. We present qualitative results of our proposed model for 1-way 1-shot segmentation in Fig. 5.
These results indicate that our method improves the generalization ability of traditional few-shot models, owing to its capability of aligning various domains to the source domain.

Table 2. Mean-IoU of 1-way 1-shot results of our method following the same setting as RD [47].
Method | split-0 | split-1 | split-2 | split-3 | Average
RD [47] | 35.20 | 33.40 | 34.30 | 36.00 | 34.70
Ours | 40.60 | 38.18 | 41.53 | 40.72 | 40.25

Figure 5. Qualitative results of our model and the baseline in the 1-way 1-shot setting on challenging scenarios with a large domain gap.

4.4. More Analysis
We conduct extensive ablation experiments to demonstrate and analyze the effectiveness of our approach.

4.4.1 Ablation Studies
We conduct comprehensive ablation experiments to evaluate the effectiveness of our proposed components.

Figure 6. The impact of noise variance on local vs. global perturbations: the trend of global and local perturbations under different Gaussian noise variances.

Impact of Noise Variance on Perturbations. Figure 6 illustrates the effects of Gaussian noise with varying variances for local and global perturbations. Local perturbations suffer from performance degradation at slightly higher noise levels, whereas global perturbations withstand larger noise levels with minimal performance impact, suggesting greater stability. Thus, in our method, we set the Gaussian noise variance to 0.75 for local and to 1 for global perturbations, broadening the range of simulated domains and improving domain generalizability.

Table 3. The effects of each module added to the baseline, namely the perturbation module, the rectification module, and the cyclic alignment loss.
Perturbation | Rectification | Cyclic Alignment | mean-IoU
- | - | - | 57.20
✓ | - | - | 58.45
- | ✓ | - | 57.80
✓ | ✓ | - | 59.17
✓ | ✓ | ✓ | 60.86

Impact of Each Component. Table 3 illustrates the effectiveness of each module in the model. Integrating all modules results in a 3.66% performance improvement over the SSP [12] baseline. Importantly, the feature perturbation and rectification processes complement each other: perturbation simulates features across domains, and rectification aligns these features back to the source domain space. Using feature perturbation alone reduces the model to a domain generalization approach similar to NP [13]. Additionally, our cyclic alignment loss is indispensable, as it ensures that images from various domains are unified into the source domain.

Table 4. Results of using our cyclic alignment loss.
BCE loss | + cyclic loss | + align loss | + cyclic & align loss
57.65 | 58.56 | 59.12 | 60.86

Figure 7. Visual analysis (t-SNE) of channel-wise means (top) and variances (bottom); points of the same color refer to the same sample. The average distances between original and perturbed features are 1.91 vs. 1.24 (means) and 0.58 vs. 0.31 (variances).

Table 5. Results of feature perturbation methods.
 | local style | global style | both perturbations
mean-IoU | 59.81 | 59.17 | 60.86

Table 4 shows the ablation analysis of the cyclic alignment loss: both the alignment loss and the cyclic loss improve performance.
Table 5 compares the impact of local and global styles in feature perturbation, showing that their combination improves model performance owing to a wider range of simulated domains. Using the channel-wise means and variances as features, the t-SNE visualization (Figure 7) shows that the perturbed features are rectified to be closer to the original features, demonstrating our model's effectiveness.

Impact of Noise Types. We choose the popular Gaussian distribution to generate random noise, as widely used in other works (e.g., Mixstyle, DSU, and NP). Perturbing feature statistics with random noise can effectively synthesize diverse domain styles, and the specific noise type is not essential. Table 6 shows that our method is insensitive to the noise type, also performing well with Beta and Uniform noise. Note that our Local-Global Domain Perturbation and Cyclic Domain Alignment largely improve the diversity of domain style synthesis for all kinds of noise.

Table 6. Results of using different types of noise.

Noise Type  Gaussian (1, 0.75)  Beta (3, 4)  Uniform (-1, 1)
mIoU        60.86               60.78        60.00

More Adapters. Table 7 shows that applying multiple adapters can further improve performance.

Table 7. Results of using one/two adapters within a single stage.

              FSS    Chest  Deepglobe  ISIC   Average
One adapter   79.05  82.35  41.29      40.77  60.86
Two adapters  79.25  83.04  41.74      41.63  61.41

4.4.2 Comparison with Domain Transfer Methods

We compare our method against traditional domain adaptation (DA) and domain generalization (DG) approaches to validate its effectiveness. For a fair comparison, all categories in PASCAL VOC are used for training in both the DA and DG methods, and we evaluate models in the 1-shot setting on the CD-FSS benchmark.

Domain Adaptation. We adopt the classical AdaIN [16] method to train four models for the four test datasets. During training, we randomly sample images from the test dataset and extract their feature channel statistics from the low-level feature maps. AdaIN is then applied to replace the feature channel statistics of the training image with the extracted statistics from the test dataset.

Domain Generalization. We employ the Mixstyle [59], DSU [23], and NP [13] methods for comparison. These approaches also involve perturbing feature statistics, but they only perform local perturbations and lack a feature rectification process.

Table 8. Comparison to domain adaptation and domain generalization approaches under the 1-shot setting. We use the same baseline for all methods to ensure a fair comparison.

Method               FSS    Chest  Deepglobe  ISIC   Average
Baseline (SSP [12])  79.03  74.23  40.48      35.09  57.20
AdaIN [16]           78.89  74.23  41.85      34.36  57.33
Mixstyle [59]        79.24  76.63  41.05      35.98  58.21
DSU [23]             78.99  77.83  41.19      36.64  58.66
NP [13]              78.98  76.44  41.83      37.87  58.78
Ours                 79.05  82.35  41.29      40.77  60.86

Table 8 shows that our method performs much better than the DA and DG methods in cross-domain few-shot segmentation.
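For reference, below is a minimal sketch of the AdaIN-style statistic replacement used in the DA baseline above. The normalization itself follows Huang and Belongie's AdaIN; the function name and the way the test-domain style statistics are aggregated are illustrative assumptions.

import torch

def adain_restyle(content, style_mu, style_sig, eps=1e-6):
    # Replace the channel-wise statistics of `content` (B, C, H, W) with
    # target-style statistics `style_mu`, `style_sig` of shape (C,).
    mu = content.mean(dim=(2, 3), keepdim=True)
    sig = content.std(dim=(2, 3), keepdim=True) + eps
    normed = (content - mu) / sig
    return normed * style_sig.view(1, -1, 1, 1) + style_mu.view(1, -1, 1, 1)

# Illustrative usage: style stats aggregated over low-level feature maps of
# randomly sampled test-domain images, feats_t: (N, C, H, W).
# style_mu  = feats_t.mean(dim=(0, 2, 3))
# style_sig = feats_t.std(dim=(2, 3)).mean(dim=0)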
4.4.3 Applying SAM in CD-FSS

The recently released large-scale SAM [19] model has greatly advanced image segmentation, demonstrating remarkable zero-shot segmentation capabilities. However, SAM cannot be directly applied to cross-domain few-shot segmentation, so we evaluate PerSAM [56] to compare our method with a SAM-based method on this task. PerSAM is a training-free method; it adapts SAM to the one-shot setting by using support images as the prompt input to segment target objects in query images. Table 9 shows that our method performs much better than PerSAM in cross-domain few-shot segmentation.

Table 9. The result of directly applying PerSAM to cross-domain few-shot segmentation.

             FSS    Chest  Deepglobe  ISIC   Average
PerSAM [56]  79.65  31.12  33.39      21.27  41.35
Ours         79.05  82.35  41.29      40.77  60.86

4.4.4 Extension to Transformer

In Table 10, we show the results of applying our method within FPTrans [55], which leverages support-sample prototypes as prompts and a Vision Transformer (ViT) as the backbone. Applying our method to the lower-level blocks of the ViT improves performance on the cross-domain datasets.

Table 10. Applying our method to transformers can further enhance the model's performance in cross-domain tasks.

                FSS    Chest  Deepglobe  ISIC   Average
FPTrans [55]    78.92  80.49  39.21      47.79  61.60
FPTrans + ours  78.63  82.74  40.32      49.43  62.78

5. Conclusion

In this paper, we propose a method that effectively bridges the domain gap between different datasets by aligning the target domain with the source-domain space. During training, we train a unified adapter using simulated perturbed features; in the inference stage, we treat target-domain images as a form of perturbed images and rectify them directly. Furthermore, we introduce both local and global perturbations to ensure significant style changes, based not only on individual samples but also on the overall style of the dataset. We utilize a cyclic alignment loss to ensure alignment between the source and target domains during model optimization. We conduct extensive experiments to validate the effectiveness of the proposed framework on various cross-domain segmentation tasks and achieve state-of-the-art (SOTA) results on multiple benchmarks.

Acknowledgement. This work was supported in part by the National Natural Science Foundation of China (U2013210, 62372133), the Guangdong Basic and Applied Basic Research Foundation (Grant Nos. 2022A1515010306, 2024A1515011706), the Shenzhen Fundamental Research Program (Grant No. JCYJ20220818102415032), and the Shenzhen Key Technical Project (Grant No. KJZD20230923115117033)." + } ] +} \ No newline at end of file