AcademicEval / abs_28K /test_abstract_long_2404.16301v1.json
{
"url": "http://arxiv.org/abs/2404.16301v1",
"title": "Style Adaptation for Domain-adaptive Semantic Segmentation",
"abstract": "Unsupervised Domain Adaptation (UDA) refers to the method that utilizes\nannotated source domain data and unlabeled target domain data to train a model\ncapable of generalizing to the target domain data. Domain discrepancy leads to\na significant decrease in the performance of general network models trained on\nthe source domain data when applied to the target domain. We introduce a\nstraightforward approach to mitigate the domain discrepancy, which necessitates\nno additional parameter calculations and seamlessly integrates with\nself-training-based UDA methods. Through the transfer of the target domain\nstyle to the source domain in the latent feature space, the model is trained to\nprioritize the target domain style during the decision-making process. We\ntackle the problem at both the image-level and shallow feature map level by\ntransferring the style information from the target domain to the source domain\ndata. As a result, we obtain a model that exhibits superior performance on the\ntarget domain. Our method yields remarkable enhancements in the\nstate-of-the-art performance for synthetic-to-real UDA tasks. For example, our\nproposed method attains a noteworthy UDA performance of 76.93 mIoU on the\nGTA->Cityscapes dataset, representing a notable improvement of +1.03 percentage\npoints over the previous state-of-the-art results.",
"authors": "Ting Li, Jianshu Chao, Deyu An",
"published": "2024-04-25",
"updated": "2024-04-25",
"primary_cat": "cs.CV",
"cats": [
"cs.CV"
],
"label": "Original Paper",
"paper_cat": "Semantic AND Segmentation AND Image",
"gt": "Unsupervised Domain Adaptation (UDA) refers to the method that utilizes\nannotated source domain data and unlabeled target domain data to train a model\ncapable of generalizing to the target domain data. Domain discrepancy leads to\na significant decrease in the performance of general network models trained on\nthe source domain data when applied to the target domain. We introduce a\nstraightforward approach to mitigate the domain discrepancy, which necessitates\nno additional parameter calculations and seamlessly integrates with\nself-training-based UDA methods. Through the transfer of the target domain\nstyle to the source domain in the latent feature space, the model is trained to\nprioritize the target domain style during the decision-making process. We\ntackle the problem at both the image-level and shallow feature map level by\ntransferring the style information from the target domain to the source domain\ndata. As a result, we obtain a model that exhibits superior performance on the\ntarget domain. Our method yields remarkable enhancements in the\nstate-of-the-art performance for synthetic-to-real UDA tasks. For example, our\nproposed method attains a noteworthy UDA performance of 76.93 mIoU on the\nGTA->Cityscapes dataset, representing a notable improvement of +1.03 percentage\npoints over the previous state-of-the-art results.",
"main_content": "INTRODUCTION Neural Networks [1] and Transformers [2] have achieved great success in semantic segmentation tasks, but supervised tasks typically require a large amount of annotated data. Pixel-level annotation is needed, with at least an hour for each image [3], which significantly increases the cost. One approach to address this problem is to utilize existing annotated data or easily obtainable synthetic data to train models and test them on target data. However, due to domain differences, the model\u2019s performance metrics often decline substantially when tested on target data. In order to obtain a more robust model, researchers have proposed UDA methods [4][5][6], transferring knowledge from annotated source domain data to unannotated target data. It has been proven that CNNs are sensitive to distribution shifts [7] in image classification. Recent studies [8] have shown that Transformers are more robust compared to these factors. In addition, CNNs mainly focus on texture [9], while Transformers emphasize shape, which is more similar to human vision. Some researches have revealed significant differences between the induction bias of standard CNNs and human vision: humans primarily rely on object content (i.e., shape) for recognition [10], while CNNs exhibit a strong preference for style (i.e., texture) [9]. This explains why CNNs are more susceptible to changes when switching between domains, as image style is more likely to vary across different domains. Early studies [11][12][13] have confirmed that feature distribution shifts caused by style differences mainly occur in the shallow layers of the network. This implies that the shallow layers\u2019 feature distribution in the network can reflect the style information of the input images. Therefore, following these works\u2019 methods, we manipulate the style features of the feature maps in the shallow layers of the network. The feature extractor captures the style features of the target domain while preserving the content of the source domain. This approach weakens the style features of the source domain while enhancing the style features of the target domain, achieving style feature transfer. 2. METHOD 2.1. Image to Image Domain Adaptation In UDA, we are given a source dataset as Ds = {(xs i, ys i )}Ns i=1 (1) where Ns is the number of the color images in the dataset, and ys \u2208RH\u00d7W represents the associated semantic map of xs \u2208RH\u00d7W \u00d73. Similarly, Dt = {xt i}Nt i=1 (2) arXiv:2404.16301v1 [cs.CV] 25 Apr 2024 \fis the target dataset where true semantic labels are missing. Typically, segmentation networks trained on Ds exhibit performance degradation when tested on Dt. Here, we use Fourier Domain Adaptation (FDA) [14] and RGB adaptation to reduce the domain gap between the two datasets at the image-level. FDA aims to minimize domain differences by replacing the low-frequency components in the target domain with those from the source domain. This is because low-frequency components can be inferred as the domain style. FDA has achieved significant improvements in semantic segmentation. Therefore, we employ the FDA method for data augmentation, as expressed by the formula: xs\u2192t = F\u22121([\u03b2\u25e6FA(xt)+(1\u2212\u03b2)\u25e6FA(xs), FP (xs)]) (3) The variables FA and FP denote the amplitude and phase components of the Fourier transform, respectively. In the inverse Fourier transform, the phase and amplitude components are remapped to the image space. 
Random RGB shift is a prevalent and widely adopted data augmentation technique. Through our experiments, we found that employing random RGB shift as data augmentation significantly enhances the model's performance. Our hypothesis is that random RGB shift at the image level brings the style of the source domain closer to that of the target domain, thereby mitigating the domain gap. Building upon this observation, we introduce an RGB adaptation method for domain adaptation. The mean value of each channel of an RGB image $x$ is calculated as $\mu(x) = \frac{1}{HW}\sum_{h=1}^{H}\sum_{w=1}^{W} x_{hw}$ (4), and the adapted source image is $x^{s \to t} = x^s + (\mu(x^t) - \mu(x^s))$ (5), where $\mu(x^s)$ and $\mu(x^t)$ are the channel-wise means of the source and target domain images, respectively. This leaves the content of the source domain image unaltered, preserving the validity of its labels, while aligning the source domain image more closely with the target domain image in RGB space. 2.2. Style Adaptive Instance Normalization. In UDA methods, the primary cause of domain shift is the disparity in styles across domains. Domain shift constrains the models' capacity for generalization in both domain adaptation and domain generalization tasks. Previous studies have demonstrated that the shallow features extracted by backbone networks capture the style information of images. Established approaches typically characterize the style of an image by the mean and standard deviation along the channel dimension of shallow features, with $\sigma(x) = \sqrt{\frac{1}{HW}\sum_{h=1}^{H}\sum_{w=1}^{W}(x_{hw} - \mu(x))^2 + \epsilon}$ (6). Conventional instance normalization can eliminate specific stylistic information from an image. Directly applying it to UDA diminishes the network's capacity to learn the style information of the source domain images, but it also disregards the style information of the target domain, resulting in reduced performance and limited generalization on the target domain. To suppress the source domain style while enhancing the target domain style, we apply AdaIN [12] to replace the style information of the source domain images with that of the target domain images, while retaining the content information of the source domain images. We term the proposed approach Style Adaptive Instance Normalization (SAIN): $\mathrm{SAIN}(x^s, x^t) = \sigma(x^t)\left(\frac{x^s - \mu(x^s)}{\sigma(x^s)}\right) + \mu(x^t)$ (7), where $\mu$ and $\sigma$ are the mean and standard deviation of the feature map along the channel dimension. By transferring the style of the target domain to the source domain during training, the content-biased network $g_\theta$ no longer relies on the style of the source domain for its decisions; it focuses on content while also attending to the style of the target domain. During testing, we use the network $g_\theta$ directly, without SAIN, to keep predictions independent and reduce computational burden.
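A minimal PyTorch sketch of Eqs. (4)-(7), assuming batched tensors; the function names are ours, and the placement of $\epsilon$ inside the square root follows Eq. (6):

```python
import torch

def rgb_adaptation(x_s: torch.Tensor, x_t: torch.Tensor) -> torch.Tensor:
    """Shift source images toward the target per-channel mean (Eqs. 4-5).

    x_s, x_t: image batches of shape (N, 3, H, W).
    """
    mu_s = x_s.mean(dim=(2, 3), keepdim=True)  # per-image, per-channel mean
    mu_t = x_t.mean(dim=(2, 3), keepdim=True)
    return x_s + (mu_t - mu_s)  # content unchanged, channel statistics aligned

def sain(f_s: torch.Tensor, f_t: torch.Tensor, eps: float = 1e-5) -> torch.Tensor:
    """Style Adaptive Instance Normalization (Eqs. 6-7): re-stylize source
    feature maps with target channel-wise statistics, AdaIN-style.

    f_s, f_t: shallow feature maps of shape (N, C, H, W).
    """
    mu_s = f_s.mean(dim=(2, 3), keepdim=True)
    mu_t = f_t.mean(dim=(2, 3), keepdim=True)
    sigma_s = torch.sqrt(f_s.var(dim=(2, 3), keepdim=True, unbiased=False) + eps)
    sigma_t = torch.sqrt(f_t.var(dim=(2, 3), keepdim=True, unbiased=False) + eps)
    return sigma_t * (f_s - mu_s) / sigma_s + mu_t
```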
With SAIN in place, we replace the original loss function with a content-biased loss: $\mathcal{L}_i^S = -\sum_{j=1}^{H \times W}\sum_{c=1}^{C} y_{(i,j,c)}^S \log \mathrm{SAIN}\big(g_\theta(x_i^S)_{(j,c)},\, g_\theta(x_i^T)_{(j,c)}\big)$ (8). Furthermore, we follow the consistency training in DAFormer: the student network is trained on augmented target data using DACS [15], while the teacher model generates pseudo-labels from non-augmented target images. 3. EXPERIMENTS. 3.1. Implementation Details. The proposed method is applied to two challenging unsupervised domain adaptation tasks with abundant semantic segmentation labels in the synthetic (source) domain but none in the real (target) domain. The two synthetic datasets are GTA5 [16] and SYNTHIA [17]; the real-domain dataset is Cityscapes [3]. The proposed method is validated with the DAFormer network and the Mix Transformer-B5 encoder [18]. All backbone networks are pretrained on ImageNet. In the default UDA setting, the MIC [6] masked image self-training strategy and its training parameters are used, including the AdamW optimizer, an encoder learning rate of 6 × 10^-5, a decoder learning rate of 6 × 10^-4, 60k training iterations, a batch size of 2, linear learning rate warm-up, and DACS [15] data augmentation. 3.2. Evaluation. First, we integrate RGB adaptation with several significant UDA methods, including DAFormer [4], HRDA [5] and MIC [6], using the DAFormer framework. Table 1 shows that RGB adaptation achieves notable improvements over the same UDA methods without it. Karras et al. [19] demonstrated that styles at different levels encode distinct visual attributes: styles at fine spatial resolutions (lower levels in our network) encode low-level attributes such as color and fine textures, whereas styles at coarse spatial resolutions (higher levels) encode high-level attributes such as global structure and textures. The SAIN module must therefore be applied at an appropriate level to mitigate adverse style-induced biases. The network becomes increasingly deeper from Block 1 to Block 4. Figure 1 shows that the most notable improvement is achieved when applying SAIN in Block 3. Applying SAIN to features at excessively low levels has only a limited effect on reducing feature biases, while applying it to excessively high-level styles may lose essential semantic information. Experimentally, we found that applying SAIN to both Block 2 and Block 3 concurrently yields optimal performance (a code sketch of this placement follows below). Visual comparisons are conducted with the second-best performer (i.e., MIC), which uses the same segmentation backbone as ours. Figure 2 shows that our model's predictions are more accurate. Our approach also performs strongly on some common categories, such as terrain in the first row, wall in the second row, building in the third row, and truck in the fourth row. We attribute this to the transferability of RGB adaptation and SAIN, which enables the model to learn more style information from the target domain.
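The following hypothetical wrapper illustrates how the parameter-free SAIN operation could be hooked into the shallow encoder blocks during training; the wrapper interface, the 0-based block indexing, and detaching the target branch are our assumptions, not the authors' code. It reuses the sain function sketched above.

```python
import torch.nn as nn

class SAINEncoder(nn.Module):
    """Hypothetical encoder wrapper applying SAIN after selected blocks."""

    def __init__(self, blocks: nn.ModuleList, sain_at=(1, 2)):
        super().__init__()  # sain_at=(1, 2) means Blocks 2 and 3, 0-indexed
        self.blocks = blocks
        self.sain_at = set(sain_at)

    def forward(self, x_s, x_t=None):
        for i, block in enumerate(self.blocks):
            x_s = block(x_s)
            if self.training and x_t is not None:
                x_t = block(x_t)
                if i in self.sain_at:
                    # Re-stylize source features with target statistics;
                    # adds no learnable parameters.
                    x_s = sain(x_s, x_t.detach())
        return x_s
```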
Table 1. Performance (IoU) of RGB adaptation with different UDA methods on GTA→Cityscapes:

| Network | UDA Method | w/o RGB Adapt. | w/ RGB Adapt. |
|---|---|---|---|
| DAFormer | DAFormer | 68.3 | 69.37 |
| DAFormer | HRDA | 73.8 | 74.45 |
| DAFormer | MIC | 75.9 | 76.64 |

[Fig. 1. The effect of SAIN on different blocks.] 3.3. Influence of Style on UDA. In the following, we analyze the underlying principles of our method on GTA→Cityscapes. Firstly, we analyze the impact of SAIN on UDA at various feature levels. As shown in Figure 1, as the network depth increases from Block 1 to Block 3, the improvement in UDA performance from SAIN increases accordingly. The results in Table 2 and Table 3 demonstrate significant performance improvements across all benchmarks. In particular, our method yields a +1.03 mIoU increase for GTA→CS and a +1.05 increase for SYNTHIA→CS. Most categories, such as building, fence, rider, truck, and train, show a performance improvement. However, some categories, such as bike, show a slight performance decrease with SAIN. This may be due to the difference in annotation strategies for the bike category between the Cityscapes and GTA datasets.",
"additional_info": [
{
"url": "http://arxiv.org/abs/2404.09406v2",
"title": "Human-in-the-Loop Segmentation of Multi-species Coral Imagery",
"abstract": "Broad-scale marine surveys performed by underwater vehicles significantly\nincrease the availability of coral reef imagery, however it is costly and\ntime-consuming for domain experts to label images. Point label propagation is\nan approach used to leverage existing image data labeled with sparse point\nlabels. The resulting augmented ground truth generated is then used to train a\nsemantic segmentation model. Here, we first demonstrate that recent advances in\nfoundation models enable generation of multi-species coral augmented ground\ntruth masks using denoised DINOv2 features and K-Nearest Neighbors (KNN),\nwithout the need for any pre-training or custom-designed algorithms. For\nextremely sparsely labeled images, we propose a labeling regime based on\nhuman-in-the-loop principles, resulting in significant improvement in\nannotation efficiency: If only 5 point labels per image are available, our\nproposed human-in-the-loop approach improves on the state-of-the-art by 17.3%\nfor pixel accuracy and 22.6% for mIoU; and by 10.6% and 19.1% when 10 point\nlabels per image are available. Even if the human-in-the-loop labeling regime\nis not used, the denoised DINOv2 features with a KNN outperforms the prior\nstate-of-the-art by 3.5% for pixel accuracy and 5.7% for mIoU (5 grid points).\nWe also provide a detailed analysis of how point labeling style and the\nquantity of points per image affects the point label propagation quality and\nprovide general recommendations on maximizing point label efficiency.",
"authors": "Scarlett Raine, Ross Marchant, Brano Kusy, Frederic Maire, Niko Suenderhauf, Tobias Fischer",
"published": "2024-04-15",
"updated": "2024-04-16",
"primary_cat": "cs.CV",
"cats": [
"cs.CV",
"cs.HC",
"cs.LG",
"cs.RO"
],
"label": "Original Paper",
"paper_cat": "Semantic AND Segmentation AND Image",
"gt": "Broad-scale marine surveys performed by underwater vehicles significantly\nincrease the availability of coral reef imagery, however it is costly and\ntime-consuming for domain experts to label images. Point label propagation is\nan approach used to leverage existing image data labeled with sparse point\nlabels. The resulting augmented ground truth generated is then used to train a\nsemantic segmentation model. Here, we first demonstrate that recent advances in\nfoundation models enable generation of multi-species coral augmented ground\ntruth masks using denoised DINOv2 features and K-Nearest Neighbors (KNN),\nwithout the need for any pre-training or custom-designed algorithms. For\nextremely sparsely labeled images, we propose a labeling regime based on\nhuman-in-the-loop principles, resulting in significant improvement in\nannotation efficiency: If only 5 point labels per image are available, our\nproposed human-in-the-loop approach improves on the state-of-the-art by 17.3%\nfor pixel accuracy and 22.6% for mIoU; and by 10.6% and 19.1% when 10 point\nlabels per image are available. Even if the human-in-the-loop labeling regime\nis not used, the denoised DINOv2 features with a KNN outperforms the prior\nstate-of-the-art by 3.5% for pixel accuracy and 5.7% for mIoU (5 grid points).\nWe also provide a detailed analysis of how point labeling style and the\nquantity of points per image affects the point label propagation quality and\nprovide general recommendations on maximizing point label efficiency.",
"main_content": "Introduction Effective and informed management of marine ecosystems requires data at a range of spatio-temporal scales [8]. Marine surveys are being increasingly performed using autonomous underwater and surface vehicles [9, 16]. However, these approaches generate large quantities of images Prior Approaches Our Approach Step 1 Step 2 Per-pixel deep features Generate superpixels with point label aware loss Augmented ground truth mask Image with sparse point labels Train feature extractor on densely labeled coral images Image Ground truth mask Multi-level masks combined using join [1,2] or mode [31] Generate superpixels at many different spatial \u2018levels\u2019 MULTI-LEVEL SUPERPIXELS [1,2,31] POINT LABEL AWARE SUPERPIXELS [32] Augmented ground truth mask Image with sparse point labels KNN Denoised DINOv2 General feature extractor Robust, clean perpixel feature vectors Simple clustering Figure 1. Our point label propagation approach leverages the DINOv2 foundation model without any fine-tuning to generate augmented ground truth masks for complex underwater imagery. Top: Prior approaches relied on layering superpixels containing point labels (left), or pre-training a feature extractor on labeled coral imagery, and then clustering pixels using a custom-designed superpixel algorithm (right). Bottom: Our method uses KNN to cluster deep features from the denoised DINOv2 foundation model. of the seafloor which must first be analyzed to obtain usable outputs such as coverage estimations of different substrates and coral species [33, 37]. Coral images are often highly complex, with indistinct boundaries, high variation in color and texture among coral species and poor clarity [20, 36]. The intricate image characteristics and the difficulty in accurately identifying coral species requires domain experts to annotate underwater imagery, preventing the use of common computer vision tools such as the crowd sourcing annotation platform Amazon Turk. Traditionally, marine scientists annotate underwater images using a method called Coral Point Count [24], where randomly or grid-spaced sparse pixels are labeled. These 1 arXiv:2404.09406v2 [cs.CV] 16 Apr 2024 \fpixels are called point labels [32]. Although there is a large quantity of historic data available in both grid and random formats [4, 13], the optimal style of point placement for the purpose of training deep learning models to perform semantic segmentation has not been explored. In recent years, superpixels based on color information [1, 2, 31] and deep features [32] have been used for propagating point labels into dense, pixel-wise augmented ground truth masks used to train deep neural networks to perform semantic segmentation of unseen coral images. Most recently, Raine et al. [32] introduced a novel point label aware approach to superpixels, which clustered pixels based on deep features. While this method improved on the state-ofthe-art, it required training on coral imagery to provide the deep features for the superpixel method, and suffered from performance degradation when small quantities of point labels are available. In this work, we tackle the regime in which extremely few labels are provided. This setting is critical as marine survey projects often have limited budgets for labeling data [8]. Furthermore, a common use case for survey data processing involves quickly iterating and retraining models during field trips as new species or environmental conditions are encountered [8]. 
We propose using the general foundation model DINOv2 [28, 38] to provide the per-pixel deep features (Fig. 1). We use the simple K-Nearest Neighbors algorithm to generate the augmented ground truth, and outperform the state-of-the-art for small numbers of point labels. We also demonstrate further performance improvements with a human-in-the-loop point selection regime in which the knowledge of the human expert is leveraged to reduce uncertainty in the KNN's feature space. This paper demonstrates the relevance of general foundation models for multi-species segmentation of domain-specific underwater imagery, while improving label efficiency when few points are available (Fig. 1). Our contributions are summarized as follows: 1. We propose to leverage a general-purpose foundation model to generate per-pixel deep features for domain-specific coral images, and establish that the features are effective without any training or fine-tuning on coral imagery. Combining these features with the simple K-Nearest Neighbors algorithm is sufficient for generating accurate augmented ground truth masks from sparse point labels and removes the need for complex superpixel algorithms. 2. For extremely sparse point labels, i.e. 5-25 points per image, we propose a human-in-the-loop labeling regime which combines human knowledge with the model's introspective uncertainty to select informative point label locations. We outperform the previous state-of-the-art by 17.3% pixel accuracy and 22.6% mIoU when there are 5 point labels available per image, and by 10.6% and 19.1% when 10 point labels are available. 3. Even without the human-in-the-loop labeling regime, using denoised DINOv2 features with a KNN improves on the label propagation task for small numbers of point labels per image. On the UCSD Mosaics dataset we see improvements of 3.5% for pixel accuracy and 5.7% for mIoU when 5 points are labeled, and 3.3% in pixel accuracy and 10.2% in mIoU for 10 points. 4. We perform thorough experiments to determine the effect of the number of point labels and the point labeling style on the point propagation task, and provide meaningful recommendations for efficient annotation. We make our code (https://github.com/sgraine/HIL_coral_segmentation) and video (https://youtu.be/YBTUCECu3OM) available to foster future research. 2. Related Work. Broad-scale marine survey technologies such as autonomous underwater and surface vehicles enable the collection of large quantities of imagery [9, 16, 25, 27]. Automating the analysis of this imagery requires solutions that combine computer vision, deep learning, and domain-specific expertise in marine biology [14, 37]. This section discusses approaches for semantic segmentation of underwater imagery and weakly supervised methods for point label propagation, recent advances in foundation models, and human-in-the-loop principles. 2.1. Segmentation of Underwater Imagery. Semantic segmentation of underwater imagery is complicated by a range of factors, including the visual traits of coral species, which can appear similar between different species and are often intricate and highly textured [4, 15]. Image quality and clarity can also be affected by turbidity, scattering and attenuation of sunlight, blur, and changes in coloration due to depth [20, 36]. These image characteristics, combined with the lack of semantic "objectness" of coral instances, make semantic segmentation of underwater imagery a unique and challenging problem in computer vision.
There have been numerous approaches for fully supervised segmentation of corals in underwater imagery [11, 18, 35, 41, 42, 44, 45], where the model is trained on pairs of images and densely labeled, pixel-wise ground truth masks. The TagLab annotation tool [30] makes dense pixel-wise annotation of large orthoimages faster, but relies on a model trained on 15,000 densely labeled coral images. There are fewer approaches for weakly supervised segmentation of corals [1, 2, 31, 32, 39, 40]. These approaches are based on custom-designed superpixel methods which generate dense ground truth masks from sparse point labels. [Figure 2. Proposed Algorithm Schematic. Our smart labeling scheme combines domain expert knowledge and our model's internal uncertainty to optimize point label selection in a human-in-the-loop framework. Our method starts by taking a coral image as input and requesting the domain expert to label up to 10 points centrally in the largest instances. Then a feature similarity map is generated by calculating the cosine similarities between the labeled points and every other pixel. We encourage exploration by incorporating a distance map and then combine both maps to obtain an overall probability mask for pixel selection. The selected pixel is fed back to the domain expert for labeling, and then the KNN is updated. Once the maximum points have been labeled, the augmented ground truth mask is generated and can later be used for training a model to perform semantic segmentation.] The multi-level superpixel method [1, 2, 31] generates superpixels from color features at many spatial scales, and then propagates point labels within each segment before joining together the different "levels" of superpixels. The most recent approach, Point Label Aware Superpixels [32], describes a novel superpixel algorithm which uses the point labels directly when generating the superpixel segments. These prior superpixel approaches rely on having sufficient points available and experience degraded performance in the very sparse setting. There is an opportunity to generalize and simplify these approaches by leveraging recent advances in general foundation models. 2.2. Foundation Models. Recent works have developed foundation models for learning robust, task-agnostic pre-trained feature representations [23, 28, 46]. Foundation models are trained on large-scale datasets and are designed to learn highly generalized representations which allow the model to transfer to tasks and data outside the training distribution [23]. Some works personalize foundation models for specific visual concepts, e.g. the user's pet, as in [43], or adapt the model by training a task-specific decoder or adapter [5]. For leaf counting, instance segmentation, and disease classification for plant phenotyping, the adapted general foundation models did not outperform the task-specific methods [5]. In medical image analysis, [3] demonstrate the cross-task generalizability of DINOv2 and report competitive performance when the features are used with a KNN for disease classification.
Other works have performed self-supervised object localization without labels [34]; however, they do not segment the entire image. Although some research has investigated the application of the DINOv2 foundation model to specialized problems [3, 5, 17], the performance of DINOv2 for underwater coral segmentation has not been studied. As described in Section 2.1, underwater images have unique visual characteristics, including abstract textures, fractal-like boundaries, and overlapping instances [4, 32]. It is unknown whether foundation models trained on general images can produce meaningful feature embeddings for coral imagery. 2.3. Human-in-the-Loop. Human-in-the-loop describes machine learning that involves interaction between humans and the algorithm [26]. Specifically, the human-in-the-loop sub-field of Interactive Machine Learning describes a framework in which control is shared between the human and the model: the human supplies information to the model in a focused, frequent and interactive way [19, 26]. While using models to predict labels on new data has been performed in the ecology domain previously [6, 22], the use of foundation models as part of an interactive labeling framework has not been implemented for coral point label propagation. To our knowledge, an approach for multi-species coral segmentation which combines the general knowledge of foundation models with sparse domain-specific labeling has not been proposed in the literature. This is an opportunity to decrease the time and cost of manually labeling domain-specific imagery, while improving the accuracy of propagated ground truth masks when few labels are available. 3. Method. 3.1. Method Overview. Our proposed point label propagation approach leverages the denoised DINOv2 foundation model [38], based on [28]. We generate the augmented ground truth mask by clustering pixels in the deep feature space with K-Nearest Neighbors. Our approach takes a photo-quadrat coral image and a set of sparse point labels as input, and outputs a dense pixel-wise augmented ground truth mask. The sparse point labels are either randomly distributed in the image, spaced evenly as a grid (when the number of points cannot be distributed equally into rows and columns, the nearest quantity is used, e.g. for 5 point labels a grid of 2x2 points is used), or selected using our proposed human-in-the-loop smart labeling regime (Section 3.2). We take our image as input and obtain a set of feature vectors of length 768 from the denoised DINOv2 feature extractor [38]. This feature extractor is based on a transformer architecture [28], which outputs a deep feature for every 14x14 pixel patch in the input image. The feature extractor also outputs a 'CLS' token for the whole image, which we do not use. We spatially upsample the feature vectors with bilinear interpolation, such that we obtain one deep feature vector for each pixel in the input image. The per-pixel feature vectors are then L2 normalized. We take our set of L sparse labeled pixels and store the normalized feature embeddings $\{v_1, \ldots, v_l, \ldots, v_L\}$ for these pixels. We also store $X = \{(x_1, y_1), \ldots, (x_L, y_L)\}$, where $(x_l, y_l)$ are the pixel coordinates of $v_l$. We calculate the cosine similarity between the feature embedding $v_l$ for $l \in \{1, \ldots, L\}$ and the feature embedding $v_p$ of every other pixel $p$ in the image: $\mathrm{sim}(v_p, v_l) = v_p \cdot v_l$ (1). We determine the augmented ground truth mask by performing K-Nearest Neighbors with $k = 1$. Note that we trialed different values of $k$ and did not see any improvement for $k > 1$, as seen in Fig. 5 and discussed further in the Supplementary Material.
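As a reference for this propagation step, here is a minimal PyTorch sketch under stated assumptions: the patch features are already extracted as a (D, h, w) tensor, images are 512x512 as in UCSD Mosaics, and point coordinates are (x, y) pairs; the variable names are ours.

```python
import torch
import torch.nn.functional as F

def propagate_point_labels(feats: torch.Tensor,
                           points: torch.Tensor,
                           point_labels: torch.Tensor) -> torch.Tensor:
    """Propagate sparse point labels to every pixel with 1-NN cosine similarity.

    feats:        per-patch features from (denoised) DINOv2, shape (D, h, w).
    points:       (L, 2) LongTensor of (x, y) pixel coordinates of labeled points.
    point_labels: (L,) integer class labels.
    Returns an (H, W) augmented ground truth mask.
    """
    H, W = 512, 512  # UCSD Mosaics image size
    # Upsample patch features to one feature per pixel and L2-normalize (Sec. 3.1).
    feats = F.interpolate(feats.unsqueeze(0), size=(H, W),
                          mode="bilinear", align_corners=False)[0]
    feats = F.normalize(feats, dim=0)                       # (D, H, W)

    labeled = feats[:, points[:, 1], points[:, 0]]          # (D, L) features v_l
    sim = torch.einsum("dhw,dl->lhw", feats, labeled)       # cosine similarity, Eq. (1)
    nearest = sim.argmax(dim=0)                             # 1-NN (k = 1)
    return point_labels[nearest]
```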
3.2. Human-in-the-Loop Pixel Selection. To further improve performance in the extremely sparse case, we propose a novel labeling regime (Fig. 2). In contrast to prior approaches, which leveraged randomly distributed or grid-spaced sparse point labels, we consider the point labeling problem as a human-in-the-loop task. To this end, we assume that a domain expert is available to collaboratively label a certain number of points, which are then used in our KNN and DINOv2 point label propagation approach. To select informative points for the human to label, we consider the cosine similarity between labeled and unlabeled pixel features in the DINOv2 deep feature space. We start by asking the domain expert to label up to 10 pixels in the middle of the largest instances they can see in the image. For more than 10 point labels, the smart pixel regime iteratively proposes one point at a time for labeling, based on which parts of the image have the most uncertainty. This uncertainty is modeled via the cosine similarity to the closest labeled pixel. We first follow the method described in Section 3.1 to obtain, upsample, and normalize the per-pixel feature embeddings, and compute a cosine similarity map (Eq. 1) between the starting labeled pixels and every other pixel in the image. We invert this map such that pixel locations with a low cosine similarity to the closest labeled pixel have a higher probability of selection: $C(x, y) = 1 - \max_{l \in \{1, \ldots, L\}} \mathrm{sim}(v_q, v_l)$ (2), where $v_q$ is the feature vector at location $(x, y)$. We encourage exploration across the whole image by creating a probabilistic distance map for the labeled pixels. We first compute the Euclidean distance transform on a binary mask denoting the locations of our set of labeled pixels, where initially $L = 10$: $D(x, y) = \min_{(x', y') \in X} \sqrt{(x - x')^2 + (y - y')^2}$ (3). We then apply Gaussian smoothing to the distance transform, tuning the smoothing parameter $\sigma$ in the ablation study in Section 5.2: $D_{\mathrm{smooth}}(x, y) = 1 - \exp\left(-\frac{D(x, y)^2}{2\sigma^2}\right)$ (4). We combine the probabilistic cosine similarity map with the distance map, weighting the similarity term with $\lambda = 2.2$ (see hyperparameter tuning in Section 5.2): $M(x, y) = \frac{D_{\mathrm{smooth}}(x, y) + \lambda\, C(x, y)}{\lambda + 1}$ (5). Given the combined map, we select the next pixel for labeling as the location $(\hat{x}, \hat{y}) = \arg\max_{(x, y)} M(x, y)$ with the highest selection probability in $M$.
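A sketch of one selection round (Eqs. 2-5), assuming the (H, W) map of maximum cosine similarities to the labeled set has already been computed, and reading Eq. (5) as a λ-weighted average of the two maps:

```python
import torch
from scipy import ndimage

def select_next_point(sim_to_labeled: torch.Tensor, labeled_xy,
                      sigma: float = 50.0, lam: float = 2.2):
    """Propose the next pixel for expert labeling (Eqs. 2-5).

    sim_to_labeled: (H, W) max cosine similarity of each pixel to the labeled set.
    labeled_xy:     iterable of (x, y) coordinates already labeled.
    """
    H, W = sim_to_labeled.shape
    # Eq. (2): invert similarity so dissimilar (uncertain) pixels score high.
    C = 1.0 - sim_to_labeled

    # Eq. (3): Euclidean distance transform to the nearest labeled pixel.
    mask = torch.ones(H, W)
    for x, y in labeled_xy:
        mask[y, x] = 0.0
    D = torch.from_numpy(ndimage.distance_transform_edt(mask.numpy())).float()

    # Eq. (4): Gaussian-smoothed distance map, ~0 near labeled points, ~1 far away.
    D_smooth = 1.0 - torch.exp(-D ** 2 / (2.0 * sigma ** 2))

    # Eq. (5): lambda-weighted combination of exploration and uncertainty.
    M = (D_smooth + lam * C) / (lam + 1.0)
    idx = int(torch.argmax(M))
    return idx % W, idx // W  # (x, y) of the proposed point
```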
[Figure 3. Point label propagation pixel accuracy and mean IoU: our KNN and DINOv2 approach is shown in orange for random and grid labeling, and the red line depicts the performance of the KNN and DINOv2 approach with the human-in-the-loop collaborative labeling scheme. Our approach significantly outperforms prior works when very sparse point labels are available, i.e. 5-25 points. When a larger quantity of points is used (300 points), the performance of the different approaches converges.] Table 1. Performance of point label propagation approaches (refer to Section 4.3 for metric definitions), for 5 / 10 / 25 / 300 point labels. 'F-MSS' is Fast MSS [31], 'PLAS' is Point Label Aware Superpixels [32], and 'D+NN' is KNN with denoised DINOv2 [38] (ours):

| Method | Label Style | PA (5 / 10 / 25 / 300) | mPA (5 / 10 / 25 / 300) | mIoU (5 / 10 / 25 / 300) | Time per Image, s (5 / 10 / 25 / 300) |
|---|---|---|---|---|---|
| F-MSS | Rand. | 7.29 / 13.49 / 30.09 / 86.81 | 6.60 / 12.34 / 29.26 / 82.70 | 6.55 / 12.11 / 28.53 / 80.12 | 2.14 / 2.19 / 2.21 / 2.76 |
| F-MSS | Grid | 6.36 / 13.94 / 39.18 / 89.98 | 5.95 / 13.01 / 36.72 / 88.17 | 5.93 / 12.83 / 35.51 / 86.44 | 2.43 / 2.45 / 2.36 / 2.96 |
| PLAS Single | Rand. | 48.45 / 55.26 / 65.16 / 86.68 | 32.03 / 41.44 / 57.65 / 81.74 | 23.86 / 32.22 / 47.76 / 77.56 | 1.71 / 2.00 / 2.17 / 1.93 |
| PLAS Single | Grid | 51.88 / 58.11 / 72.96 / 89.28 | 38.24 / 46.30 / 64.91 / 86.16 | 28.93 / 36.94 / 58.00 / 82.73 | 1.55 / 1.80 / 2.06 / 1.81 |
| PLAS Ens. | Rand. | 52.73 / 62.00 / 71.11 / 92.47 | 36.48 / 49.04 / 63.21 / 89.93 | 25.91 / 35.6 / 50.46 / 85.45 | 4.27 / 4.55 / 5.02 / 5.35 |
| PLAS Ens. | Grid | 54.23 / 65.76 / 76.31 / 94.60 | 40.08 / 53.20 / 69.13 / 92.49 | 30.00 / 40.34 / 59.82 / 89.38 | 4.06 / 4.25 / 5.15 / 5.28 |
| D+NN (Ours) | Rand. | 55.72 / 64.51 / 75.07 / 88.77 | 39.94 / 50.91 / 65.80 / 83.84 | 32.09 / 42.79 / 58.04 / 81.75 | 4.88 / 4.55 / 4.74 / 4.90 |
| D+NN (Ours) | Grid | 57.73 / 69.07 / 78.74 / 89.86 | 44.40 / 58.08 / 70.05 / 87.41 | 35.74 / 50.58 / 64.40 / 85.77 | 4.78 / 4.70 / 4.79 / 4.69 |
| D+NN (Ours) | Smart | 71.56 / 76.38 / 81.27 / 89.61 | 61.46 / 69.87 / 75.91 / 86.45 | 52.60 / 59.48 / 67.97 / 85.00 | 4.74 / 4.98 / 20.0 / 273.08 |

4. Experimental Setup. This section describes the implementation details (Section 4.1), evaluation datasets (Section 4.2), and evaluation metrics (Section 4.3). 4.1. Implementation. All experiments are conducted with a Quadro RTX 6000, and inference times are reported with respect to this GPU. Our approach is implemented in Python and PyTorch [29]. We use the Faiss module to enable fast K-Nearest Neighbors on GPU [21]. We use the denoised DINOv2 model and implementation from [38]. 4.2. Datasets. The UCSD Mosaics dataset is a multi-species coral dataset labeled with dense ground truth masks [2, 10]. We use the version of the dataset provided by [2], but we notice that some ground truth masks are corrupted, so we exclude these (we remove 219 images from the training split, yielding 3,974 images, and 32 from the test split, resulting in 696 images; further details can be found in the Supplementary Material). Each image is 512 by 512 pixels, and the dataset contains 33 types of corals plus an 'unknown' or 'unlabeled' class. For consistency with [2, 32], we ignore this class during evaluation. To simulate the domain expert in our human-in-the-loop smart labeling regime, we take the point label from the ground truth mask at the proposed location, or, for the first 10 pixels, we select the middle pixel of the largest instances in the mask. 4.3. Evaluation Metrics. We use three frequently used metrics [2, 12, 32] to establish and compare the performance of our approach with prior methods:
1) Pixel Accuracy (PA) is the number of correctly classified pixels divided by the total number of predicted pixels; 2) mean pixel accuracy (mPA) is the pixel accuracy averaged over the classes; and 3) mean intersection over union (mIoU) is the average of the per-class IoU scores. A higher score indicates better performance for all metrics. Table 2. Effect of denoising on point propagation (refer to Section 4.3 for metric definitions). 'Raw' refers to the original DINOv2 [28] and 'Denoise' refers to the Denoising ViT implementation [38]:

| Labels | PA (Raw / Denoise) | mPA (Raw / Denoise) | mIoU (Raw / Denoise) |
|---|---|---|---|
| 5 | 68.58 / 71.57 | 60.23 / 61.46 | 50.28 / 52.60 |
| 10 | 73.32 / 76.38 | 68.04 / 69.87 | 55.76 / 59.48 |
| 25 | 76.94 / 81.27 | 70.97 / 75.91 | 61.61 / 67.97 |
| 50 | 79.71 / 82.80 | 76.68 / 77.52 | 68.43 / 72.96 |
| 100 | 82.87 / 85.60 | 79.45 / 81.82 | 74.92 / 78.77 |
| 200 | 86.14 / 88.06 | 84.20 / 84.83 | 81.44 / 82.75 |
| 300 | 88.10 / 89.61 | 85.58 / 86.45 | 83.79 / 85.00 |

[Figure 4. Comparison of Point Label Aware Superpixels (PLAS) [32] features, DINOv2 features [28], and denoised DINOv2 features [38]. For the transformer approaches, features for every 14x14 patch in the original image have been upsampled with bilinear interpolation. All features are reduced to RGB for visualisation with Principal Components Analysis (PCA). Pixels with similar RGB colors are similar in the deep feature space. The CNN features used by PLAS do not effectively group pixels into meaningful segments. The denoising model clearly reduces the artefacts, resulting in smoother, cleaner features and improved clustering.] 5. Results. We first compare our method to the state-of-the-art for point label propagation in Section 5.1 and then provide ablation studies in Section 5.2. 5.1. Comparison to State-of-the-art Methods. We compare the performance of our novel method to state-of-the-art approaches, namely Fast Multi-level Superpixel Segmentation (Fast MSS) [31], a faster implementation of CoralSeg [2], and Point Label Aware Superpixels, for which we compare against both the single method (Single) and the ensemble of three point label aware algorithms (Ensemble), as described in [32]. As shown in Table 1, leveraging K-Nearest Neighbors with features extracted by the denoised DINOv2 foundation model [38] for point label propagation outperforms prior approaches for small numbers of point labels (5, 10 and 25 per image). With five point labels and our human-in-the-loop labeling regime, the absolute increase in mIoU is 46.1% and 22.6% compared to Fast MSS and Point Label Aware Superpixels, respectively (Fig. 3); for pixel accuracy, we improve by 64.3% and 17.3% (Fig. 3). If the human-in-the-loop labeling regime is not used, we still outperform the state-of-the-art by 3.5% for pixel accuracy and 5.7% for mIoU (for 5 grid points). Even in the setting that we do not target in this paper, i.e. when as many as 300 point labels are available, our approach exhibits comparable performance to the single classifier methods but is outperformed by the ensemble of three Point Label Aware Superpixel classifiers (Table 1). Our approach, which leverages DINOv2 and our smart labeling regime, exhibits computation times per image similar to the ensemble Point Label Aware Superpixel method for small quantities of point labels (Table 1). However, when smart labeling is used for large quantities of points, i.e. 300 points, the computation time is prohibitively high, as clustering in the deep feature space occurs at every iteration to generate the feature similarity map (Eq. 2). For large quantities of points, the grid-spaced version should be used instead.
It is important to note that the intended use case for smart labeling is to improve performance in the extremely sparse (5-25 points) setting. Fig. 6 presents qualitative results demonstrating that our method generates a dense augmented ground truth mask which closely matches the provided ground truth, even for very sparsely labeled images. 5.2. Ablation Study. 5.2.1 Denoising DINOv2 Features. We use the denoised feature extractor for DINOv2 described in [38] and demonstrate the utility of this choice with the results in Table 2. Fig. 4 compares the original [28] and denoised [38] DINOv2 deep feature embeddings. We provide an additional ablation in the Supplementary Material which also evaluates the performance when DINOv2 is trained with registers [7], both with and without denoising [38], although we did not see an improvement with this approach. Fig. 4 compares the deep features to the ground truth for each of the images, as well as the deep CNN features used in the Point Label Aware Superpixel method [32]. We show that the raw DINOv2 features exhibit artefacts caused by the position embeddings used during training. These artefacts are unhelpful during clustering, because coral instances of the same class can appear in different areas of the same image. The denoising model effectively removes these artefacts, resulting in cleaner features and improved point propagation (Table 2). [Figure 5. Pixel accuracy of label propagation for 25 point labels, ablating (a) the weighting term λ, (b) the smoothing parameter σ, and (c) the number of neighbors k. Our human-in-the-loop smart pixel selection regime is robust to changes in hyperparameter values. (a) If the feature similarity map is weighted more highly, there is a small improvement (we choose λ = 2.2). (b) The pixel accuracy is highest when the distance map Gaussian smoothness is set to σ = 50. (c) Clustering with KNN results in higher performance when k = 1.] 5.2.2 Weighting the Probability Maps (λ). We evaluate the impact of the weighting term λ, which balances the importance of the cosine similarity map (Eq. 2) and the distance map (Eq. 3). As seen in Fig. 5, a value of λ = 2.2 gives the best pixel accuracy, although our approach is not sensitive to the exact value of λ. 5.2.3 Exclusion Distance. Our human-in-the-loop labeling regime considers the distance between each pixel selected for labeling by incorporating a Gaussian-smoothed distance mask between all pixels and the previously labeled pixels. The Gaussian smoothing introduces the σ hyperparameter, which controls how close selected pixels can be to previously labeled pixels. We demonstrate the impact of this hyperparameter on the point label propagation task in the ablation study in Fig. 5, and find that our approach is robust to different values. 5.3. Effect of Point Label Quantity. Greater quantities of point labels result in improved performance on the point label propagation task (Fig. 3). Having a sufficient number of points is especially critical for the Fast MSS [31] approach. For grid-spaced point labels, Fast MSS improves from 5.9% to 86.4% mIoU when increasing from 5 to 300 labels, a difference of 80.5%, whereas the equivalent difference in mIoU is 59.4% and 50.0% for Point Label Aware Superpixels [32] and our DINOv2 and KNN approach, respectively (Table 1).
We include the results for all quantities of point labels (5, 10, 25, 50, 100, 200, 300) in the Supplementary Material. Although point label propagation improves for all methods as the quantity increases, the rate of improvement decreases as the number of labels grows from 100 to 300 points. When the quantity of grid labels is increased from 100 to 300 points per image, the Fast MSS [31] approach improves by 11.7% mIoU, compared to an improvement of 68.9% mIoU when increasing from 5 to 100 points. For Point Label Aware Superpixels, the mIoU improves by 49.9% when increasing from 5 to 100 grid points, and by 9.5% when increasing from 100 to 300 points. For our denoised DINOv2 and KNN approach, the mIoU improves by 26.2% when increasing from 5 to 100 smart points, and by 6.2% when increasing from 100 to 300 smart points. 5.4. Effect of Point Label Placement Style. All of the evaluated approaches benefit from grid placement of point labels over random placement (Fig. 3). The effect is particularly pronounced for the multi-level superpixels (Fast MSS) [31], which exhibits absolute mIoU improvements with grid-spaced pixels of 13.3%, 9.9%, 6.8% and 6.3% for label quantities of 50, 100, 200 and 300 points, respectively. The Point Label Superpixels also benefit from grid-spaced points, with improvements of 8.4%, 4.8%, 4.4% and 3.9%, also for 50, 100, 200 and 300 points, respectively. Grid-spaced labels ensure consistent coverage of the whole image and make effective use of every point label. As seen in Fig. 6, randomly placed point labels can often fall very close together, reducing the information gained. Fig. 6 also demonstrates that for very small numbers of point labels, e.g. 5 points per image, a significant benefit is gained from leveraging the knowledge of the domain expert to select points in the center of instances for up to 10 points, and then iteratively selecting further pixels with the point propagation model, as described in Section 3. The augmented ground truth masks obtained by our proposed approach (top two rows of Fig. 6) are significantly closer to the ground truth than those of the prior approaches. We note that our smart label regime could be applied to the other techniques as well, and we will investigate this in future work. When few labels are available, the multi-level superpixel methods [2, 31] suffer because they rely on layering labeled regions from different scales. The point label superpixel method suffers in the sparse label case because the boundaries of the superpixels are not forced to conform to the instance boundaries by conflicting point labels [32]. [Figure 6. Qualitative comparison between the Fast MSS approach [31], the Point Label Aware Superpixel approach [32], and our approach based on denoised DINOv2 features [38], K-Nearest Neighbors, and our human-in-the-loop labeling regime. The top two rows show point propagation for 5 labels, and the bottom two rows demonstrate point propagation when 300 labels are available. The pixels used in the point label propagation are shown as black circles within the output augmented ground truth masks. Additional qualitative results are in the Supplementary Material.]
Our method performs well in the sparse label cases because pixels can be assigned the correct class even when they are spatially far from labeled points, since the clustering occurs only in the deep feature space."
},
{
"url": "http://arxiv.org/abs/2312.08916v2",
"title": "Progressive Feature Self-reinforcement for Weakly Supervised Semantic Segmentation",
"abstract": "Compared to conventional semantic segmentation with pixel-level supervision,\nWeakly Supervised Semantic Segmentation (WSSS) with image-level labels poses\nthe challenge that it always focuses on the most discriminative regions,\nresulting in a disparity between fully supervised conditions. A typical\nmanifestation is the diminished precision on the object boundaries, leading to\na deteriorated accuracy of WSSS. To alleviate this issue, we propose to\nadaptively partition the image content into deterministic regions (e.g.,\nconfident foreground and background) and uncertain regions (e.g., object\nboundaries and misclassified categories) for separate processing. For uncertain\ncues, we employ an activation-based masking strategy and seek to recover the\nlocal information with self-distilled knowledge. We further assume that the\nunmasked confident regions should be robust enough to preserve the global\nsemantics. Building upon this, we introduce a complementary self-enhancement\nmethod that constrains the semantic consistency between these confident regions\nand an augmented image with the same class labels. Extensive experiments\nconducted on PASCAL VOC 2012 and MS COCO 2014 demonstrate that our proposed\nsingle-stage approach for WSSS not only outperforms state-of-the-art benchmarks\nremarkably but also surpasses multi-stage methodologies that trade complexity\nfor accuracy. The code can be found at\n\\url{https://github.com/Jessie459/feature-self-reinforcement}.",
"authors": "Jingxuan He, Lechao Cheng, Chaowei Fang, Zunlei Feng, Tingting Mu, Mingli Song",
"published": "2023-12-14",
"updated": "2023-12-18",
"primary_cat": "cs.CV",
"cats": [
"cs.CV"
],
"label": "Original Paper",
"paper_cat": "Semantic AND Segmentation AND Image",
"gt": "Compared to conventional semantic segmentation with pixel-level supervision,\nWeakly Supervised Semantic Segmentation (WSSS) with image-level labels poses\nthe challenge that it always focuses on the most discriminative regions,\nresulting in a disparity between fully supervised conditions. A typical\nmanifestation is the diminished precision on the object boundaries, leading to\na deteriorated accuracy of WSSS. To alleviate this issue, we propose to\nadaptively partition the image content into deterministic regions (e.g.,\nconfident foreground and background) and uncertain regions (e.g., object\nboundaries and misclassified categories) for separate processing. For uncertain\ncues, we employ an activation-based masking strategy and seek to recover the\nlocal information with self-distilled knowledge. We further assume that the\nunmasked confident regions should be robust enough to preserve the global\nsemantics. Building upon this, we introduce a complementary self-enhancement\nmethod that constrains the semantic consistency between these confident regions\nand an augmented image with the same class labels. Extensive experiments\nconducted on PASCAL VOC 2012 and MS COCO 2014 demonstrate that our proposed\nsingle-stage approach for WSSS not only outperforms state-of-the-art benchmarks\nremarkably but also surpasses multi-stage methodologies that trade complexity\nfor accuracy. The code can be found at\n\\url{https://github.com/Jessie459/feature-self-reinforcement}.",
"main_content": "Introduction Weakly supervised semantic segmentation (WSSS) reduces the cost of annotating \u201cstrong\u201d pixel-level labels by using \u201cweak\u201d labels such as bounding boxes (Dai, He, and Sun 2015; Song et al. 2019), scribbles (Lin et al. 2016; Vernaza and Chandraker 2017), points (Bearman et al. 2016; Su et al. 2022) and image-level class labels (Araslanov and Roth 2020; Ru et al. 2022; Wu et al. 2023; Ru et al. 2023). Among these, image-level class labels are the most affordable, but challenging to exploit. A commonly used WSSS approach based on image-level class labels typically includes the following steps: (1) to train a neural network for image classification; (2) to use the network to generate class activation maps (CAMs) (Zhou et al. 2016) as seed regions; (3) to refine the CAMs to pseudo segmentation labels that will *corresponding author. Copyright \u00a9 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. Regions with low confidence activation value is moderate \u2022 boundaries between foreground and background \u2022 boundaries within semanticallydifferent objects Regions with high confidence activation value is high \u2022 person \u2022 chair activation value is low \u2022 background Reinforce by recovering local information Reinforce by constraining global information Image & GT uncertain regions confident regions CAM Figure 1: Our main idea. The flawed CAM only identifies discriminative regions. To solve this, we propose to partition the image into uncertain regions (e.g., object boundaries) and confident regions (e.g., the main body of an object) and reinforce features of these regions in a complementary way. be used as the ground truth for supervising a segmentation network. These steps can either be implemented as separate stages or as a single collaborative stage, and single-stage frameworks are usually more efficient as they streamline the training pipeline. In general, high-quality pseudo labels lead to superior semantic segmentation performance. In this work, we focus on developing an effective single-stage approach to generate more accurate pseudo labels from imagelevel class labels. Unfortunately, CAMs are essentially flawed because they are intended for classification, i.e., they strive to identify the most discriminative regions of an object aiming at improved classification accuracy. To tackle this, one can improve the initial seeds (Lee et al. 2019; Wang et al. 2020) or refine pseudo labels (Ahn, Cho, and Kwak 2019; Chen et al. 2020), by expanding activations or labels to semantically consistent pixels in the neighborhood. Recent studies have found that the restricted receptive field of convolution negatively affects the recognition of integral objects (Ru et al. 2022, 2023) and use vision transformer (Dosovitskiy et al. 2020) to model the global relationships for improvement. But this does not resolve the issue of CAM seeds or pseudo labels, and we still observe empirically high uncerarXiv:2312.08916v2 [cs.CV] 18 Dec 2023 \ftainty in (1) boundary regions between foreground objects and background, and (2) misclassified regions within multiple semantically-different objects. In the example of Figure 1, the generated CAM is uncertain about the two arms of the person on the chair, also the boundary between the foreground (person and chair) and the background is unclear. These uncertain regions are easily confused by obscure semantic clues due to the absence of pixel-level supervision. 
Our goal is to clarify the visual semantics of the uncertain regions mentioned above. We emphasize that local visual patterns should be explicitly modeled and captured. As can be seen from Figure 1, the head and upper thighs are well recognized, while the recognition of the arms and lower legs needs improvement. A better understanding that the arms and lower legs surely belong to the person should be established using local visual context. Although some methods can deal with noisy object boundaries by employing off-the-shelf saliency detection models for rich object contours (Lee et al. 2021b; Li, Fan, and Zhang 2022), they overlook uncertain regions caused by low confidence within objects. Alternatively, it has been proposed to attain the training objective using knowledge gathered from past training iterations, i.e., self-distillation (Caron et al. 2021). Encouraged by the success of self-distillation, we discard saliency detection models and instead take advantage of the self-distillation strategy in our model training.

To this end, to explore and strengthen semantics over uncertain regions, we propose a progressive self-reinforcement method. To distinguish uncertain regions from confident ones, we define those with intermediate CAM scores as uncertain regions, since a very low/high score strongly indicates the background/foreground. Specifically, we propose to mask uncertain features (equivalent to image patch tokens) and learn to recover the original information with the help of an online momentum teacher. This masking strategy aligns with a state-of-the-art pre-training paradigm called masked image modeling (MIM) that brings a locality inductive bias to the model (Xie et al. 2023). We upgrade its random masking strategy with semantic uncertainty so that the network can focus on uncertain features, controlled by the masking ratio. This design facilitates features in both object boundaries and misclassified regions. Assuming that confident features should be robust enough to present global semantics, we also introduce a complementary method that constrains semantic consistency between two augmented views with the same class labels. Our proposal can be seamlessly integrated into a vision transformer based single-stage WSSS framework. We summarize our main contributions as follows:

\u2022 We propose a novel WSSS approach, progressive feature self-reinforcement, to effectively enhance the semantics of uncertain regions. The investigation of uncertain regions, including both object boundaries and misclassified categories, significantly improves WSSS performance.

\u2022 We design an adaptive masking strategy to identify uncertain regions. Unlike most previous works that adopt additional saliency detection models, we locate uncertain regions with the guidance of semantic-aware CAMs.

\u2022 Exhaustive experiments on PASCAL VOC 2012 (Everingham et al. 2010) and MS COCO 2014 (Lin et al. 2014) show that our method outperforms SOTA single-stage competitors, and is even better than existing sophisticated multi-stage methods.

Related Work

Weakly Supervised Semantic Segmentation

Multi-stage WSSS methods adopt a classification model to generate CAMs as pseudo labels, then train a segmentation model for evaluating the final performance. To overcome the commonly acknowledged weakness that CAMs only focus on discriminative regions, several works aim at improving the training dynamics by erasing and seeking (Hou et al. 2018) or adversarial learning (Yoon et al. 2022).
Some recent approaches also adopt the vision transformer (Dosovitskiy et al. 2020) for the WSSS task, considering its favorable long-range modeling capability. TS-CAM (Gao et al. 2021) proposes to couple class-agnostic attention maps with semantic-aware patch tokens to promote object localization. MCTformer (Xu et al. 2022) introduces multiple class tokens so that class-specific attention maps can be generated. Other approaches incorporate extra data into training or post-processing, e.g., saliency maps (Lee et al. 2021b) or contrastive language-image pre-training (CLIP) models (Lin et al. 2023). Our solution aims at improving pseudo labels as well, but it is integrated into a single-stage framework for simplicity, and it requires neither extra data nor off-the-shelf saliency detection models.

Single-stage WSSS methods treat multiple stages such as classification, pseudo label refinement, and segmentation as a whole and perform joint training. 1Stage (Araslanov and Roth 2020) achieves comparable performance with dominant multi-stage approaches by ensuring local consistency, semantic fidelity and mask completeness. AFA (Ru et al. 2022) explores the intrinsic architecture of ViT and derives reliable semantic affinity from multi-head self-attention for pseudo label refinement. ToCo (Ru et al. 2023) tackles the issue of over-smoothing observed in ViT by supervising the final patch tokens with intermediate knowledge. Despite the simplified and streamlined training procedure, single-stage methods often suffer from inferior performance compared with multi-stage ones. In this work, we achieve superior semantic segmentation results using a single-stage framework by discovering and reinforcing underlying semantic layouts.

Self-Distillation

Self-distillation associates self-supervised learning (He et al. 2020) with knowledge distillation (Hinton, Vinyals, and Dean 2015), where knowledge is transferred and learned without resorting to any labels. It is primarily designed to compress large networks, hoping to promote performance on downstream tasks via mimicking the output of a frozen teacher (Noroozi et al. 2018; Zhang et al. 2023; Wang et al. 2023). Recently, some approaches (Caron et al. 2021; Zhou et al. 2021) build the teacher dynamically during training, where the teacher adopts the same architecture as that of the student and is updated with the knowledge of past iterations. The resulting framework simplifies the training process and achieves compelling results compared with other self-training frameworks.

[Figure 2: Overview of our framework. For the student pipeline, we forward one view through the encoder, and the encoded patch tokens are fed into the classifier for classification and the decoder for semantic segmentation, separately. The other view is masked and sequentially forwarded through the encoder, the aggregation module, and the projector. For the teacher pipeline, both views are propagated through the encoder, the aggregation module, and the projector. The teacher network requires no gradient and is an exponential moving average (EMA) of the student network.]
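A dynamically built teacher of this kind is usually maintained as an exponential moving average (EMA) of the student. The following is a minimal sketch of such an update rule; it is our illustration of the general mechanism, not the authors' code.

```python
import torch

@torch.no_grad()
def ema_update(teacher: torch.nn.Module, student: torch.nn.Module, momentum: float) -> None:
    """Update teacher parameters as an EMA of the student's.

    With momentum m, each teacher parameter becomes m * teacher + (1 - m) * student.
    The teacher receives no gradients; it is updated only through this rule.
    """
    for t_param, s_param in zip(teacher.parameters(), student.parameters()):
        t_param.mul_(momentum).add_(s_param.detach(), alpha=1.0 - momentum)
```

With momentum 0.0 the teacher is simply synchronized with the student at every step, and with momentum approaching 1.0 it changes ever more slowly.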
This motivates us to adapt the core idea of self-distillation to the WSSS task for the purpose of rectifying inaccurate object boundaries as well as improving discriminative object features.

Methodology

A Single-Stage Framework for WSSS

The proposed single-stage framework for WSSS is illustrated in Figure 2. We use an encoder-decoder architecture to accomplish semantic segmentation with image-level supervision. The encoder is a vision transformer supervised by image-level class labels. We adopt patch token contrast (PTC) (Ru et al. 2023) for affinity learning, as it is crucial to constrain affinities between patch tokens of the last layer against over-smoothing (Gong et al. 2021). As for semantic segmentation, we borrow a lightweight convolutional decoder from DeepLab (Chen et al. 2017), which is supervised by pseudo segmentation labels generated from CAMs. An aggregation module is used to summarize patch tokens into one class token, and an MLP-based projector transforms all tokens into an appropriate feature space for feature learning. To improve model training, we run a student and a teacher pipeline to achieve self-distillation.

Formally, let F be the transformer encoder with output embedding dimension D, P the projector, M the masking operator, and A the aggregating operator. We start by explaining the student pipeline. An input image is randomly augmented into two distorted views: x_1 and x_2. Each view is subsequently divided into HW non-overlapping patch tokens, denoted as T_1 = {t_1^(i)}_{i=1}^{HW} and T_2 = {t_2^(i)}_{i=1}^{HW}, respectively. We forward T_1 into the encoder to obtain the logits Z_1 = F(T_1) \u2208 R^{HW\u00d7D}, which are fed into the classifier for classification and into the decoder for segmentation, following the standard image classification and segmentation setup. To reinforce features, we divide T_2 into uncertain and confident tokens and mask the uncertain ones with learnable parameters; the uncertain token selection and masking approaches are explained later. The resulting masked view, denoted as \u02c6T_2 = M(T_2), is also fed into the encoder to obtain \u02c6Z_2 = F(\u02c6T_2). Embeddings of the unmasked confident tokens in \u02c6Z_2 are summarized into a class token by an aggregation module, denoted by A(\u02c6Z_2) \u2208 R^{1\u00d7D}. This class token is concatenated with \u02c6Z_2, and further projected and normalized to resemble probability distributions in \u02c6P_2 \u2208 R^{(1+HW)\u00d7D}, as

\u02c6P_2 = \u03c3(P([A(\u02c6Z_2); \u02c6Z_2])),  (1)

where \u03c3 is the row-wise softmax function and [;] denotes concatenation. We explain the aggregation design later. The teacher shares the same architecture as the student\u2019s encoder and projector, and has a similar pipeline described by Eq. (1), except that it takes the unmasked inputs T_1 and T_2 and returns two distributions, P_1 and P_2, for the two views, respectively. The student output \u02c6P_2 and the teacher outputs P_1 and P_2 are used for feature reinforcement training.

Uncertain Patch Token Selection

We select uncertain patch tokens under the guidance of semantic-aware CAMs, generated using the logits computed earlier with the first view, i.e., Z_1 = F(T_1). We linearly project Z_1 using the weights W \u2208 R^{C\u00d7D} of the classifier for image classification, where C is the class number, and then normalize it by the ReLU function and min-max normalization.
The normalized CAM, denoted as M_c \u2208 R^{HW\u00d7C} (0 \u2264 M_c \u2264 1), is defined by

M_c := min-max(ReLU(Z_1 W\u22a4)).  (2)

It encodes the semantic uncertainty for each patch driven by the CAM scores Z_1 W\u22a4. Next, we identify the uncertain regions based on the normalized CAM and mask the uncertain patches, following an adaptive masking strategy. Features in non-reliable regions are considered uncertain features. However, some reliable regions can be wrongly labeled, and their corresponding features can also be uncertain. To remedy this, we propose an adaptive masking strategy, resulting in a soft masking vector M_s \u2208 R^{HW} with each element given as

M_s^(i) = u_i + 1 if \u03b2_l < max(M_c^(i,:)) < \u03b2_h, and M_s^(i) = u_i otherwise,  (3)

where u_i \u223c U(0, 1) draws from a standard uniform distribution and enables a stochastic selection process. The above use of two background thresholds 0 < \u03b2_l < \u03b2_h < 1 for dividing patches into reliable and non-reliable ones is inspired by Zhang et al. (2020) and Ru et al. (2022), which suggest an intermediate score to be a sign of uncertainty. As a result, elements in M_s with larger values suggest uncertain patches. We use a masking ratio 0 < r < 1 to control the number of selected uncertain patches, and define the following binary selection mask M_b \u2208 R^{HW} with each element as

M_b^(i) = 1 if i \u2208 arg top-k(M_s) with k := \u230aHW \u00b7 r\u230b, and M_b^(i) = 0 otherwise,  (4)

where \u230a\u00b7\u230b denotes the floor function. The selected uncertain patches, flagged by 1 in M_b, correspond to the top-k largest-valued elements in M_s. Our masking strategy is designed to relax the hard foreground-background thresholds by the masking ratio r. When more patches are flagged as uncertain by \u03b2_l < max(M_c^(i,:)) < \u03b2_h, the selection is randomly conducted within them through u_i. When fewer uncertain patches are flagged, part of the confident patches are also selected. The original features of the selected tokens to mask are replaced by learnable parameters with the same feature dimension.
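Read literally, Eqs. (3)-(4) amount to a stochastic top-k selection over CAM-derived uncertainty scores. The following PyTorch sketch is our reading of these equations; the function name and tensor shapes are assumptions for illustration.

```python
import torch

def adaptive_uncertain_mask(cam: torch.Tensor, beta_l: float = 0.2,
                            beta_h: float = 0.7, ratio: float = 0.4) -> torch.Tensor:
    """Select uncertain patch tokens to mask, following Eqs. (3)-(4).

    cam:    (HW, C) normalized CAM scores in [0, 1].
    Returns a binary mask of shape (HW,): 1 marks a token to be masked.
    """
    hw = cam.shape[0]
    peak = cam.max(dim=1).values                      # max class score per patch
    uncertain = (peak > beta_l) & (peak < beta_h)     # intermediate score => uncertain
    scores = torch.rand(hw, device=cam.device)        # u_i ~ U(0, 1), Eq. (3)
    scores = scores + uncertain.float()               # uncertain patches rank higher
    k = int(hw * ratio)                               # Eq. (4): mask the top-k patches
    mask = torch.zeros(hw, device=cam.device)
    mask[scores.topk(k).indices] = 1.0
    return mask
```

The features of the selected tokens would then be replaced by a shared learnable embedding before the masked view is re-encoded.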
Certain Feature Aggregation

We design an attentive aggregation module to compress the embeddings of a sequence of N = HW patch tokens, stored in \u02c6Z \u2208 R^{N\u00d7D}, into one class token embedding \u00afZ \u2208 R^{1\u00d7D}. As shown in Figure 3, this module contains several aggregation blocks, where each block contains a masked cross-attention (MCA) layer and a feed-forward (FF) layer, given as

\u00afZ^(l)_(o) = \u00afZ^(l) + MCA(\u03b7([\u00afZ^(l); \u02c6Z^(l)])),  \u00afZ^(l+1) = \u00afZ^(l)_(o) + FF(\u03b7(\u00afZ^(l)_(o))),  (5)

where l denotes the layer index and \u03b7 is the LayerNorm (Ba, Kiros, and Hinton 2016).

[Figure 3: Illustration of the aggregation module. This module is composed of several aggregation blocks, where each block alternates in turn a cross-attention layer and a feed-forward layer. The cross-attention layer computes attention between a class token and a sequence of unmasked patch tokens.]

MCA is analogous to self-attention (Vaswani et al. 2017), except that it computes attention between the class token and a set of unmasked patch tokens. We parameterize MCA with projection weights W_Q, W_K, W_V, W_O \u2208 R^{D\u00d7D}, and calculate the queries Q \u2208 R^{1\u00d7D}, keys K \u2208 R^{N\u00d7D} and values V \u2208 R^{N\u00d7D} by projection, so that

Q = \u03b7(\u00afZ)W_Q\u22a4,  K = \u03b7(\u02c6Z)W_K\u22a4,  V = \u03b7(\u02c6Z)W_V\u22a4.  (6)

Note that queries are derived from the class token, while keys and values are calculated on patch tokens. The masked cross-attention A \u2208 R^{1\u00d7N} is then formulated as

A = \u03c3((1 \u2212 M_b)(QK\u22a4) / \u221aD).  (7)

The output of MCA is computed as a weighted sum of values, i.e., (AV)W_O\u22a4.

Feature Self-reinforcement

We adopt self-distillation (Caron et al. 2021; Zhou et al. 2021; Oquab et al. 2023) to improve the model training for feature reinforcement. As explained earlier, given two distorted views of the same image, we compute one student output \u02c6P_2 and two teacher outputs P_1 and P_2, where the first element stores the aggregated token information and the remaining elements the individual token content. We propose a self-reinforcement loss L_u for the uncertain tokens, defined as the cross-entropy loss between each student patch token and its corresponding teacher patch token:

L_u = \u2212\u2211_{i=2}^{1+N} M_b^(i) P_2^(i) log \u02c6P_2^(i),  (8)

where M_b is the mask in Eq. (4) that helps select the masked patch tokens. We also conduct self-reinforcement for the confident tokens, formulated as the cross-entropy loss on the two aggregated class tokens of the two views, given as

L_c = \u2212P_1^(1) log \u02c6P_2^(1).  (9)

Following common practice, we adopt the multi-label soft margin loss L_cls for classification, the pixel-wise cross-entropy loss L_seg for segmentation, and the cosine similarity loss L_aff for affinity regularization.

Table 1: Performance comparison of semantic segmentation on PASCAL VOC 2012 in terms of mIoU(%). Sup. denotes the supervision type. I: image-level class labels. S: off-the-shelf saliency maps. Net. denotes the segmentation network for multi-stage methods or the backbone for single-stage methods. RN38: Wide ResNet38 (Wu, Shen, and Van Den Hengel 2019), RN101: ResNet101 (He et al. 2016), MiT: Mix Transformer (Xie et al. 2021). \u2020 flags a transformer-based classification network or backbone.

| Method | Sup. | Net. | Val | Test |
| Multi-stage WSSS methods | | | | |
| RIB (Lee et al. 2021a) | I + S | RN101 | 70.2 | 70.0 |
| EDAM (Wu et al. 2021) | I + S | RN101 | 70.9 | 70.6 |
| EPS (Lee et al. 2021b) | I + S | RN101 | 71.0 | 71.8 |
| SANCE (Li, Fan, and Zhang 2022) | I + S | RN101 | 72.0 | 72.9 |
| L2G (Jiang et al. 2022) | I + S | RN101 | 72.1 | 71.7 |
| RCA (Zhou et al. 2022) | I + S | RN38 | 72.2 | 72.8 |
| SEAM (Wang et al. 2020) | I | RN38 | 64.5 | 65.7 |
| BES (Chen et al. 2020) | I | RN101 | 65.7 | 66.6 |
| CPN (Zhang et al. 2021) | I | RN38 | 67.8 | 68.5 |
| CDA (Su et al. 2021) | I | RN38 | 66.1 | 66.8 |
| ReCAM (Chen et al. 2022) | I | RN101 | 68.5 | 68.4 |
| URN (Li et al. 2022b) | I | RN101 | 69.5 | 69.7 |
| ESOL (Li et al. 2022a) | I | RN101 | 69.9 | 69.3 |
| \u2020ViT-PCM (Rossetti et al. 2022) | I | RN101 | 70.3 | 70.9 |
| \u2020MCTformer (Xu et al. 2022) | I | RN38 | 71.9 | 71.6 |
| \u2020OCR (Cheng et al. 2023) | I | RN38 | 72.7 | 72.0 |
| \u2020BECO (Rong et al. 2023) | I | MiT-B2 | 73.7 | 73.5 |
| \u2020MCTformer+ (Xu et al. 2023) | I | RN38 | 74.0 | 73.6 |
| Single-stage WSSS methods | | | | |
| RRM (Zhang et al. 2020) | I | RN38 | 62.6 | 62.9 |
| 1Stage (Araslanov and Roth 2020) | I | RN38 | 62.7 | 64.3 |
| \u2020AFA (Ru et al. 2022) | I | MiT-B1 | 66.0 | 66.3 |
| \u2020ToCo (Ru et al. 2023) | I | ViT-B | 71.1 | 72.2 |
| \u2020Ours | I | ViT-B | 75.7 | 75.0 |
Denote the weighting factors as {\u03bb_i}_{i=1}^{5}; the overall training objective is

L = \u03bb_1 L_cls + \u03bb_2 L_aff + \u03bb_3 L_seg + \u03bb_4 L_u + \u03bb_5 L_c.  (10)

It consolidates classification, segmentation and feature self-reinforcement within a single-stage framework.

Experiments

Experimental Settings

Datasets. We evaluate our method on two benchmarks: PASCAL VOC 2012 (Everingham et al. 2010) and MS COCO 2014 (Lin et al. 2014). PASCAL VOC contains 20 object classes and one background class. Following the common practice of previous works (Zhang et al. 2020; Araslanov and Roth 2020; Ru et al. 2022, 2023), it is augmented with data from the SBD dataset (Hariharan et al. 2011), resulting in 10,582, 1,449 and 1,456 images for training, validation and testing, respectively. MS COCO contains 80 object classes and one background class. It has 82,081 images for training and 40,137 images for validation. Note that we only adopt image-level labels during the training phase. We report mean Intersection-over-Union (mIoU) as the evaluation metric.

Table 2: Performance comparison of semantic segmentation on MS COCO 2014 in terms of mIoU(%). We use the same notations as in Table 1.

| Method | Sup. | Net. | Val |
| Multi-stage WSSS methods | | | |
| EPS (Lee et al. 2021b) | I + S | RN101 | 35.7 |
| RIB (Lee et al. 2021a) | I + S | RN101 | 43.8 |
| L2G (Jiang et al. 2022) | I + S | RN101 | 44.2 |
| CDA (Su et al. 2021) | I | RN38 | 33.2 |
| URN (Li et al. 2022b) | I | RN101 | 40.7 |
| ESOL (Li et al. 2022a) | I | RN101 | 42.6 |
| \u2020MCTformer (Xu et al. 2022) | I | RN38 | 42.0 |
| \u2020ViT-PCM (Rossetti et al. 2022) | I | RN101 | 45.0 |
| \u2020OCR (Cheng et al. 2023) | I | RN38 | 42.5 |
| BECO (Rong et al. 2023) | I | RN101 | 45.1 |
| \u2020MCTformer+ (Xu et al. 2023) | I | RN38 | 45.2 |
| Single-stage WSSS methods | | | |
| \u2020AFA (Ru et al. 2022) | I | MiT-B1 | 38.9 |
| \u2020ToCo (Ru et al. 2023) | I | ViT-B | 42.3 |
| \u2020Ours | I | ViT-B | 45.4 |

Implementation Details. We adopt ViT-B (Dosovitskiy et al. 2020) pretrained on ImageNet (Deng et al. 2009) as the transformer encoder. The convolutional decoder follows DeepLab-LargeFOV (Chen et al. 2017). We use two aggregation blocks in the aggregation module. The projector comprises a 3-layer perceptron and a weight-normalized fully connected layer (Caron et al. 2021). Parameters in the aggregation module and the projector are randomly initialized. We use light data augmentation: random resized cropping to 448 \u00d7 448 with scale [0.32, 1.0] and ratio [3/4, 4/3], random horizontal flipping, and random color jittering. The student network is optimized with AdamW (Loshchilov and Hutter 2017). The base learning rate is warmed up to 6e-5 over the first 1,500 iterations and decayed with a cosine schedule. The weighting factors (\u03bb_1, \u03bb_2, \u03bb_3, \u03bb_4, \u03bb_5) are set to (1.0, 0.2, 0.1, 0.1, 0.1). The teacher network requires no gradient and is updated with the EMA momentum. Experimentally, we find that synchronizing the teacher encoder with the student (i.e., momentum is 0.0) works better. The momentum for the teacher projector is 0.996 and increases to 1.0 with a cosine schedule during training. We embrace the centering and sharpening technique suggested in (Caron et al. 2021) to avoid collapsed solutions. The masking ratio r is 0.4 for adaptive uncertain feature selection. The background scores (\u03b2_l, \u03b2_h) introduced to determine uncertain regions are (0.2, 0.7). Training iterations are 20,000 for PASCAL VOC 2012 and 80,000 for MS COCO 2014.
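For concreteness, the two self-reinforcement terms of Eqs. (8)-(9) reduce to masked cross-entropies between teacher and student distributions. The sketch below follows our assumed tensor layout (row 0 holds the aggregated class token, rows 1..N the patch tokens); it is an illustration, not the released implementation.

```python
import torch

def self_reinforcement_losses(p2_student: torch.Tensor, p1_teacher: torch.Tensor,
                              p2_teacher: torch.Tensor, mask_b: torch.Tensor):
    """Compute the L_u (Eq. 8) and L_c (Eq. 9) cross-entropy terms.

    p2_student, p1_teacher, p2_teacher: (1 + N, D) rows are probability
        distributions; row 0 is the aggregated class token.
    mask_b: (N,) binary mask flagging the masked (uncertain) patch tokens.
    """
    eps = 1e-6
    # Eq. (8): teacher-to-student cross-entropy on the masked patch tokens only.
    ce = -(p2_teacher[1:] * torch.log(p2_student[1:] + eps)).sum(dim=1)  # (N,)
    loss_u = (mask_b * ce).sum()
    # Eq. (9): cross-entropy between the two views' aggregated class tokens.
    loss_c = -(p1_teacher[0] * torch.log(p2_student[0] + eps)).sum()
    return loss_u, loss_c
```

These two terms are then combined with the classification, affinity and segmentation losses using the weighting factors stated above, as in Eq. (10).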
We use multi-scale testing and dense CRF (Chen et al. 2014) at test time, following (Ru et al. 2022, 2023).

[Figure 4: Visualization results of CAMs and predicted segmentation labels with SOTA single-stage frameworks (i.e., AFA and ToCo). (left) Ground truth. (middle) Comparison results of CAMs. (right) Comparison results of predicted segmentation labels.]

Comparison with State-of-the-arts

PASCAL VOC 2012. Table 1 shows comparison results of our proposed Feature Self-Reinforcement (FSR) with other state-of-the-art methods on PASCAL VOC 2012. As can be seen from this table, FSR significantly outperforms other single-stage approaches, achieving 75.7% and 75.0% mIoU on the validation and test sets, respectively. It is noticeable that our method achieves even higher mIoU than several sophisticated multi-stage methods; e.g., FSR surpasses BECO (Rong et al. 2023) by margins of 2.0% and 1.5%. Compared with multi-stage methods using both image-level labels and off-the-shelf saliency maps, e.g., L2G (Jiang et al. 2022) and RCA (Zhou et al. 2022), our method still achieves superior performance. Although saliency maps are effective in providing additional background clues, our method can strengthen both confident regions (mostly the main body of objects or the background) and uncertain regions (mostly object boundaries), so that semantically distinct objects can be better differentiated. Moreover, recent methods with transformer-based networks (with \u2020) generally outperform those with convolutional networks (without \u2020). Nevertheless, due to the difficulty of end-to-end optimization, single-stage transformer-based methods previously achieved only comparable performance with multi-stage ones (e.g., ToCo reports 71.1% and 72.2%, versus BECO's 73.7% and 73.5%). Our method proves the efficacy of transformer-based single-stage training by attaining even better results.

MS COCO 2014. Table 2 gives comparison results of semantic segmentation on the more challenging MS COCO 2014 benchmark. We achieve 45.4% mIoU on the validation set, which outperforms previous single-stage solutions and is slightly better than the multi-stage MCTformer+ (Xu et al. 2023) by 0.2%. This further demonstrates the superiority of our proposed method.

Visualization Results. In Figure 4, we visualize CAMs derived from the classifier and semantic segmentation labels predicted by the decoder for three single-stage methods, i.e., AFA (Ru et al. 2022), ToCo (Ru et al. 2023) and our proposed FSR. Compared with AFA, ToCo and FSR can generate more integral and deterministic CAMs.
For instance, the wheels of the \u201cmotorbike\u201d are mildly activated by AFA while strongly confirmed by ToCo and FSR. This proves the effectiveness of FSR for uncertain features. However, AFA only focuses on boosting uncertain features, whereas our method enhances both uncertain and certain ones. For instance, AFA mistakes the \u201cdrawer\u201d for a \u201cchair\u201d, while FSR successfully recognizes the different semantics. This shows the importance of FSR for seemingly certain features.

Ablation Studies

In this section, we present extensive ablation studies to verify the effectiveness of our proposed FSR. We report the segmentation performance of pseudo labels (Pseu.) derived from CAMs as well as of predicted labels (Pred.) generated by the decoder. All results are evaluated on the PASCAL VOC 2012 val set. Dense CRF is not applied in the ablations.

Analysis of Uncertain Feature Selection. In Table 3, we compare two strict selection methods for uncertain features: edge-based selection and CAM-based selection. For edge-based selection, we choose the conventional Canny edge detector to extract edges in an image and generate exact masks of these edges. Activation thresholds for CAM-based selection are (0.2, 0.7). CAM-based selection is marginally better than edge-based selection; the improvement continues when CAM-based selection is relaxed, i.e., uncertain features are not strictly but preferentially masked. Empirically, we find that r = 0.4 gives the best result. In addition, uncertain feature masking achieves higher performance than random feature masking in most cases, showing that it is important to reinforce uncertain features for semantics clarification.

Table 3: Ablation results of uncertain feature selection methods. \u201crandom\u201d means random masking, \u201cuncertain\u201d means our adaptive masking strategy that gives priority to masking uncertain regions.

| | Edge (strict) | CAM (strict) | mask ratio 0.1 | 0.2 | 0.3 | 0.4 | 0.5 |
| Pseu. label results (%), random | - | - | 73.1 | 73.6 | 74.1 | 74.2 | 73.2 |
| Pseu. label results (%), uncertain | 73.3 | 74.0 | 74.1 | 74.2 | 73.9 | 74.4 | 73.7 |
| Pred. label results (%), random | - | - | 71.7 | 72.3 | 71.3 | 72.3 | 71.2 |
| Pred. label results (%), uncertain | 71.6 | 71.8 | 72.2 | 72.3 | 72.0 | 72.5 | 72.1 |

Analysis of Feature Self-reinforcement. Table 4 shows the ablation results of FSR on uncertain regions (unc.FSR) and on certain regions (cer.FSR). The masking ratio is set to 0.4 for comparison. It demonstrates the advancement of unc.FSR by achieving 74.4% (compared to 71.1%) on pseudo labels and 72.5% (compared to 67.9%) on predicted labels. This proves that reinforcing uncertain features, which mainly contain ambiguous object boundaries and misclassified categories, is fairly effective. When combining unc.FSR with cer.FSR, the quality of pseudo labels can be further improved, from 74.4% to 75.7%; predicted labels, directly supervised by pseudo labels, are promoted as well, from 72.5% to 73.6%. This indicates that reinforcing confident features is complementary to unc.FSR, with enhanced global understanding. We showcase examples of our FSR strategy and its effect on object boundaries in Figure 5.

Table 4: Ablation results of unc.FSR and cer.FSR. \u201cGAP\u201d, \u201cGMP\u201d, and \u201cMCA\u201d are aggregation methods of cer.FSR.

| Masking | unc.FSR | cer.FSR | Pseu. (%) | Pred. (%) |
| - | - | - | 71.1 | 67.9 |
| CAM | \u2713 | - | 74.4 (+3.3) | 72.5 (+4.6) |
| - | - | \u2713 (GAP) | 72.3 (+1.2) | 70.9 (+3.0) |
| - | - | \u2713 (GMP) | 71.8 (+0.7) | 70.0 (+2.1) |
| - | - | \u2713 (MCA) | 75.2 (+4.1) | 73.3 (+5.4) |
| CAM | \u2713 | \u2713 (MCA) | 75.7 (+4.6) | 73.6 (+5.7) |

[Figure 5: FSR optimizes the boundary regions (e.g., dashed red box) through the adaptive masking of regions characterized by uncertainty and the integration of unc.FSR and cer.FSR.]

(a) Analysis of unc.FSR. To gain a deeper understanding of unc.FSR, we investigate the training process by analyzing the attention mechanism. Specifically, we compute the average attention entropy (Attanasio et al. 2022) for each attention head across transformer layers. As shown in Figure 6, the entropy at shallow layers (e.g., layers 0 to 6) stays similar without unc.FSR; however, it becomes higher and tighter at deep layers (e.g., layers 7 to 11) when unc.FSR is applied.

[Figure 6: Average attention entropy of different attention heads (dots) across different layers. (1) w/o unc.FSR, (2) w/ unc.FSR.]
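For reference, the per-head average attention entropy plotted in Figure 6 can be computed directly from softmax-normalized attention maps; the following is our sketch of that statistic, not the authors' analysis code.

```python
import torch

def average_attention_entropy(attn: torch.Tensor) -> torch.Tensor:
    """Average entropy of attention distributions per head.

    attn: (num_heads, N, N) softmax-normalized attention; each row sums to 1.
    Returns (num_heads,) mean entropy over all N query tokens.
    """
    eps = 1e-12
    entropy = -(attn * (attn + eps).log()).sum(dim=-1)  # (num_heads, N)
    return entropy.mean(dim=-1)
```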
A large entropy for a specific token indicates that a broad context contributes to this token, while a small entropy tells the opposite. From this point of view, we assume that unc.FSR benefits semantic segmentation by improving the degree of contextualization at deep layers.

(b) Analysis of cer.FSR. We compare our attentive aggregation of certain features (MCA) with two conventional methods: Global Average Pooling (GAP) and Global Maximum Pooling (GMP). GAP assigns an equal weight to each unmasked patch token, while GMP picks the dominant one along each dimension. Table 4 shows that GAP performs better than GMP, as GMP tends to intensify the most discriminative features, which may have an adverse effect on recognizing an integral object. It is noticeable that MCA outperforms GAP by a large margin, indicating that an attentive weighting mechanism is superior to average weighting. We visualize class-to-patch attention maps in Figure 7, which illustrates that the class token can adaptively learn to pay attention to object regions. Note that the class token is not directly supervised by classification in our design; it interacts with unmasked patch tokens and learns to summarize effective information from them.

[Figure 7: Class-to-patch attention maps derived from the aggregation module. Class labels (bird; tvmonitor; bottle, person; chair, sofa) are displayed below.]

Data Augmentation. We present comparison results with other data augmentations in Table 5, which reveals that data augmentations have limited impact on the performance. For example, the performance varies within the anticipated range when incorporating GaussianBlur or Solarization. Even when we substitute the data augmentation with the strong AutoAugment (Cubuk et al. 2018), the results witness a slight decline, as a strong augmentation may interfere with the segmentation objective.

Table 5: 10-trial experimental results of data augmentations. \u201cOurs\u201d is our default data augmentation setting.

| | Ours | +GaussianBlur | +Solarization | AutoAugment |
| Pseu. (%) | 75.7 | 75.9 \u00b1 0.05 | 75.3 \u00b1 0.12 | 74.8 \u00b1 0.09 |
| Pred. (%) | 73.6 | 73.6 \u00b1 0.02 | 73.2 \u00b1 0.06 | 72.8 \u00b1 0.04 |"
},
{
"url": "http://arxiv.org/abs/2401.06197v1",
"title": "Efficient Deformable ConvNets: Rethinking Dynamic and Sparse Operator for Vision Applications",
"abstract": "We introduce Deformable Convolution v4 (DCNv4), a highly efficient and\neffective operator designed for a broad spectrum of vision applications. DCNv4\naddresses the limitations of its predecessor, DCNv3, with two key enhancements:\n1. removing softmax normalization in spatial aggregation to enhance its dynamic\nproperty and expressive power and 2. optimizing memory access to minimize\nredundant operations for speedup. These improvements result in a significantly\nfaster convergence compared to DCNv3 and a substantial increase in processing\nspeed, with DCNv4 achieving more than three times the forward speed. DCNv4\ndemonstrates exceptional performance across various tasks, including image\nclassification, instance and semantic segmentation, and notably, image\ngeneration. When integrated into generative models like U-Net in the latent\ndiffusion model, DCNv4 outperforms its baseline, underscoring its possibility\nto enhance generative models. In practical applications, replacing DCNv3 with\nDCNv4 in the InternImage model to create FlashInternImage results in up to 80%\nspeed increase and further performance improvement without further\nmodifications. The advancements in speed and efficiency of DCNv4, combined with\nits robust performance across diverse vision tasks, show its potential as a\nfoundational building block for future vision models.",
"authors": "Yuwen Xiong, Zhiqi Li, Yuntao Chen, Feng Wang, Xizhou Zhu, Jiapeng Luo, Wenhai Wang, Tong Lu, Hongsheng Li, Yu Qiao, Lewei Lu, Jie Zhou, Jifeng Dai",
"published": "2024-01-11",
"updated": "2024-01-11",
"primary_cat": "cs.CV",
"cats": [
"cs.CV"
],
"label": "Original Paper",
"paper_cat": "Semantic AND Segmentation AND Image",
"gt": "We introduce Deformable Convolution v4 (DCNv4), a highly efficient and\neffective operator designed for a broad spectrum of vision applications. DCNv4\naddresses the limitations of its predecessor, DCNv3, with two key enhancements:\n1. removing softmax normalization in spatial aggregation to enhance its dynamic\nproperty and expressive power and 2. optimizing memory access to minimize\nredundant operations for speedup. These improvements result in a significantly\nfaster convergence compared to DCNv3 and a substantial increase in processing\nspeed, with DCNv4 achieving more than three times the forward speed. DCNv4\ndemonstrates exceptional performance across various tasks, including image\nclassification, instance and semantic segmentation, and notably, image\ngeneration. When integrated into generative models like U-Net in the latent\ndiffusion model, DCNv4 outperforms its baseline, underscoring its possibility\nto enhance generative models. In practical applications, replacing DCNv3 with\nDCNv4 in the InternImage model to create FlashInternImage results in up to 80%\nspeed increase and further performance improvement without further\nmodifications. The advancements in speed and efficiency of DCNv4, combined with\nits robust performance across diverse vision tasks, show its potential as a\nfoundational building block for future vision models.",
"main_content": "Introduction In the field of computer vision, there is an ongoing debate about whether convolutional networks (ConvNets) or Trans* Equal contribution B Corresponding author (daijifeng@tsinghua.edu.cn) (56x56)x128 (28x28)x256 (14x14)x512 Input T ensor Shape 0.5 1.0 1.5 2.0 Relative Runtime 3.76x faster FlashAttention Window Attention DWConv DCNv3 DCNv4 (a) 0 3 6 9 12 15 Iterations (K) 10 20 30 40 50 60 70 ImageNet T op-1 Acc (%) DCNv4 DWConv Dense-Attn DCNv3 (b) Figure 1. (a) We show relative runtime with DCNv3 as the baseline. DCNv4 shows significant speedup over DCNv3, and surpasses other common vision operators. (b) With the same network architecture, DCNv4 converges faster than other operators, while DCNv3 falls behind in the initial training phase. formers offer superior performance. In recent years, Transformer models [12, 25, 44] have achieved remarkable results in large vision models with the attention mechanism, showing the potential to overtake ConvNets. However, recent works such as InternImage [38] and ConvNeXt [26] demonstrate that ConvNet-based vision models retain robust performance, efficiency, simplicity, and suitable inductive bias for various downstream tasks [15, 41]. Notably, in domains like image generation [29, 31], convolution remains the preferred approach. This situation brings to light the enduring value of convolution-based approaches. Building on convolution\u2019s strengths, Deformable Convolution v3 (DCNv3) \u2013 the core operator of the advanced ConvNet model InternImage \u2013 innovatively combines a sparse attention mechanism with convolution: it processes each output location in a sliding window manner with a small window size (e.g. 3 \u00d7 3 = 9 points) which acts as a local, 1 arXiv:2401.06197v1 [cs.CV] 11 Jan 2024 \fsparse operator like convolution, while dynamically samples point with an adaptive range and aggregates the spatial features with input-dependent attention weights. With its small window size and ConvNet inductive bias, DCNv3 is expected to achieve a faster convergence rate and lower inference latency, especially when compared to dense global [12] or local window-based [25] attention methods. Despite these advantages, DCN has not become the goto solution for vision backbone models. This observation led us to investigate the lingering limitations of the DCN operator. The first thing we notice is the running speed. The slow speed of DCN is known to be a long-standing problem [1], as it introduces extra overhead on sampling non-nearby locations, making it not fit modern convolution algorithms. Our comparative analysis, illustrated in Fig. 1a, reveals that DCNv3 can be slower than a properly optimized dense global attention [9], highlighting the need for further optimization. Moreover, we find DCNv3 even converges slower than global attention at the initial backbone training phase, as shown in Fig. 1b, which is counter-intuitive as DCNv3 is equipped with ConvNet inductive bias. To overcome these challenges, we propose Deformable Convolution v4 (DCNv4), an innovative advancement to optimize the sparse DCN operator for practical efficiency. DCNv4 comes with a much faster implementation and an improved operator design to enhance its performance, which we will elaborate on as follows: First, we conduct instruction-level kernel profiling for existing implementation and find that DCNv3 is already lightweight. The compute cost is less than 1%, while memory access costs 99%. 
This motivates us to revisit the operator implementation, where we find that many memory accesses in the DCN forward process are redundant and can thus be optimized, leading to a much faster DCNv4 implementation. Second, drawing inspiration from convolution\u2019s unbounded weight range, we find that softmax normalization in spatial aggregation, a standard operation in dense attention, is unnecessary in DCNv3, as it is not a requirement for operators with a dedicated aggregation window for each location. Intuitively, softmax constrains the weights to a bounded 0 \u223c 1 value range, limiting the expressive power of the aggregation weights. This insight led us to remove the softmax in DCNv4, enhancing its dynamic property and improving its performance. As a result, DCNv4 not only converges significantly faster than DCNv3 but also accelerates the forward speed by more than 3\u00d7. This improvement allows DCNv4 to fully leverage its sparsity and become one of the fastest common core vision operators.

We further replace DCNv3 in InternImage with DCNv4, creating FlashInternImage. Remarkably, FlashInternImage achieves a 50 \u223c 80% speed increase compared to InternImage without any additional modifications. This enhancement positions FlashInternImage as one of the fastest modern vision backbone networks while maintaining superior performance. With the help of DCNv4, FlashInternImage significantly improves the convergence speed in ImageNet classification [10] and transfer learning settings, and further demonstrates improved performance in downstream tasks.

Furthermore, DCNv4 shows potential as a universal vision operator in various architectures and tasks. We integrate DCNv4 into other modern backbone architectures, including ConvNeXt [26] and ViT [12], replacing depthwise convolution [6] and dense self-attention layers [35]. Surprisingly, without any hyperparameter adjustments, these meticulously designed networks perform on par with DCNv4 while being much faster, showing the efficacy and efficiency of the dynamic, sparse operator. Moreover, we explore the potential of DCNv4 in generative models as a new application domain. Specifically, we apply it in the U-Net [30] architecture used in latent diffusion models [29], replacing regular convolution with DCNv4. Our experimental results show that DCNv4 can work better than the baselines in image generation, showing great potential for using DCN to improve generative models. We will release our implementation of DCNv4 and hope this efficient operator can facilitate future research in the vision community.

2. Related Work

Core operators in vision models: The standard convolution [17] stands as the most prevalent and impactful operator, forming the backbone of the majority of computer vision architectures [14, 16, 32]. Nevertheless, a myriad of operators, each with unique characteristics, collectively play a crucial role in the development of computer vision. Depthwise separable convolution (DWConv) [6] separates the spatial and channel operations and has been pivotal in developing lightweight and efficient models [26, 27]. RepLKNet [11] illustrates that a purely convolutional network, leveraging large-kernel depthwise convolutions, can attain competitive performance in both efficiency and effectiveness. The Deformable Convolution (DCN) series [7, 38, 47] significantly improves the adaptability of convolution by adding learnable offsets to the convolution kernels.
Contrary to convolutions, attention mechanisms [35] possess the capacity to model long-range dependencies and have been successfully adopted in various computer vision tasks [3, 12, 24, 33]. Window attention [25, 36] reduces the computational complexity inherent in vanilla attention by restricting the attention operation to a fixed-size window. To mitigate the high computational complexity associated with vanilla attention, deformable attention [48] enables each query to concentrate on a select number of key sampling points, with dynamically determined locations and weights. This efficient method is widely used in the following perception methods [4, 19, 21, 22, 43, 45]. DynamicConv [40] and dynamic-DWNet [13] augment depthwise convolutions (DWConv) by incorporating dynamic weights, thereby enabling the use of instance-specific weights that adapt dynamically. For non-grid structured data, sparse operators [34, 37, 42] utilize dynamic weights obtained via bilinear interpolation or in a parametric way.

Table 1. ImageNet-1K accuracy at different training epochs. Adding softmax normalization on the convolution weights significantly affects the convergence speed and the final performance of the ConvNeXt model.

| Model | 5th Ep | 10th Ep | 20th Ep | 50th Ep | 300th Ep |
| ConvNeXt | 29.9 | 53.5 | 66.1 | 74.8 | 83.8 |
| ConvNeXt + softmax | 8.5 (-21.4) | 25.3 (-28.2) | 51.1 (-15.0) | 69.1 (-5.7) | 81.6 (-2.2) |

Memory access cost (MAC) in vision backbones: As underscored in previous studies [18, 27], FLOPs, although a frequently used metric to measure model complexity, do not accurately represent the model\u2019s speed or latency. In practical scenarios, the running speed of a model is influenced by multiple factors, not just FLOPs; memory access cost (MAC) plays a particularly significant role in this context [27]. FlashAttention [9], by reducing the number of accesses to high-bandwidth memory (HBM), achieves a significantly faster speed in practice despite having higher FLOPs compared to vanilla attention. Although DCN operators do not exhibit a disadvantage in terms of FLOPs, their latency is considerably longer compared to DWConv under the same FLOPs budget, predominantly due to substantial memory access costs. In this work, we conduct a thorough analysis and optimization of the memory access costs associated with the DCN operators, significantly accelerating DCN\u2019s running speed.

3. Method

3.1. Rethinking the Dynamic Property in Deformable Convolution

Revisiting DCNv3: Given an input x \u2208 R^{H\u00d7W\u00d7C} with height H, width W and channels C, the DCNv3 operation with K points is defined in Eqs. (1)-(2) for each point p_0:

y_g = \u2211_{k=1}^{K} m_{gk} x_g(p_0 + p_k + \u2206p_{gk}),  (1)

y = concat([y_1, y_2, ..., y_G], axis=-1),  (2)

where G denotes the number of spatial aggregation groups.
For the g-th group, x_g, y_g \u2208 R^{H\u00d7W\u00d7C'} represent the sliced input/output feature maps, where C' = C/G is the group dimension; m_{gk} \u2208 R denotes the spatial aggregation weight (also known as the modulation scalar) of the k-th sampling point in the g-th group, conditioned on the input x and normalized by the softmax function along the dimension K; p_k denotes the k-th location of the pre-defined grid sampling {(\u22121, \u22121), (\u22121, 0), ..., (0, +1), ..., (+1, +1)} as in regular convolutions, and \u2206p_{gk} is the offset corresponding to the grid sampling location p_k in the g-th group. A 1 \u00d7 1 point-wise convolution on x and y can be applied before and after the DCNv3 operator to enhance the model\u2019s expressive power, following separable convolution [6]. DCNv3 is a combination of convolution and attention: on the one hand, it processes the input data in a sliding window manner, which follows convolution and inherits its inductive bias; on the other hand, the sampling offset \u2206p and the spatial aggregation weight m are dynamically predicted from the input feature, showing its dynamic property and making it more like an attention mechanism. We compare different operators, each with its own properties, in Fig. 2.

[Figure 2. Comparisons of core operators in spatial aggregation for query pixels at different locations within the same channel. (a) Attention and (b) DCNv3 use bounded (range 0 \u223c 1) dynamic weights to aggregate spatial features, while the window (sampling point set) for attention is the same for all locations, and DCNv3 uses a dedicated window for each location. (c) Convolution has a more flexible unbounded value range for aggregation weights and uses a dedicated sliding window for each location, but the window shape and aggregation weights are input-independent. (d) DCNv4 combines their advantages, using an adaptive aggregation window and dynamic aggregation weights with an unbounded value range.]

Softmax normalization: A key difference between convolution and DCNv3 is that DCNv3 normalizes m, the spatial aggregation weights, with a softmax function, following the convention of scaled dot-product self-attention. Conversely, convolution does not employ softmax over its weights and still works well. The reason why attention needs a softmax is straightforward: scaled dot-product self-attention with Q, K, V \u2208 R^{N\u00d7d} is defined by the formulation

softmax((1/\u221ad) QK\u22a4)V,  (3)

where N is the number of points in the same attention window (which can be either global [12] or local [25]), d is the hidden dimension, and Q, K, V are the query, key, and value matrices computed from the input. The softmax operation is required in Eq. (3) for attention; without softmax, K\u22a4V \u2208 R^{d\u00d7d} can be calculated first, and the operation degrades to a linear projection for all queries in the same attention window, resulting in degenerated performance.
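To make the role of the softmax concrete, the per-location aggregation of Eq. (1) can be sketched with the normalization left as a switch. This toy PyTorch rendering is our illustration; it assumes the K values per location have already been bilinearly sampled, and all names are hypothetical.

```python
import torch

def dcn_aggregate(sampled: torch.Tensor, weights: torch.Tensor,
                  use_softmax: bool) -> torch.Tensor:
    """Spatially aggregate K sampled values per output location (one group).

    sampled: (B, HW, K, C') values already bilinearly sampled at p0 + pk + dpk.
    weights: (B, HW, K) dynamic aggregation weights m_k predicted from the input.
    With use_softmax=True the weights are bounded as in DCNv3; with False,
    they keep an unbounded range as in DCNv4.
    """
    if use_softmax:
        weights = weights.softmax(dim=-1)  # normalize over the K sampling points
    return (weights.unsqueeze(-1) * sampled).sum(dim=2)  # (B, HW, C')
```

Everything else in the operator is unchanged between the two settings; only the value range of the aggregation weights differs.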
However, for convolutional operators like depthwise convolution and DCNv3, where each point has its own dedicated aggregation window, the values in each aggregation window are already different and there is no \u201ckey\u201d concept, so this degradation issue does not exist and the normalization becomes unnecessary. In fact, normalizing convolution weights to a fixed 0-1 range using softmax could impose a significant limitation on the operator\u2019s expressive power and make learning slower. To confirm this hypothesis, we train a ConvNeXt model and apply softmax to the 7 \u00d7 7 window of the depthwise convolution weights before the convolution forward pass. We observe a remarkable decline in model performance as well as convergence speed from the results in Tab. 1. This suggests that for operators with a dedicated aggregation window at each location, like convolution or DCN, aggregation weights with an unbounded range offer better expressive power than softmax-normalized, bounded-range weights.

Enhancing the dynamic property: Motivated by this observation, we remove the softmax normalization in DCNv3, transforming the modulation scalars ranging from 0 to 1 into unbounded dynamic weights similar to convolution. As shown in Fig. 2, this alteration further amplifies the dynamic property of DCN, whereas other operators have certain limits, such as a bounded value range (attention/DCNv3) or a fixed aggregation window with input-independent aggregation weights (convolution). Fig. 1b shows that with this change, DCNv4 converges significantly faster than DCNv3 and other common operators, including convolution and attention. Results in Sec. 4 further show that DCNv4 works well in both pre-training and transfer learning settings.

3.2. Speeding up DCN

Theoretically, DCN, as a sparse operator with a 3 \u00d7 3 window, should be faster than other common operators that employ larger window sizes, like dense attention or 7 \u00d7 7 depthwise convolution. However, we find that this is not the case, as shown in Fig. 1a. In this subsection, we first conduct a theoretical analysis of GPU efficiency, showing a large variance in memory access cost depending on how we read the memory. We then perform optimization based on our observations, significantly improving the speed of DCN by saving memory instructions and bringing the speed advantage of being a sparse operator into reality.

[Figure 3. Illustration of our optimization. In DCNv4, we use one thread to process multiple channels in the same group that share sampling offsets and aggregation weights. Workloads like memory reading and bilinear interpolation coefficient computation can be reduced, and multiple memory access instructions can be merged, yielding 2x fewer memory access costs.]

Theoretical analysis of GPU efficiency: Our study begins with a theoretical examination of the DCNv3 operator\u2019s computational behavior. We employ the roofline model to evaluate its performance, focusing on theoretical FLOPs and memory access cost (MAC). For an input and output tensor of shape (H, W, C), the DCNv3 operator requires 36HWC FLOPs, where 3 \u00d7 3 represents the convolution kernel\u2019s spatial dimensions and the factor of 4 accounts for the bilinear interpolation at each sampling point.
Following the framework outlined in [27], DCNv3\u2019s MAC is calculated as 2HWC + 27HWG. The first term corresponds to the input/output feature map size and the second to DCNv3\u2019s offsets and aggregation weights with G groups. Approximating G as C/16, assuming a group dimension of 16, results in approximately 3.7HWC MAC. However, this assumes an ideal scenario of infinite cache and a single memory read for each value, which is often unrealistic in parallel computing environments where concurrent thread execution necessitates simultaneous data access. To estimate the maximum memory access requirement, we consider a scenario devoid of cache, where each output location requires fresh memory reads: 36 reads for bilinear interpolation, 27 for offsets/aggregation weights, and one write operation, resulting in a MAC of 64HWC. This is 17 times larger than the ideal case. This analysis reveals a substantial gap in the ratio of computation to memory access (ranging from 0.6 to 9.7), highlighting the significant potential for memory access optimization. Notably, although DCNv3\u2019s use of input-dependent, dynamic offsets causes non-deterministic memory access, one deterministic fact is that channels within the same group share offset values. This leads us to propose a specific optimization strategy for enhancing DCNv3\u2019s speed.

Eliminating redundant workload: In previous CUDA implementations of the DCN kernel, for an input of shape (H, W, C) (assuming batch size one and a channel-last memory layout for simplicity), offsets of shape (H, W, G, K^2 \u00d7 2) and aggregation weights of shape (H, W, G, K^2), we create H \u00d7 W \u00d7 C threads in total to maximize parallelism, where each thread processes one channel for one output location. Notably, the D = C/G channels within each group share the same sampling offset and aggregation weight values for each output location. Using multiple threads to process these D channels at the same output location is wasteful, as different threads will read the same sampling offset and aggregation weight values from GPU memory multiple times, which is critical for a memory-bound operator. Processing multiple channels within the same group at each output location with one thread eliminates these redundant memory read requests, greatly reducing memory bandwidth usage. As the sampling locations are the same, we also only need to calculate the bilinear interpolation coefficients used in DCN once.
Specifically, if each thread processes D' channels, the memory access cost for reading the offsets and aggregation weights, as well as the computation cost for calculating the bilinear interpolation coefficients, can both be reduced by a factor of D'.

Eliminating redundant memory instructions: In practice, solely reusing threads for multiple channels does not yield a speed improvement. The reason is that when D' increases, we create fewer threads and the workload of each thread increases D' times, which essentially reduces the degree of parallelism of the CUDA kernel. Luckily, the DCN kernel is now computationally lightweight, as the bilinear interpolation only needs to be performed once for all D' channels, and most of the workload consists of memory instructions reading input values from different channels. When the memory layout is channel-last and all D' channel values are contiguous, we can leverage vectorized loads: for example, to read four 32-bit float values from memory, instead of issuing four separate 32-bit load instructions, we can use a single instruction to load a 128-bit packed value at once, thus reducing the number of instructions and the execution time of each thread. We apply a similar technique when writing the final results to GPU memory, minimizing memory access time and increasing memory bandwidth utilization. Moreover, modern half-precision data formats (float16/bfloat16) halve the bytes that need to be loaded, which means memory efficiency can double under the same memory bandwidth when using a half-precision format. However, we do not see a speed improvement with half-precision data in the original DCNv3 implementation, possibly due to too much overhead in data access and computation, while in our new implementation the speedup is significant. It is worth noting that the aforementioned optimization techniques can also be applied to DCNv1/v2 and deformable attention [48], as they share a similar performance bottleneck.

Micro design in the DCN module: The DCNv3 module introduces multiple micro designs; as the core kernel is optimized, their impact on speed becomes non-negligible. We identify two points in the DCNv3 design that can be further optimized. First, after removing the softmax and transforming the modulation scalars into dynamic aggregation weights as described above, the linear layers for computing the offsets and the dynamic weights can be combined into one linear layer. This reduces network fragmentation and eliminates extra overheads, such as kernel launching and synchronization, enhancing run-time efficiency on the GPU. Second, in the original DCNv3 module design, a complex sub-network consisting of a depthwise 3\u00d73 conv, layer normalization (LN), GELU, and a linear layer is used to compute the offsets and dynamic weights. Following the design in Xception [6], we remove the additional LN-GELU layers and use the original separable convolution structure, further reducing running time. We empirically find that if latency is a higher priority, the depthwise convolution can also be removed with only a minor performance sacrifice.
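The first micro-design change can be expressed as a single projection whose output is split into offsets and weights. The sketch below uses assumed dimensions; the module name and tensor layout are ours, not the released DCNv4 code.

```python
import torch
import torch.nn as nn

class OffsetAndWeightHead(nn.Module):
    """Predict sampling offsets and dynamic aggregation weights in one linear layer.

    For G groups and K sampling points per location, the layer emits 2K offset
    values (x, y) plus K weights per group, avoiding a second kernel launch for
    a separate weight branch. No softmax is applied to the weights (DCNv4 style).
    """

    def __init__(self, channels: int, groups: int, points: int = 9):
        super().__init__()
        self.groups, self.points = groups, points
        self.proj = nn.Linear(channels, groups * points * 3)  # 2 offsets + 1 weight

    def forward(self, x: torch.Tensor):
        # x: (B, HW, C) -> offsets (B, HW, G, K, 2), weights (B, HW, G, K)
        out = self.proj(x).view(*x.shape[:2], self.groups, self.points, 3)
        return out[..., :2], out[..., 2]
```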
4. Experiments

In this section, we verify the effectiveness of our proposed DCNv4 module from both the speed and the performance perspective. We first benchmark the operator-level speed and then integrate DCNv4 into backbone models to test system-level performance. All speed test results are obtained with an NVIDIA A100 80G SXM GPU. Due to space limits, we include additional experimental results and implementation details, including other hyperparameter settings and the hardware/software environment, in the supplementary material.

4.1. Speed Benchmark for Operators

Settings: We conduct the op-level benchmark by solely measuring the running time of several representative operators used to build state-of-the-art vision backbone models, including full attention [35] implemented with PyTorch as well as the advanced FlashAttention-2 [8] implementation, window-based attention with window size 7 \u00d7 7 [25], and depthwise convolution with a 7 \u00d7 7 window, implemented by cuDNN [5] and the ATen library from PyTorch [28], respectively. For simplicity, we only benchmark the core operation for spatial aggregation; additional linear layers like qkv projection and output projection are excluded from the runtime measurement. Please refer to the supplementary material for a more comprehensive module-level comparison. For the input scale, we first consider feature map shapes generated from the standard 224 \u00d7 224 input resolution for image classification with 4, 8, 16, 32\u00d7 downsample ratios, as used by common hierarchical ConvNet/Transformer backbones; we also add a feature map shape from an isotropic backbone like ViT with a downsampling ratio of 16 and a larger hidden dimension. We further consider high-resolution inputs often used in downstream tasks like object detection. We set the input shape to 800 \u00d7 1280 and 1024 \u00d7 1024 for the hierarchical and isotropic feature maps, respectively, as is common practice in object detection [15, 20]. The batch size is 64 and 1 for these two input sets, respectively. For operators with a head/group concept, we set the dimension of each head/group to 32 and change the number of heads/groups when the hidden dimension varies.

Table 2. Op-level benchmark on standard input shapes with various downsample rates (runtime in ms, FP32/FP16; FP16 only where the implementation is available). Our new DCNv4 surpasses all other commonly used operators under different input resolutions.

| Operator | 56\u00d756\u00d7128 | 28\u00d728\u00d7256 | 14\u00d714\u00d7512 | 7\u00d77\u00d71024 | 14\u00d714\u00d7768 |
| Attention (torch) | 30.8 / 19.3 | 3.35 / 2.12 | 0.539 / 0.448 | 0.446 / 0.121 | 0.779 / 0.654 |
| FlashAttention-2 | N/A / 2.46 | N/A / 0.451 | N/A / 0.123 | N/A / 0.0901 | N/A / 0.163 |
| Window Attn (7\u00d77) | 4.05 / 1.46 | 2.07 / 0.770 | 1.08 / 0.422 | 0.577 / 0.239 | 1.58 / 0.604 |
| DWConv (7\u00d77, torch) | 2.02 / 1.98 | 1.03 / 1.00 | 0.515 / 0.523 | 0.269 / 0.261 | 0.779 / 0.773 |
| DWConv (7\u00d77, cuDNN) | 0.981 / 0.438 | 0.522 / 0.267 | 0.287 / 0.153 | 0.199 / 0.102 | 0.413 / 0.210 |
| DCNv3 | 1.45 / 1.52 | 0.688 / 0.711 | 0.294 / 0.298 | 0.125 / 0.126 | 0.528 / 0.548 |
| DCNv4 | 0.606 / 0.404 | 0.303 / 0.230 | 0.145 / 0.123 | 0.0730 / 0.0680 | 0.224 / 0.147 |

Table 3. Op-level benchmark on high-resolution input shapes with various downsample rates (runtime in ms, FP32/FP16). DCNv4 performs well as a sparse operator, surpassing all other baselines, while dense global attention is slow in this scenario.

| Operator | 200\u00d7320\u00d7128 | 100\u00d7160\u00d7256 | 50\u00d780\u00d7512 | 25\u00d740\u00d71024 | 64\u00d764\u00d7768 |
| Attention (torch) | OOM / OOM | 25.4 / 12.9 | 2.88 / 1.89 | 0.490 / 0.309 | 4.17 / 2.57 |
| FlashAttention-2 | N/A / 13.2 | N/A / 1.74 | N/A / 0.285 | N/A / 0.0797 | N/A / 0.437 |
| Window Attn (7\u00d77) | 1.33 / 0.509 | 0.728 / 0.291 | 0.426 / 0.186 | 0.279 / 0.165 | 0.673 / 0.272 |
| DWConv (7\u00d77, torch) | 0.634 / 0.608 | 0.313 / 0.315 | 0.167 / 0.158 | 0.0943 / 0.0894 | 0.260 / 0.253 |
| DWConv (7\u00d77, cuDNN) | 0.331 / 0.282 | 0.188 / 0.168 | 0.114 / 0.115 | 0.0817 / 0.0881 | 0.161 / 0.156 |
| DCNv3 | 0.472 / 0.493 | 0.244 / 0.253 | 0.128 / 0.132 | 0.0737 / 0.0767 | 0.194 / 0.199 |
| DCNv4 | 0.210 / 0.136 | 0.124 / 0.0895 | 0.0707 / 0.0589 | 0.0452 / 0.0426 | 0.103 / 0.0672 |

Results: We show the benchmark results on standard-resolution and high-resolution input in Tab. 2 and Tab. 3, respectively. We report results with both FP32 and FP16 data formats, except for FlashAttention, which does not have an FP32 implementation. Dense global attention implemented with PyTorch performs significantly slower when the input resolution is large, and even runs out of memory. FlashAttention significantly improves the speed of attention and can be even faster than 7\u00d77 window attention in certain cases, indicating the importance of proper optimization.
Results: We show the benchmark results on the standard-resolution and high-resolution inputs in Tab. 2 and Tab. 3, respectively. We report results in both FP32 and FP16 data formats, except for FlashAttention, which does not have an FP32 implementation. Dense global attention implemented with PyTorch is significantly slower when the input resolution is large and can even run out of memory. FlashAttention significantly improves the speed of attention and can even be faster than 7\u00d77 window attention in certain cases, indicating the importance of proper optimization. However, it does not change the quadratic complexity of attention; when the input resolution is high, it still falls behind local/sparse operators like window attention or convolution. While DCNv3 can be faster than DWConv with a plain implementation, it is slower than the heavily optimized cuDNN version. In contrast, our DCNv4 provides more than a 3\u00d7 speedup over DCNv3, greatly reducing the running time. Moreover, DCNv4 successfully leverages the advantage of its 3 \u00d7 3 sparse window to run significantly faster than the other baselines under different settings.
4.2. Image Classification
Settings: We evaluate the effectiveness of DCNv4 on the ImageNet classification task. We start from a strong baseline, InternImage [38], as it shows state-of-the-art performance among ConvNet-based models. We replace the original DCNv3 in InternImage with DCNv4 and create FlashInternImage. All other implementation details, including the network architecture and hyperparameters, are kept the same as in [38]. We also compare with Swin Transformer and ConvNeXt, two representative baselines among Transformer and ConvNet models. We follow the common practice [25, 26, 38] and train FlashInternImage-Tiny/Small/Base on ImageNet-1K for 300 epochs. FlashInternImage-Large is first trained on ImageNet-22K for 90 epochs and then fine-tuned on ImageNet-1K for 20 epochs. The other baselines share the same setting for a fair comparison.
Results: Tab. 4 shows the results of models at various scales. Besides the model size and training/inference resolution, we also report each model\u2019s overall throughput (number of images per second) in the FP32/FP16 data formats. For a fair comparison, we use the timm [39] implementations of ConvNeXt and Swin Transformer, which are actually faster than the original implementations. Equipped with DCNv4, FlashInternImage improves the throughput of its InternImage counterpart by 50% to 80% while slightly improving accuracy. FlashInternImage now matches the speed of ConvNeXt with higher accuracy. It is worth noting that FlashInternImage-S outperforms ConvNeXt-B (84.4% vs. 83.8%) while being faster, showing a better speed-accuracy trade-off. Moreover, FlashInternImage-L even surpasses ConvNeXt-XL and InternImage-XL while being 30% to 130% faster (401 vs. 174 images/s against InternImage-XL), demonstrating the effectiveness of our DCNv4 module.
Table 4. Image classification performance on ImageNet-1K; accuracy and throughput (images/s, FP32 / FP16). We show the relative speedup between FlashInternImage w/ DCNv4 and its InternImage counterpart. DCNv4 significantly improves the speed while showing state-of-the-art performance.
Model | Size | Scale | Acc | Throughput
Swin-T | 29M | 224\u00b2 | 81.3 | 1989 / 3619
ConvNeXt-T | 29M | 224\u00b2 | 82.1 | 2485 / 4305
InternImage-T | 30M | 224\u00b2 | 83.5 | 1409 / 1746
FlashInternImage-T | 30M | 224\u00b2 | 83.6 | 2316 / 3154 (+64% / +80%)
Swin-S | 50M | 224\u00b2 | 83.0 | 1167 / 2000
ConvNeXt-S | 50M | 224\u00b2 | 83.1 | 1645 / 2538
InternImage-S | 50M | 224\u00b2 | 84.2 | 1044 / 1321
FlashInternImage-S | 50M | 224\u00b2 | 84.4 | 1625 / 2396
Swin-B | 88M | 224\u00b2 | 83.5 | 934 / 1741
ConvNeXt-B | 89M | 224\u00b2 | 83.8 | 1241 / 1888
InternImage-B | 97M | 224\u00b2 | 84.9 | 779 / 1030
FlashInternImage-B | 97M | 224\u00b2 | 84.9 | 1174 / 1816 (+51% / +76%)
Swin-L | 197M | 384\u00b2 | 87.3 | 206 / 301
ConvNeXt-L | 198M | 384\u00b2 | 87.5 | 252 / 436
InternImage-L | 223M | 384\u00b2 | 87.7 | 158 / 214
ConvNeXt-XL | 350M | 384\u00b2 | 87.8 | 170 / 299
InternImage-XL | 335M | 384\u00b2 | 88.0 | 125 / 174
FlashInternImage-L | 223M | 384\u00b2 | 88.1 | 248 / 401 (+57% / +87%)
4.3. Downstream Tasks with High-Resolution Input
We further evaluate the performance of DCNv4 on representative downstream perception tasks with high-resolution input, including instance segmentation, semantic segmentation, and 3D object detection. We keep all implementation details the same as InternImage and only change the backbone model to FlashInternImage for a fair comparison. The backbone models are initialized from the ImageNet-pretrained weights when training the downstream models.
Instance Segmentation: We train FlashInternImage with two representative instance segmentation frameworks, Mask R-CNN [15] and Cascade Mask R-CNN [2], on the COCO dataset [23] with 1\u00d7 (12 epochs) and 3\u00d7 (36 epochs) training schedules. The results are shown in Tab. 5. We also report FPS with batch size 16 in the FP32/FP16 data formats. FlashInternImage shows superior results at all model scales and training schedules, achieving a better speed-accuracy trade-off. For example, FlashInternImage-T/S surpasses all other models of the same scale and is on par with the larger InternImage-S/B while being 80\u201390% faster.
Table 5. Object detection and instance segmentation performance on COCO val2017. APb and APm denote box AP and mask AP, respectively; \u201cMS\u201d means multi-scale training. FlashInternImage w/ DCNv4 models converge faster, clearly outperform other baselines with the 1\u00d7 training schedule, and still maintain a leading position when trained 3\u00d7 longer, while being significantly faster than InternImage.
Mask R-CNN:
Model | #param | FPS | APb (1\u00d7) | APm (1\u00d7) | APb (3\u00d7+MS) | APm (3\u00d7+MS)
Swin-T | 48M | 66 / 106 | 42.7 | 39.3 | 46.0 | 41.6
ConvNeXt-T | 48M | 78 / 113 | 44.2 | 40.1 | 46.2 | 41.7
InternImage-T | 49M | 54 / 69 | 47.2 | 42.5 | 49.1 | 43.7
FlashInternImage-T | 49M | 72 / 102 | 48.0 | 43.1 | 49.5 | 44.0
Swin-S | 69M | 45 / 77 | 44.8 | 40.9 | 48.2 | 43.2
ConvNeXt-S | 70M | 54 / 83 | 45.4 | 41.8 | 47.9 | 42.9
InternImage-S | 69M | 44 / 56 | 47.8 | 43.3 | 49.7 | 44.5
FlashInternImage-S | 69M | 57 / 83 | 49.2 | 44.0 | 50.5 | 44.9
Swin-B | 107M | 33 / 59 | 46.9 | 42.3 | 48.6 | 43.3
ConvNeXt-B | 108M | 43 / 70 | 47.0 | 42.7 | 48.5 | 43.5
InternImage-B | 115M | 33 / 43 | 48.8 | 44.0 | 50.3 | 44.8
FlashInternImage-B | 115M | 44 / 67 | 50.1 | 44.5 | 50.6 | 45.4
Cascade Mask R-CNN:
Model | #param | FPS | APb (1\u00d7) | APm (1\u00d7) | APb (3\u00d7+MS) | APm (3\u00d7+MS)
Swin-L | 253M | 20 / 26 | 51.8 | 44.9 | 53.9 | 46.7
ConvNeXt-L | 255M | 26 / 40 | 53.5 | 46.4 | 54.8 | 47.6
InternImage-L | 277M | 20 / 26 | 54.9 | 47.7 | 56.1 | 48.5
ConvNeXt-XL | 407M | 21 / 32 | 53.6 | 46.5 | 55.2 | 47.7
InternImage-XL | 387M | 16 / 23 | 55.3 | 48.1 | 56.2 | 48.8
FlashInternImage-L | 277M | 26 / 39 | 55.6 | 48.2 | 56.7 | 48.9
Semantic Segmentation: We train FlashInternImage with UperNet [41] on the ADE20K [46] dataset for 160K iterations. We can draw a similar conclusion as for instance segmentation from the results in Tab. 6, with FPS reported at batch size 16 in FP32/FP16. FlashInternImage w/ DCNv4 achieves significantly faster speed and further improves the performance of InternImage across different model scales, resulting in a new state-of-the-art.
3D Detection: We further test DCNv4 on the camera-based 3D object detection task in the autonomous driving scenario. We train BEVFormer v2 [43], a state-of-the-art multi-camera 3D object detector, with FlashInternImage-Small and -Base backbone models on the nuScenes dataset for 24 epochs. We report results on the nuScenes test set in Tab. 7 together with the FPS of each model. We note that the head components, such as the BEV encoder and object decoder in BEVFormer v2, are under-optimized and take more than 50% of the running time (even more with a faster backbone); thus, we also report the FPS of the backbone alone for a clearer illustration.
Our results show that, when only considering the backbone, FlashInternImage can be two or even three times faster than the InternImage backbone with on-par performance, greatly increasing model efficiency.
Table 6. Semantic segmentation performance on the ADE20K validation set. All models are trained with UperNet. \u201cSS\u201d and \u201cMS\u201d denote single-scale and multi-scale testing, respectively; FPS is reported with single-scale testing. FlashInternImage w/ DCNv4 achieves superior performance with competitive speed.
Model | crop size | #param | FPS | mIoU (SS) | mIoU (MS)
Swin-T | 512\u00b2 | 60M | 107 / 168 | 44.5 | 45.8
ConvNeXt-T | 512\u00b2 | 60M | 120 / 184 | 46.0 | 46.7
InternImage-T | 512\u00b2 | 59M | 100 / 139 | 47.9 | 48.1
FlashInternImage-T | 512\u00b2 | 59M | 119 / 206 | 49.3 | 50.3
Swin-S | 512\u00b2 | 81M | 89 / 142 | 47.6 | 49.5
ConvNeXt-S | 512\u00b2 | 82M | 107 / 164 | 48.7 | 49.6
InternImage-S | 512\u00b2 | 80M | 89 / 123 | 50.1 | 50.9
FlashInternImage-S | 512\u00b2 | 80M | 107 / 182 | 50.6 | 51.6
Swin-B | 512\u00b2 | 121M | 77 / 126 | 48.1 | 49.7
ConvNeXt-B | 512\u00b2 | 122M | 95 / 147 | 49.1 | 49.9
InternImage-B | 512\u00b2 | 128M | 77 / 104 | 50.8 | 51.3
FlashInternImage-B | 512\u00b2 | 128M | 94 / 157 | 52.0 | 52.6
Swin-L | 640\u00b2 | 234M | 59 / 99 | 52.1 | 53.5
ConvNeXt-L | 640\u00b2 | 235M | 73 / 117 | 53.2 | 53.7
InternImage-L | 640\u00b2 | 256M | 56 / 78 | 53.9 | 54.1
ConvNeXt-XL | 640\u00b2 | 391M | 53 / 75 | 53.6 | 54.0
InternImage-XL | 640\u00b2 | 368M | 47 / 67 | 55.0 | 55.3
FlashInternImage-L | 640\u00b2 | 256M | 71 / 122 | 55.6 | 56.0
Table 7. 3D detection performance of BEVFormer v2 on the nuScenes test set. We report the FPS of the backbone (denoted with \u2020); overall FPS with the under-optimized head implementation is added for reference. With on-par NDS and higher mAP, FlashInternImage can be 50\u201390% faster than the InternImage baselines, or 200\u2013300% faster when only considering the backbone.
Model | NDS | mAP | FPS\u2020 | FPS
InternImage-B | 62.0 | 54.0 | 8.0 | 2.7
InternImage-XL | 63.4 | 55.6 | 4.0 | 2.0
FlashInternImage-S | 61.7 | 55.5 | 16.8 | 4.1
FlashInternImage-B | 63.1 | 57.4 | 12.1 | 3.8
4.4. DCNv4 as a Universal Operator
Drop-in replacement in other vision backbones: We verify whether DCNv4 still works well in architectures designed around other operators, such as ConvNeXt and ViT. To achieve that, we replace the attention module in ViT and the depthwise convolution layer in ConvNeXt with DCNv4 and perform supervised learning on ImageNet-1K without changing any other architecture or hyperparameters, similar to FlashInternImage and InternImage. The results are shown in Tab. 8. We can see that on these architectures, which are carefully tuned for their specific operators, our DCNv4 performs equally well. Thanks to its fast speed, the new models even achieve better throughput, showcasing the superior performance of DCNv4.
Table 8. DCNv4 in other architectures. We show supervised learning results on ImageNet-1K and throughput. DCNv4 achieves higher throughput with comparable accuracy. \u2020 denotes testing with the advanced FlashAttention-2 implementation.
Model | IN-1K Acc | Throughput
ConvNeXt-B | 83.8 | 1241 / 1888
ConvNeXt-B + DCNv4 | 84.0 | 1495 / 2513 (+20% / +33%)
ViT-B | 81.8 | 1683 / 2781\u2020
ViT-B + DCNv4 | 81.9 | 2092 / 3261 (+24% / +17%)
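To illustrate what such a drop-in replacement looks like mechanically, here is a small module-surgery sketch of our own; the DCNv4 construction is stood in for by a hypothetical make_op factory (a plain depthwise convolution below) so that the snippet stays self-contained, and it is not the authors\u2019 conversion script.

import torch.nn as nn

def swap_depthwise(model, make_op):
    # Replace every depthwise Conv2d (groups == in_channels) with a new
    # operator of matching width; make_op(channels) builds the module.
    for name, child in model.named_children():
        if isinstance(child, nn.Conv2d) and child.groups == child.in_channels:
            setattr(model, name, make_op(child.in_channels))
        else:
            swap_depthwise(child, make_op)  # recurse into submodules
    return model

# Placeholder factory; a real experiment would return a DCNv4 module here.
make_op = lambda c: nn.Conv2d(c, c, 3, padding=1, groups=c)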
Drop-in replacement in diffusion models: DCN has been recognized as an effective operator for perception tasks. As generative models become a fundamental tool for AI-generated content (AIGC), we are also curious whether DCNv4 works well on generation tasks with diffusion-based generative models. Specifically, we choose the U-Net [30] used in Stable Diffusion [29] as our baseline and replace the attention modules and the regular 3 \u00d7 3 convolutions in the U-Net. We use U-ViT\u2019s codebase, follow its training schedule, and train a latent diffusion model on the image latents extracted from the image autoencoder provided by Stable Diffusion. We show the results in Tab. 9. We can see that DCNv4 also works well in generative modeling, achieving better results in terms of FID/throughput with fewer parameters compared to the regular convolution in U-Net. Note that the architecture/hyperparameters may not be optimal for DCNv4, and it is possible that re-designing the models or searching for new hyperparameters for DCNv4 would give better results.
Table 9. Class-conditional generation on ImageNet 256\u00d7256. Latent diffusion models are trained from scratch with U-Net; we replace the convolution layers in the model with DCNv4. DCNv4 achieves better FID without any hyperparameter tuning.
Model | #param | FID | FPS
U-Net | 860M | 2.94 | 4.82
U-Net + DCNv4 | 566M | 2.44 | 4.92
4.5. Ablation Study
We conduct ablation studies on the optimization choices described in Sec. 3.2; the results are shown in Tab. 10. The times in the table are obtained with a 56 \u00d7 56 \u00d7 128 input, batch size 64, and 4 groups (32 channels per group). We first remove the softmax operation and improve the micro design, i.e., we merge the two linear layers into one and remove the costly layer norm and GELU activation in the offset/aggregation-weight computation, simplifying the overall module and increasing the speed. We then start modifying the kernel implementation. First, we change the parallel execution pattern and let each thread process 8 channels instead of 1, so that unnecessary memory accesses for loading sampling offset and aggregation weight values from GPU memory can be saved. As expected, solely applying this change does not increase the speed, since the degree of parallelism decreases and each thread\u2019s workload increases 8 times; the latency increases instead. Eliminating redundant computation by reusing the bilinear interpolation coefficients (4th row) saves some time, but the gain is insignificant. Removing the redundant memory instructions via vectorized load/store greatly reduces the workload of each thread and largely accelerates the GPU kernel (5th row). Using a half-precision data type, which halves the number of bytes the kernel needs to read/write, further increases the data throughput, as shown in the 6th row. In the end, we reach the final DCNv4 design, which is three times more efficient than the original implementation.
Table 10. Ablation studies of DCN\u2019s runtime (ms), showing how to reach DCNv4 (last row) from the original DCNv3 implementation and how the different design choices affect the speed; each row adds one change on top of the previous one.
Implementation variant | Module | Kernel
Original DCNv3 | 3.28 | 1.45
+ micro design | 2.12 | 1.45
+ removing redundant memory access | 2.20 | 1.53
+ removing redundant computation | 2.18 | 1.51
+ removing redundant memory instr. | 1.28 | 0.606
+ half-precision format | 0.873 | 0.404"
},
{
"url": "http://arxiv.org/abs/2402.00918v1",
"title": "MUSTAN: Multi-scale Temporal Context as Attention for Robust Video Foreground Segmentation",
"abstract": "Video foreground segmentation (VFS) is an important computer vision task\nwherein one aims to segment the objects under motion from the background. Most\nof the current methods are image-based, i.e., rely only on spatial cues while\nignoring motion cues. Therefore, they tend to overfit the training data and\ndon't generalize well to out-of-domain (OOD) distribution. To solve the above\nproblem, prior works exploited several cues such as optical flow, background\nsubtraction mask, etc. However, having a video data with annotations like\noptical flow is a challenging task. In this paper, we utilize the temporal\ninformation and the spatial cues from the video data to improve OOD\nperformance. However, the challenge lies in how we model the temporal\ninformation given the video data in an interpretable way creates a very\nnoticeable difference. We therefore devise a strategy that integrates the\ntemporal context of the video in the development of VFS. Our approach give rise\nto deep learning architectures, namely MUSTAN1 and MUSTAN2 and they are based\non the idea of multi-scale temporal context as an attention, i.e., aids our\nmodels to learn better representations that are beneficial for VFS. Further, we\nintroduce a new video dataset, namely Indoor Surveillance Dataset (ISD) for\nVFS. It has multiple annotations on a frame level such as foreground binary\nmask, depth map, and instance semantic annotations. Therefore, ISD can benefit\nother computer vision tasks. We validate the efficacy of our architectures and\ncompare the performance with baselines. We demonstrate that proposed methods\nsignificantly outperform the benchmark methods on OOD. In addition, the\nperformance of MUSTAN2 is significantly improved on certain video categories on\nOOD data due to ISD.",
"authors": "Praveen Kumar Pokala, Jaya Sai Kiran Patibandla, Naveen Kumar Pandey, Balakrishna Reddy Pailla",
"published": "2024-02-01",
"updated": "2024-02-01",
"primary_cat": "cs.CV",
"cats": [
"cs.CV",
"cs.AI"
],
"label": "Original Paper",
"paper_cat": "Semantic AND Segmentation AND Image",
"gt": "Video foreground segmentation (VFS) is an important computer vision task\nwherein one aims to segment the objects under motion from the background. Most\nof the current methods are image-based, i.e., rely only on spatial cues while\nignoring motion cues. Therefore, they tend to overfit the training data and\ndon't generalize well to out-of-domain (OOD) distribution. To solve the above\nproblem, prior works exploited several cues such as optical flow, background\nsubtraction mask, etc. However, having a video data with annotations like\noptical flow is a challenging task. In this paper, we utilize the temporal\ninformation and the spatial cues from the video data to improve OOD\nperformance. However, the challenge lies in how we model the temporal\ninformation given the video data in an interpretable way creates a very\nnoticeable difference. We therefore devise a strategy that integrates the\ntemporal context of the video in the development of VFS. Our approach give rise\nto deep learning architectures, namely MUSTAN1 and MUSTAN2 and they are based\non the idea of multi-scale temporal context as an attention, i.e., aids our\nmodels to learn better representations that are beneficial for VFS. Further, we\nintroduce a new video dataset, namely Indoor Surveillance Dataset (ISD) for\nVFS. It has multiple annotations on a frame level such as foreground binary\nmask, depth map, and instance semantic annotations. Therefore, ISD can benefit\nother computer vision tasks. We validate the efficacy of our architectures and\ncompare the performance with baselines. We demonstrate that proposed methods\nsignificantly outperform the benchmark methods on OOD. In addition, the\nperformance of MUSTAN2 is significantly improved on certain video categories on\nOOD data due to ISD.",
"main_content": "Introduction Video Foreground Segmentation (VFS) [6, 10, 14, 20] aims to segmenting all the moving objects in the video frames, i.e., captured from stationary/nonstationary cameras, and outputs binary foreground segmentation masks. VFS is an important, but challenging task in many computer vision applications including but not limited to efficient video surveillance [32], motion estimation and anomaly detection [5], augmented reality [12], human tracking [17], traffic monitoring [14], and action recognition [42], etc. The most common deep learning technique [22, 23, 26] for solving VFS strategy rely on a frame or image as input in a video and does not utilize temporal/motion cues, which is crucial cue for motion-based foreground segmentation. Thus, image-based foreground segmentation techniques, which relies upon appearance features, lack the capability to handle complex scenarios in a general setting. For example, consider the traffic monitoring and video surveillance scenario wherein a robust VFS system needs to perform well on both training data distribution and out-ofdistribution data under various challenges, which includes illumination variation, occlusions, dynamic backgrounds like waving tree, rain, snow, air turbulence, camouflage regions, i.e., similarity between foreground pixels and background pixels, camera motions that include camera jittering, camera panning-tilting-zooming. The structure of this paper is organized as follows: A related literature is briefly reviewed in Section 2. We present the motivation to our work and contribution summary in Section 3. In Section 4, we discuss the steps involved in the data generation pipeline along with the description of the proposed dataset. In Section 5, we discuss the proposed deep learning architectures. Experimental results corresponding to proposed methods along with benchmarks presented in Section 6. Finally, conclusions are presented arXiv:2402.00918v1 [cs.CV] 1 Feb 2024 \fScene Composition Asset Placement Lighting Placement Camera Placement 3D Environment Assets EasySynth 3D environment with Assets Outputs Figure 1. ISD Pipeline that takes 3D environment along with assets like humans as input and outputs synthetic RGB image with various annotations such as binary foreground mask, normal map, depth map, and instance semantic map. Backgrounds Classroom Stadium RGB Image Foreground mask Depth map Semantic mask Normal map Stadium Conference room Corridor House Prison Figure 2. Visualization of sample backgrounds developed in ISD (top row). Visualization of Sample images from ISD with their annotations such as binary foreground mask, normal map, depth map, and instance semantic map (bottom two rows). Table 1. Dataset description: Indoor Surveillance Data (ISD). Background Environment #Lighting Conditions # Camera Angles # RGB Images # Foreground Masks # Depth Maps # Normal Maps Classroom 2 10 20140 20140 20140 20140 Shopping Mall 2 8 15984 15984 15984 15984 Prison 2 8 16000 16000 16000 16000 Conference Hall 2 10 20040 20040 20040 20040 Sport Stadium 2 12 24086 24086 24086 24086 Corridor 2 8 16112 16112 16112 16112 Metro 2 10 20040 20140 20040 20040 House 2 9 18036 18036 18036 18036 in Section 7. 2. Related Work A large body of literature, which can be categorized into classical approaches and data-driven approaches, is available on motion-based foreground segmentation. A comprehensive survey of classical computer approaches for motion-based foreground segmentation can be found in [6, 10, 14, 27]. 
In this section, we briefly review the most representative deep learning works related to moving object segmentation. Classical techniques [4, 6, 7, 10, 13, 14, 21, 27, 33, 34] for motion-based foreground segmentation have explored the idea of building a background model for a specific video sequence and identifying the foreground (i.e., moving) objects using thresholding methods. In the past two decades, parameterized probabilistic models, which operate at the pixel level, have been explored for background subtraction. However, these techniques are computationally expensive. Therefore, recent works on background subtraction draw on nonparametric techniques to overcome the limitations of parametric methods. Deep learning architectures have been widely explored for foreground segmentation due to their ability to learn high-level representations from data [1, 8, 11, 39]. Recent deep learning works [18, 19, 22, 23, 27\u201330] have been shown to be superior to traditional methods by a significant margin in computer vision tasks. Prior works on foreground segmentation explored deep network architectures such as convolutional neural networks (CNNs) [2, 9, 15, 40] and generative adversarial networks (GANs) [3] in a supervised learning framework. [9] reported the first work that explored deep learning for foreground segmentation. A patch-based technique, namely DeepBS [2], explored a CNN-based architecture whose training data includes both input frames and associated background images. However, such methods have drawbacks: they are computationally intensive, overfit owing to pixel redundancy, lose contextual information, and require many patches for training [28]. To alleviate the drawbacks associated with patch-based methods, [22, 23, 28] proposed to feed full-resolution images to the network to predict foreground masks. Cascade CNN [38] proposed a cascade structure that combines a basic CNN and multi-resolution CNNs, trained with specially chosen frames and their ground-truth foreground masks. [11, 30] leveraged spatio-temporal cues and followed a training strategy that combines image frames from the video with generated background models. BSUV-net [35] uses a CNN-based architecture for background subtraction on unseen videos; its input includes two background frames taken at different time points, their semantic segmentation masks, and the current frame. BSGAN [41] exploited a median filtering strategy to estimate the background, which is then utilized in training a Bayesian GAN. [29] proposed the UNet architecture for image segmentation and showed promising results for foreground segmentation, which is a special case of image segmentation. [26] introduced an attention mechanism to further improve the performance of UNet. Another state-of-the-art network for foreground segmentation is FgSegNet [22, 23], which belongs to the category of encoder-decoder networks; FgSegNet and its variants feed three different spatial resolutions of the image as input to the encoder and use transposed convolutions on the decoder side. [28] developed deep architectures, namely MU-Net1 and MU-Net2, for segmenting moving objects. In particular, MU-Net2 considers additional cues, such as a background subtraction mask and a flux mask, along with the current frame as input to improve foreground accuracy. 3. Motivation and Contribution In this section, we present the related works along with their limitations, which motivated this work.
Further, our contributions are outlined in brief. 3.1. Motivation Existing data-driven techniques [22, 23, 26, 28, 29, 41] for motion-based VFS are image-driven: they rely on appearance/spatial cues such as color and texture rather than the more relevant attributes such as motion and temporal cues. In particular, a motion-based video foreground segmentation framework that works purely on appearance cues suffers from poor performance on out-of-distribution (OOD) data, since appearance cues in the test data might differ from those in the training data. FgSegNet and its variants [22, 23] are considered the state-of-the-art for the problem at hand. However, these models are very large and their out-of-domain performance is poor [28]. To address this, [28] proposed an architecture, namely MU-Net2, but it requires extra annotations, such as a background subtraction mask and a flux mask, along with the RGB image as input. Obtaining those annotations during inference is challenging and expensive. A robust motion-based foreground segmentation method needs a temporal understanding of the video data to capture higher-level cues like motion along with the appearance cues. Efficient modelling of motion and temporal cues in VFS helps overcome the poor generalizability of image-based methods on OOD video data. In this work, we propose deep models that utilize temporal cues to improve the robustness of VFS without any additional annotations. There are several benchmark datasets for VFS, such as CDnet2014 [37], SBI2015 [24], BMC2012 [36], and UCSC [25]. Although these datasets include many background environments, they still do not cover all possible realistic and challenging backgrounds and foregrounds. Therefore, an annotated benchmark dataset that includes complex background environments and complex foregrounds, varying in the number of moving objects, camera angles, etc., helps improve the robustness of data-driven VFS. We therefore introduce a novel complex video dataset, which adds diversity and complexity to the existing datasets. 3.2. Our Contribution This work proposes two novel deep learning models that utilize temporal information and also introduces a new video dataset with multiple frame-level annotations to improve VFS performance on OOD data. In summary, the key contributions are given below:
Therefore, a viable solution is to generate a realistic, high-quality synthetic video dataset with annotations that includes complex and diverse environments. 4.1. Data Generation Pipeline A synthetic yet realistic video data generation pipeline is shown in Fig. 1; it is built using Unreal Engine 5 (UE5, https://www.unrealengine.com/en-US/unreal-engine-5) for the use case under consideration. UE5 is a popular platform for applications such as virtual or augmented reality, video games, and motion capture. In UE5, we utilize the EasySynth plugin (https://github.com/ydrive/EasySynth) to generate synthetic images and render the image sequences using the game engine rasterizer. Generation of a realistic synthetic dataset essentially begins with creating complex and realistic 3D environments and includes three major steps: 1. Environment Creation: Create 3D environments in UE5 by fetching 3D assets from platforms such as Sketchfab (https://sketchfab.com/) and the Unreal asset store (https://www.unrealengine.com/marketplace/en-US/store) to create complex and realistic environments. These environments serve as backgrounds for the generated images. 2. Scene Composition: Place 3D humans and objects in the 3D scene. Once the assets are in place, we position cameras at various places in the scene to render unique viewpoints under specific lighting conditions. 3. Rendering: Render the environment using the EasySynth plugin to generate RGB images with frame-level annotations such as binary foreground masks, depth, normal, and instance segmentation maps. The compute used for the above framework is a Windows 11 PC with an Nvidia RTX3050. 4.2. Dataset Description There are state-of-the-art benchmark datasets for foreground segmentation, such as CDnet2014 [37] and SBI2015 [24]. However, these datasets lack challenging backgrounds and foregrounds with various camera viewpoints for indoor surveillance. To address the above limitations, we propose a new synthetic video dataset, namely the Indoor Surveillance Dataset (ISD), for motion-based video foreground segmentation, bringing diversity and complexity on top of the existing benchmark datasets. A brief summary of ISD is given below: \u2022 ISD is composed of nearly 150,538 high-quality, realistic-looking RGB images with multiple annotations for moving objects, such as foreground binary masks, instance segmentation masks, normal maps, and depth maps (Fig. 2). \u2022 ISD has 8 backgrounds with 8 to 12 camera views for each background. Further, it has two lighting conditions, daylight and night light. A summary of the ISD dataset is provided in Table 1, and a few sample images from ISD are illustrated in Fig. 2 for visualization. ISD also benefits other computer vision tasks, such as depth map prediction and instance segmentation, along with VFS. 5. Proposed Methodology We propose novel deep learning architectures for robust motion-based video foreground segmentation. Our architectures make use of multi-scale temporal information along with spatial cues to boost foreground segmentation accuracy on OOD data. Our architectures, namely MUSTAN1 and MUSTAN2, belong to the class of encoder-decoder architectures but are trained with two variations in the input streams, along with custom attention derived from temporal embeddings. 5.1. MUSTAN1
MUSTAN1 (Fig. 3) consists of two encoders and one decoder, wherein one encoder, called the context network (CNet), extracts temporal embeddings at multiple scales, and the other, the feature network (FNet), extracts spatial cues from the current frame. Further, attention is derived from the multi-scale temporal embeddings to highlight salient portions of the current-frame representations at different scales before they are fed to the decoder. To ensure real-time inference, the encoders of our models, namely CNet and FNet, are developed based on the five blocks of ResNet18 [16], and the numbers of convolutional kernels in these blocks are 64, 128, 256, 512, and 1024, respectively. CNet and FNet differ only in the number of input channels: the input dimension of CNet is 3T \u00d7 320 \u00d7 480, where T is the number of frames, whereas the input dimension of FNet is 3 \u00d7 320 \u00d7 480. In the proposed architecture, we introduce two modules, namely the Feature Refinement Module (FRM) (Fig. 4) and the Refine Localization Information Module (RLIM) (Fig. 5), which are inspired by the attention mechanism introduced in [26]. FRM takes both the temporal context embedding and the current-frame embedding as input and then highlights the relevant portions of the current-frame embedding based on attention weights derived as shown in Fig. 4. These refined current-frame embeddings are utilized in the skip connections. RLIM, in turn, takes low-resolution and high-resolution embeddings of the current frame and refines the semantic information present in the high-resolution feature map. The decoder of our model has 4 blocks, where each block receives feature maps that are upsampled and concatenated with the embedding coming from the RLIM in the respective skip connection. The concatenated embedding is then fed to a 3 \u00d7 3 convolution layer followed by a ReLU activation. The output of the final feature extraction block of the decoder is fed to a 1 \u00d7 1 convolution layer, which decreases the number of feature maps, followed by a sigmoid activation layer that produces the class label probabilities. 5.2. MUSTAN2 We used the temporal embedding to refine the current feature embedding through FRM in MUSTAN1, which is one possible way of modelling the temporal information. We therefore explore another temporal modelling strategy based on mid-level fusion, which leads to a novel deep learning architecture, namely MUSTAN2. Fig. 6 illustrates the architecture of MUSTAN2, with multi-scale fusion of temporal information and feature enhancement through the RLIM module at various scales. MUSTAN2 consists of three encoders and one decoder. Each encoder of MUSTAN2 takes one frame within the temporal window as input and extracts the corresponding feature representation at multiple scales. The feature maps of the frames within the temporal window are then fed to a fusion block (FB) (Fig. 6) in the skip connection to decrease the number of feature maps and obtain a common embedding that captures temporal information. Further, RLIM takes the FB output and the low-resolution embedding of the current frame as input and outputs a refined high-resolution feature map that is fed to the decoder. The decoder of MUSTAN2 is similar to that of MUSTAN1 and outputs the class label probabilities.
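Based on the description above and the caption of Fig. 4, an FRM-style gating block might look like the following minimal PyTorch sketch; the exact channel widths and the ordering of BN/ReLU are our assumptions, not the authors\u2019 released code.

import torch.nn as nn

class FRM(nn.Module):
    # Sketch: gate the current-frame embedding with attention weights
    # derived from the temporal context embedding (cf. Fig. 4).
    def __init__(self, channels):
        super().__init__()
        self.ctx = nn.Sequential(nn.Conv2d(channels, channels, 1), nn.BatchNorm2d(channels))
        self.cur = nn.Sequential(nn.Conv2d(channels, channels, 1), nn.BatchNorm2d(channels))
        self.gate = nn.Sequential(nn.ReLU(inplace=True),
                                  nn.Conv2d(channels, channels, 1),
                                  nn.BatchNorm2d(channels),
                                  nn.Sigmoid())

    def forward(self, context_emb, current_emb):
        attn = self.gate(self.ctx(context_emb) + self.cur(current_emb))
        return current_emb * attn  # refined current-frame embedding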
6. Experimental Results and Analysis In this section, we describe the datasets and present the experimental settings along with the evaluation metrics. Further, we present qualitative and quantitative analyses of our deep learning architectures, namely MUSTAN1 and MUSTAN2, compared to the state-of-the-art. 6.0.1 Baselines We compare our methods with various state-of-the-art supervised deep learning methods, such as UNet [29], Attention UNet [26], Cascade CNN [38], BSPVGAN [41], BSGAN [41], FgSegNet [22], FgSegNet(S) [23], FgSegNet(v2) [23], MU-Net1 [28], and MU-Net2 [28]. The source code for FgSegNet (https://github.com/lim-anggun/) and for MU-Net1 and MU-Net2 (https://github.com/CIVA-Lab/Motion-U-Net) is available online. Our experimental settings are the same as recommended in [23, 28] for a fair evaluation. 6.0.2 Datasets We consider the proposed ISD dataset along with state-of-the-art datasets, such as the scene background initialization (SBI2015) dataset [24], which contains 14 labelled video sequences, and the change detection challenge (CDnet2014) dataset [37], to assess the out-of-domain (generalization) performance of the proposed models. CDnet2014 has 11 realistic and challenging categories, e.g., illumination change, shadow, dynamic background motion, and camera motion, wherein each category contains 4 to 6 video sequences. In total it has 53 video sequences, which include 160K frames and 118K labeled frames, with spatial resolutions varying from 320 \u00d7 240 to 720 \u00d7 576. 6.1. Evaluation Metrics We compute Precision (Pr), Recall (Re), Specificity (Sp), and F1 score to evaluate both the in-domain and out-of-domain performance of the proposed and benchmark methods:
Pr = TP / (TP + FP), Re = TP / (TP + FN), (1)
Sp = TN / (TN + FP), F1 = (2 \u00b7 Pr \u00b7 Re) / (Pr + Re), (2)
where TP stands for true positives, TN for true negatives, FP for false positives, and FN for false negatives.
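As a sanity check on Eqs. (1)\u2013(2), here is a small NumPy sketch (ours, not the authors\u2019 evaluation code) that computes the four metrics from a pair of binary masks:

import numpy as np

def vfs_metrics(pred, gt):
    # pred, gt: binary arrays of the same shape (1 = foreground pixel).
    tp = np.sum((pred == 1) & (gt == 1))
    tn = np.sum((pred == 0) & (gt == 0))
    fp = np.sum((pred == 1) & (gt == 0))
    fn = np.sum((pred == 0) & (gt == 1))
    pr = tp / (tp + fp)           # precision, Eq. (1)
    re = tp / (tp + fn)           # recall, Eq. (1)
    sp = tn / (tn + fp)           # specificity, Eq. (2)
    f1 = 2 * pr * re / (pr + re)  # F1 score, Eq. (2)
    return pr, re, sp, f1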
Figure 4. Feature Refinement Module (FRM). CONV stands for convolution layer, BN for batch normalization layer, + denotes element-wise addition, and \u00b7 denotes element-wise multiplication.
Figure 5. Refine Localization Information Module (RLIM). LRE stands for low-resolution embedding, HRE for high-resolution embedding.
6.2. Hyper-parameter Settings Encoders of the proposed networks are based on ResNet18 [16] and initialized with ImageNet-pretrained weights. The spatial resolution of the input RGB frames is set to 320 \u00d7 480. The optimizer is Adam with the learning rate initialized to 1e\u22124. The StepLR learning rate scheduler is used with step size 20 and gamma 0.1, i.e., the scheduler reduces the learning rate by the factor gamma every 20 epochs. The dataset is shuffled and then split in the ratio 90% : 10% for training and validation, respectively. Our models are trained for 40 epochs with a batch size of 8. The loss function used in training our networks is defined below:
L(y, \u0177, p) = \u03b8 \u00b7 Ltl + (1 \u2212 \u03b8) \u00b7 Lbce, (3)
where Ltl denotes the Tversky loss (TL) [31], Lbce refers to the binary cross-entropy (BCE) loss, and \u03b8, set to 0.5 in our experiments, is a weight parameter that determines the trade-off between Ltl and Lbce. TL (Eq. 4) results in a better trade-off between precision and recall:
Ltl(y, \u0177, \u03b1, \u03b2) = TP / (TP + \u03b1 \u00b7 FP + \u03b2 \u00b7 FN), (4)
where p denotes the network output probabilities, y is the binary ground-truth foreground mask, \u0177 \u2208 {0, 1} is the predicted binary foreground mask obtained by thresholding p, and \u03b1 and \u03b2 are weights associated with false positives and false negatives, with TP = |y \u00b7 \u0177|, FP = |(1 \u2212 y) \u00b7 \u0177|, and FN = |y \u00b7 (1 \u2212 \u0177)|, where |\u00b7| is the cardinality measure and \u00b7 denotes element-wise multiplication. The BCE loss Lbce is defined as
Lbce(y, p) = \u2212(y \u00b7 log(p) + (1 \u2212 y) \u00b7 log(1 \u2212 p)). (5)
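A minimal PyTorch sketch of this combined objective, reflecting our reading of Eqs. (3)\u2013(5): the Tversky term is computed on soft probabilities for differentiability and, following common practice, is minimized as one minus the Tversky index; the default alpha/beta values are assumptions.

import torch

def tversky_bce_loss(p, y, theta=0.5, alpha=0.5, beta=0.5, eps=1e-6):
    # p: predicted foreground probabilities in [0, 1]; y: binary ground truth.
    tp = (p * y).sum()
    fp = (p * (1 - y)).sum()
    fn = ((1 - p) * y).sum()
    tversky = tp / (tp + alpha * fp + beta * fn + eps)      # Eq. (4), soft form
    l_tl = 1.0 - tversky                                    # minimized variant
    l_bce = torch.nn.functional.binary_cross_entropy(p, y)  # Eq. (5)
    return theta * l_tl + (1 - theta) * l_bce               # Eq. (3)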
Figure 6. MUSTAN2 architecture. FB stands for Fusion Block, and RLIM denotes the Refine Localization Information Module.
Figure 7. Visual illustration of the in-domain performance of MUSTAN1 and MUSTAN2: proposed models trained on a subset of CD2014 and tested on a subset of CD2014. Each column is a particular video category available in CD2014.
Figure 8. Visual illustration of the OOD performance of MUSTAN1 and MUSTAN2: proposed models trained on CD2014 and tested on SBI2015. Each column is a particular video category available in SBI2015.
6.3. Quantitative and Qualitative Results We trained the proposed networks, namely MUSTAN1 and MUSTAN2, on the challenging subset of the CD2014 dataset. The training/testing splits in our experiments are the same as those recommended in [22, 23, 28] and are available online. The training data (\u224810K frames), a subset of CDnet2014, consists of 200 labeled frames from each video sequence present in the CDnet2014 dataset. To assess the in-domain performance of our methods along with the benchmarks, we considered the test set from CDnet2014. The test data is based on 25 to 50 frames per video sequence from the 11 categories present in CDnet2014; the images in the test split are not part of the train split. A few sample results of our methods are illustrated in Fig. 7. One can observe that our methods produce accurate foreground segmentation masks.
Further, the F1 score is computed on the test set for each video category of CDnet2014 and reported in Table 2. We can infer that the proposed architecture is superior to the state-of-the-art in terms of average F1 score. Table 3 reports the performance of all methods in terms of the evaluation metrics mentioned above. As can be seen, our methods outperform the state-of-the-art in terms of F1 score, except for FgSegNet. One important observation from Table 4 and Table 8 is that FgSegNet is a very big model and significantly underperforms on out-of-domain data, since it overfits the in-domain data.
Table 2. In-domain performance, measured in terms of F1 score, of MUSTAN1 (Ours) and MUSTAN2 (Ours) compared to baselines on the CD2014 dataset [37]. BW: badWeather, BL: baseline, CJ: cameraJit., DB: dynamicBg., IM: intermittent object motion, LFR: lowFrameR., NV: nightVid., SD: shadow, Ther.: thermal, Tur.: turbulence, Avg.: average.
Video | FTSG | MU-Net1 | MU-Net2 | MUSTAN1 | MUSTAN2
PTZ | 0.3241 | 0.7946 | 0.8185 | 0.8985 | 0.9354
BW | 0.8228 | 0.9319 | 0.9343 | 0.9553 | 0.9730
BL | 0.9330 | 0.9875 | 0.9900 | 0.9304 | 0.9537
CJ | 0.7513 | 0.9802 | 0.9824 | 0.9383 | 0.9572
DB | 0.8792 | 0.9836 | 0.9892 | 0.9233 | 0.9551
IM | 0.7891 | 0.9872 | 0.9894 | 0.9291 | 0.9646
LFR | 0.6259 | 0.7237 | 0.8706 | 0.8391 | 0.9113
NV | 0.5130 | 0.8575 | 0.8362 | 0.8954 | 0.9513
SD | 0.8535 | 0.9825 | 0.9845 | 0.9420 | 0.9670
Ther. | 0.7768 | 0.9825 | 0.9842 | 0.9248 | 0.9574
Tur. | 0.7127 | 0.8499 | 0.9272 | 0.9094 | 0.9395
Avg. | 0.7283 | 0.9147 | 0.9369 | 0.9168 | 0.9514
Table 3. Overall average performance, in terms of F1 score, Precision, and Recall, of MUSTAN1 and MUSTAN2 compared to baselines across all video categories of the CD2014 dataset [37].
Method | F1 Score | Precision | Recall
Cascade CNN [38] | 0.9209 | 0.8997 | 0.9506
BSGAN [41] | 0.9339 | 0.9232 | 0.9476
BSPVGAN [41] | 0.9472 | 0.9501 | 0.9544
FgSegNet [22] | 0.9770 | 0.9758 | 0.9836
FgSegNet(S) [23] | 0.9804 | 0.9751 | 0.9896
FgSegNet(v2) [23] | 0.9847 | 0.9823 | 0.9891
MUNet1 [28] | 0.9147 | 0.9414 | 0.9277
MUNet2 [28] | 0.9369 | 0.9407 | 0.9454
MUSTAN1 (Ours) | 0.9168 | 0.8659 | 0.9417
MUSTAN2 (Ours) | 0.9514 | 0.9156 | 0.9574
To assess the out-of-domain (OOD) performance of our methods along with the benchmarks, we trained the models on CDnet2014 and tested them on the 7 video categories of SBI2015. Detailed OOD performance (considering 7 videos of SBI2015 for testing) of our methods is reported in Table 5, Table 6, and Table 7. The OOD performance, in terms of average F1 score, of our methods and the competing methods is illustrated in Table 4. As can be seen, the proposed methods are superior to the benchmarks by clear margins on OOD data. Table 6 and Table 7 demonstrate that ISD yields an OOD performance gain."
},
{
"url": "http://arxiv.org/abs/2312.11829v1",
"title": "RadOcc: Learning Cross-Modality Occupancy Knowledge through Rendering Assisted Distillation",
"abstract": "3D occupancy prediction is an emerging task that aims to estimate the\noccupancy states and semantics of 3D scenes using multi-view images. However,\nimage-based scene perception encounters significant challenges in achieving\naccurate prediction due to the absence of geometric priors. In this paper, we\naddress this issue by exploring cross-modal knowledge distillation in this\ntask, i.e., we leverage a stronger multi-modal model to guide the visual model\nduring training. In practice, we observe that directly applying features or\nlogits alignment, proposed and widely used in bird's-eyeview (BEV) perception,\ndoes not yield satisfactory results. To overcome this problem, we introduce\nRadOcc, a Rendering assisted distillation paradigm for 3D Occupancy prediction.\nBy employing differentiable volume rendering, we generate depth and semantic\nmaps in perspective views and propose two novel consistency criteria between\nthe rendered outputs of teacher and student models. Specifically, the depth\nconsistency loss aligns the termination distributions of the rendered rays,\nwhile the semantic consistency loss mimics the intra-segment similarity guided\nby vision foundation models (VLMs). Experimental results on the nuScenes\ndataset demonstrate the effectiveness of our proposed method in improving\nvarious 3D occupancy prediction approaches, e.g., our proposed methodology\nenhances our baseline by 2.2% in the metric of mIoU and achieves 50% in Occ3D\nbenchmark.",
"authors": "Haiming Zhang, Xu Yan, Dongfeng Bai, Jiantao Gao, Pan Wang, Bingbing Liu, Shuguang Cui, Zhen Li",
"published": "2023-12-19",
"updated": "2023-12-19",
"primary_cat": "cs.CV",
"cats": [
"cs.CV"
],
"label": "Original Paper",
"paper_cat": "Semantic AND Segmentation AND Image",
"gt": "3D occupancy prediction is an emerging task that aims to estimate the\noccupancy states and semantics of 3D scenes using multi-view images. However,\nimage-based scene perception encounters significant challenges in achieving\naccurate prediction due to the absence of geometric priors. In this paper, we\naddress this issue by exploring cross-modal knowledge distillation in this\ntask, i.e., we leverage a stronger multi-modal model to guide the visual model\nduring training. In practice, we observe that directly applying features or\nlogits alignment, proposed and widely used in bird's-eyeview (BEV) perception,\ndoes not yield satisfactory results. To overcome this problem, we introduce\nRadOcc, a Rendering assisted distillation paradigm for 3D Occupancy prediction.\nBy employing differentiable volume rendering, we generate depth and semantic\nmaps in perspective views and propose two novel consistency criteria between\nthe rendered outputs of teacher and student models. Specifically, the depth\nconsistency loss aligns the termination distributions of the rendered rays,\nwhile the semantic consistency loss mimics the intra-segment similarity guided\nby vision foundation models (VLMs). Experimental results on the nuScenes\ndataset demonstrate the effectiveness of our proposed method in improving\nvarious 3D occupancy prediction approaches, e.g., our proposed methodology\nenhances our baseline by 2.2% in the metric of mIoU and achieves 50% in Occ3D\nbenchmark.",
"main_content": "Introduction 3D occupancy prediction (3D-OP) is a crucial task within the field of 3D scene understanding, which has garnered considerable attention, particularly in the field of autonomous driving (Wang et al. 2023b; Tong et al. 2023; Tian et al. 2023). In contrast to other 3D perception tasks, such as object detection using bounding box representations, 3DOP involves the simultaneous estimation of both the occupancy state and semantics in the 3D space using multi-view images (Tian et al. 2023). This is achieved by leveraging geometry-aware cubes to represent a wide range of objects and background shapes. In the realm of 3D occupancy prediction, remarkable advancements have been achieved thus far. These advance*Work done during an internship at Huawei Noah\u2019s Ark Lab. \u2020Corresponding authors: Xu Yan and Zhen Li. Copyright \u00a9 2024, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. or Camera-based Student (a) Voxel / BEV / logits Distillation or voxel features BEV features (b) Rendering Assisted Distillation logits logits Multi-modal Teacher voxel feature Camera-based Student Multi-modal Teacher Predict and Volume Rendering voxel feature depths depths semantics semantics Figure 1: Rendering Assisted Distillation. (a) Existing methods conduct alignment on features or logits. (b) Our proposed RadOcc method constrains the rendered depth maps and semantics simultaneously. ments have been made possible by adopting a pipeline inspired by Bird\u2019s Eye View (BEV) perception, which utilizes either forward projection (Huang et al. 2021) or backward projection (Li et al. 2022b) techniques for view transformation. This process generates 3D volume features that capture the spatial information of the scene, which are then fed into the prediction head for occupancy predictions. However, relying solely on camera modality poses challenges in accurate prediction due to the lack of geometric perception. To overcome this bottleneck, two mainstream solutions have emerged in the field of BEV perception: 1) integrating geometric-aware LiDAR input and fusing the complementary information of the two modalities (Liu et al. 2023), and 2) conducting knowledge distillation to transfer the complementary knowledge from other modalities to a singlemodality model (Zhou et al. 2023a). As the first solution introduces additional network designs and computational overhead, recent works have increasingly focused on the second solution, aiming to develop stronger single-modal models through distilling multi-modal knowledge. In this paper, we present the first investigation into crossmodal knowledge distillation for the task of 3D occupancy arXiv:2312.11829v1 [cs.CV] 19 Dec 2023 \fprediction. Building upon existing methods in the field of BEV perception that leverage BEV or logits consistency for knowledge transfer, we extend these distillation techniques to aligning voxel features and voxel logits in the task of 3D occupancy prediction, as depicted in Figure 1(a). However, our preliminary experiments reveal that these alignment techniques face significant challenges in achieving satisfactory results in the task of 3D-OP, particularly the former approach introduces negative transfer. This challenge may stem from the fundamental disparity between 3D object detection and occupancy prediction, where the latter is a more fine-grained perception task that requires capturing geometric details as well as background objects. 
To address the aforementioned challenges, we propose RadOcc, a novel approach that leverages differentiable volume rendering for cross-modal knowledge distillation. The key idea of RadOcc is conducting alignment between rendered results generated by teacher and student models, as Figure 1(b). Specifically, we employ volume rendering (Mildenhall et al. 2021) on voxel features using the camera\u2019s intrinsic and extrinsic parameters, which enables us to obtain corresponding depth maps and semantic maps from different viewpoints. To achieve better alignment between the rendered outputs, we introduce the novel Rendered Depth Consistency (RDC) and Rendered Semantic Consistency (RSC) losses. On the one hand, the RDC loss enforces consistency of ray distribution, which enables the student model to capture the underlying structure of the data. On the other hand, the RSC loss capitalizes on the strengths of vision foundation models (Kirillov et al. 2023), and leverages pre-extracted segments to conduct an affinity distillation. This criterion allows the model to learn and compare semantic representations of different image regions, enhancing its ability to capture fine-grained details. By combining the above constraints, our proposed method effectively harnesses the cross-modal knowledge distillation, leading to improved performance and better optimization for the student model. We demonstrate the effectiveness of our approach on both dense and sparse occupancy prediction and achieve state-of-the-art results on both tasks. In summary, our main contributions are threefold: \u2022 We propose a rendering assisted distillation paradigm for 3D occupancy prediction, named RadOcc. Our paper is the first to explore cross-modality knowledge distillation in 3D-OP and provides valuable insights into the application of existing BEV distillation techniques for this task. \u2022 Two novel distillation constraints, i.e., rendered depth and semantic consistency (RDC & RSC), are proposed, which effectively enhance the knowledge transfer process through aligning ray distribution and affinity matrices guided by vision foundation models. \u2022 Equipped with the proposed methodology, RadOcc achieves state-of-the-art performance on the Occ3D and nuScenes benchmarks for dense and sparse occupancy prediction. Furthermore, we verify that our proposed distillation approach can effectively boost the performance of several baseline models. Related Work Camera-based 3D Perception Camera-based 3D perception has emerged as a significant research focus in the field of autonomous driving, owing to its cost-effectiveness and rich visual attributes. Recent advancements have aimed to integrate multiple tasks into a unified framework by transforming image-based features into a Bird\u2019s Eye View (BEV) space. One mainstream follows the forward projection paradigm proposed in LSS (Philion and Fidler 2020), where multi-view image features are projected onto the BEV plane through predicted depth maps (Huang et al. 2021; Li et al. 2023, 2022a). Another mainstream (i.e., backward projection) draws inspiration from DETR3D (Wang et al. 2022b), which involves using learnable queries and a cross-attention mechanism to extract information from image features (Li et al. 2022b; Lu et al. 2022; Jiang et al. 2023). Although these methods effectively compress information onto the BEV plane, they may lose some of the essential structural details inherent in 3D space. 
Introducing LiDAR priors through cross-modal knowledge distillation therefore makes such camera-based models better at understanding the structure of 3D scenes while keeping their efficiency.

3D Occupancy Prediction
The field of 3D occupancy prediction (3D-OP) has garnered significant attention in recent years, with the aim of reconstructing the 3D volumetric scene structure from multi-view images. This area can be broadly classified into two categories based on the type of supervision: sparse prediction and dense prediction. On the one hand, sparse prediction methods utilize LiDAR points as supervision and are evaluated on LiDAR semantic segmentation benchmarks. For instance, TPVFormer (Huang et al. 2023) proposes a tri-perspective view method for predicting 3D occupancy, while PanoOcc (Wang et al. 2023b) unifies the task of occupancy prediction with panoptic segmentation in a coarse-to-fine scheme. On the other hand, dense prediction methods are more akin to the Semantic Scene Completion (SSC) task (Song et al. 2017; Yan et al. 2021), with the core difference being whether to consider the areas that the camera cannot capture. Recently, several studies have focused on the task of dense occupancy prediction and, during the same period, introduced new benchmarks built on the nuScenes dataset (Caesar et al. 2020), such as OpenOccupancy (Wang et al. 2023a), OpenOcc (Tong et al. 2023), SurroundOcc (Wei et al. 2023) and Occ3D (Tian et al. 2023). These works mainly adopt the architecture from BEV perception and use 3D convolutions to construct an extra head for occupancy prediction. We note that concurrent work (Gan et al. 2023) also utilizes the volume rendering technique; however, it naively applies the rendered results as auxiliary supervision. In contrast, we are the first to investigate cross-modal knowledge distillation in this field, and our proposed method can be integrated into arbitrary previous works.

Cross-Modal Knowledge Distillation
Knowledge distillation has been a popular technique in the field of computer vision since its introduction in (Hinton, Vinyals, and Dean 2015). This technique initially involves compressing a large network (teacher) into a more compact and efficient one (student), while simultaneously improving the performance of the student. Over the years, the effectiveness of knowledge distillation has led to its widespread exploration in various computer vision tasks, including object detection (Dai et al. 2021; Guo et al. 2021; Zhang and Ma 2020), semantic segmentation (Hou et al. 2020; Liu et al. 2019) and other tasks (Yan et al. 2022b; Zhao et al. 2023; Yuan et al. 2022; Zhou et al. 2023b).
Recently, knowledge distillation has been introduced into 3D perception tasks for knowledge transfer between models using different modalities. For instance, (Chong et al. 2022) transfers the depth knowledge of LiDAR points to a camera-based student detector by training another camera-based teacher on LiDAR projected to the perspective view. 2DPASS (Yan et al. 2022a) utilizes multi-scale fusion-to-single knowledge distillation to enhance the LiDAR model with image priors. In the field of BEV perception, CMKD (Hong, Dai, and Ding 2022), BEVDistill (Chen et al. 2022) and UniDistill (Zhou et al. 2023a) perform cross-modality distillation in the BEV space. Specifically, these methods transfer prior knowledge through distillation at the feature, relation, and output levels. Although these efforts have greatly enhanced the performance of student models, they cannot achieve satisfactory performance gains when directly applied to the task of 3D occupancy prediction.

Methodology
Problem Setup
3D occupancy prediction leverages multi-view images as input to predict a semantic volume surrounding the ego-vehicle. Specifically, it takes into account the current multi-view images, denoted as $\mathcal{I}^t = \{I^t_1, \dots, I^t_n\}$, as well as the previous frames $\mathcal{I}^{t-1}, \dots, \mathcal{I}^{t-k}$, where $k$ represents the number of history frames and $n$ denotes the camera view index. By incorporating this temporal information, the model finally predicts the semantic voxel volume $Y^t \in \{w_1, \dots, w_{C+1}\}^{H \times W \times Z}$ for the current frame. Here, $C + 1$ covers $C$ semantic classes plus an occupancy state of the scene, while $w_{(\cdot)}$ represents the voxel grid.

Distillation Architecture
Framework overview. The overall architecture is illustrated in Figure 2 and consists of teacher and student networks. The teacher network takes both LiDAR and multi-view images as input, while the student network solely utilizes multi-view images. Both branches are supervised by the ground-truth occupancy, and the distillation constraints are applied between the 3D occupancy predictions and the rendered results.

Figure 2: Overall framework of RadOcc. It adopts a teacher-student architecture, where the teacher network is a multi-modal model while the student network only takes camera inputs. The predictions of the two networks are utilized to generate rendered depths and semantics through differentiable volume rendering. The newly proposed rendered depth and semantic consistency losses are adopted between the rendered results.

Camera-based student. Our student network takes multi-frame multi-view images as input and first extracts features using an image backbone. To leverage the benefits of Bird's Eye View (BEV) perception, we apply pixel-wise depth estimation on the image features and then project them from the perspective view into a 3D volume via the view-transform operation proposed in (Huang et al. 2021), forming a low-level volume feature. Moreover, to introduce temporal information into our model, we adopt the technique proposed in (Li et al. 2022a), which dynamically warps and fuses the historical volume features and produces a fused feature. To obtain more fine-grained predicted shapes, the volume feature is fed into an occupancy decoder to generate the prediction.

Multi-modal teacher. Inspired by the LiDAR-based detectors presented in (Shi et al. 2020), the unstructured point clouds are scattered into pillars (Lang et al. 2019). Subsequently, the volume features are extracted by SECOND and SECOND-FPN (Yan, Mao, and Li 2018). Building upon the success of LiDAR-camera-based BEV detectors, as presented in (Liu et al. 2023), we further concatenate the features from the two modalities and process the result with a fully convolutional network to produce the fused features. Finally, a similar occupancy decoder is applied to the fused feature, resulting in the prediction of occupancy.
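Taken together, the pipeline can be summarized in a few lines. The PyTorch-style sketch below is our own reading of it under stated assumptions: the module names (teacher, student), the hypothetical render_views helper, the ignore index, and the loss weights are not from the paper, and rdc_loss / rsc_loss stand for the consistency losses defined in the following subsections.

```python
import torch
import torch.nn.functional as F

def radocc_training_step(student, teacher, batch, w_rdc=1.0, w_rsc=1.0):
    '''One distillation step (a sketch, not the released implementation).
    The multi-modal teacher is kept fixed; the student is camera-only.'''
    with torch.no_grad():
        occ_t = teacher(batch['images'], batch['lidar'])  # (B, C+1, H, W, Z) voxel logits

    occ_s = student(batch['images'])                      # (B, C+1, H, W, Z) voxel logits

    # Ground-truth occupancy supervision on the student branch.
    loss_sup = F.cross_entropy(occ_s, batch['gt_occ'], ignore_index=255)

    # Differentiable volume rendering of both predictions (Eqns. 1-3 below):
    # per-view semantic maps plus per-ray termination distributions.
    _, sem_t, rays_t = render_views(occ_t, batch['intrinsics'], batch['extrinsics'])
    _, sem_s, rays_s = render_views(occ_s, batch['intrinsics'], batch['extrinsics'])

    # Rendered depth / semantic consistency (Eqns. 5 and 8 below).
    loss = loss_sup + w_rdc * rdc_loss(rays_t, rays_s) \
                    + w_rsc * rsc_loss(sem_t, sem_s, batch['segments'])
    return loss
```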
Rendering Assisted Distillation
Volume rendering. In this paper, we adopt the volume rendering technique proposed in NeRF (Mildenhall et al. 2021) to obtain depth and semantic maps for knowledge distillation. By incorporating the camera intrinsic and extrinsic parameters, we are able to compute the corresponding 3D ray for each pixel in the 2D image. After that, we employ the volume rendering technique to perform a weighted sum over the sampled points along the ray, thereby calculating the predicted depths and semantics in perspective views. Given $N_p$ sampled points $\{p_i = (x_i, y_i, z_i)\}_{i=1}^{N_p}$ along the ray through pixel $(u, v)$, the rendered depth $\hat{d}$ and semantic logits $\hat{s}$ at this pixel can be calculated via

$T_i = \exp\big(-\sum_{j=1}^{i-1} \sigma(p_j)\,\delta_j\big)$, (1)

$\hat{d}(u, v) = \sum_{i=1}^{N_p} T_i \big(1 - \exp(-\sigma(p_i)\,\delta_i)\big)\, d(p_i)$, (2)

$\hat{s}(u, v) = \sum_{i=1}^{N_p} T_i \big(1 - \exp(-\sigma(p_i)\,\delta_i)\big)\, s(p_i)$, (3)

where $d(\cdot)$, $\sigma(\cdot)$ and $s(\cdot)$ are the distance, volume density and semantics of the sampled point, respectively. Since the occupancy network predicts the occupancy probability and semantics, we can easily obtain $\sigma(p_i)$ and $s(p_i)$ by scattering the voxel predictions onto the corresponding sampled point $p_i$. Moreover, $\delta_i = d(p_{i+1}) - d(p_i)$ is the distance between two adjacent sampled points. Finally, we obtain the depth and semantic maps in the $i$-th perspective view by collecting the results from all pixels, i.e., $S_i = \{\hat{s}(u, v) \mid u \in [1, H], v \in [1, W]\}$ and $D_i = \{\hat{d}(u, v) \mid u \in [1, H], v \in [1, W]\}$, where $(H, W)$ is the size of the view image. To facilitate the definition, we respectively denote the rendered depth and semantic results from the teacher and student as $D^{T/S} = \{D^{T/S}_1, \dots, D^{T/S}_n\}$ and $S^{T/S} = \{S^{T/S}_1, \dots, S^{T/S}_n\}$, where $n$ is the number of views.
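As a concrete reference for Eqns. (1)-(3), a minimal per-ray rendering step could look as follows (a sketch assuming the densities, semantic logits and sample distances have already been gathered from the predicted voxel volume; the variable names are ours):

```python
import torch

def render_ray(sigma, sem, dists):
    '''Render depth and semantics for a single ray (Eqns. 1-3).
    sigma: (Np,)   volume densities at the sampled points.
    sem:   (Np, C) semantic logits at the sampled points.
    dists: (Np,)   distances d(p_i) of the samples from the camera.'''
    delta = dists[1:] - dists[:-1]               # delta_i = d(p_{i+1}) - d(p_i)
    delta = torch.cat([delta, delta[-1:]])       # pad the last interval
    alpha = 1.0 - torch.exp(-sigma * delta)      # per-sample opacity
    # Transmittance T_i = exp(-sum_{j<i} sigma(p_j) * delta_j)   -- Eqn. (1)
    trans = torch.exp(-torch.cumsum(sigma * delta, dim=0))
    trans = torch.cat([torch.ones_like(trans[:1]), trans[:-1]])  # shift by one
    weights = trans * alpha                      # ray termination distribution
    depth = (weights * dists).sum()              # Eqn. (2)
    semantics = (weights[:, None] * sem).sum(0)  # Eqn. (3)
    return depth, semantics, weights
```

Returning the per-sample weights is deliberate: the same ray termination distribution is reused by the RDC loss below.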
Rendered depth consistency. After acquiring the rendered depth, a simplistic approach involves directly imposing constraints between the outputs of the teacher and student models. However, this is a hard constraint, and the differences in rendered depths between the teacher and student models are typically within a narrow range. To address this issue, we propose an innovative approach that aligns the ray termination distributions during the volume rendering process. As shown in Figure 3, we plot the ray distribution over the distance traveled by the ray. Although the rendered depths of the two models are quite similar, their ray distributions show a great discrepancy. When a ray traverses a single object (the red point), we find that the ray termination distribution of the teacher model is typically unimodal, while that of the student exhibits multiple peaks. Aligning this distribution encourages the student model to predict a latent distribution similar to that of the teacher model.

Figure 3: The analysis of rendered depths. Although the rendered depths of the teacher (T) and student (S) are similar, especially for the foreground objects, their ray termination distributions show a great disparity.

Finally, the rendered depth consistency (RDC) loss $\mathcal{L}_{rdc}$ is formulated as

$R^{(\cdot)}_{(u,v)} = \{T_i\,(1 - \exp(-\sigma(p_i)\,\delta_i))\}_{i=1}^{N_p}$, (4)

$\mathcal{L}_{rdc} = \frac{1}{HW} \sum_{u=1}^{H} \sum_{v=1}^{W} D_{KL}\big(R^{teacher}_{(u,v)} \,\|\, R^{student}_{(u,v)}\big)$. (5)

Here, $T_i$ is calculated as in Eqn. (1). The notations $R^{teacher}$ and $R^{student}$ respectively denote the ray distributions of the teacher and student networks, which are aligned through the KL divergence $D_{KL}(\cdot\|\cdot)$.
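A minimal sketch of Eqns. (4)-(5), reusing the weights from the rendering sketch above; note that renormalizing the weights into proper probability distributions before the KL term is our assumption, not a detail stated in the text:

```python
import torch

def rdc_loss(weights_t, weights_s, eps=1e-8):
    '''Rendered Depth Consistency (Eqns. 4-5).
    weights_t / weights_s: (num_rays, Np) ray termination distributions
    of the teacher and student, one row per pixel (u, v).'''
    p_t = weights_t / (weights_t.sum(-1, keepdim=True) + eps)
    p_s = weights_s / (weights_s.sum(-1, keepdim=True) + eps)
    # KL(teacher || student) per ray, averaged over all H*W rays.
    kl = (p_t * ((p_t + eps).log() - (p_s + eps).log())).sum(-1)
    return kl.mean()
```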
Rendered semantic consistency. Besides simply using the KL divergence to align the semantic logits, we also leverage the strengths of vision foundation models (VFMs) (Kirillov et al. 2023) to perform a segment-guided affinity distillation (SAD). Specifically, we first employ the VFM to over-segment the original view images into patches, as illustrated in Figure 4.

Figure 4: The generation of the affinity matrix. We first adopt a visual foundation model (VFM), i.e., SAM, to extract segments from the original image. After that, we conduct segment grouping on the rendered semantic features within each segment, obtaining the affinity matrix.

With the rendered semantic features from both the teacher and student networks, i.e., $S^T, S^S \in \mathbb{R}^{H \times W \times C}$, we can divide the rendered semantics into several groups based on the indices of the aforementioned patches. After that, an average pooling function is applied within each group, extracting multiple teacher and student semantic embeddings, i.e., $E^T \in \mathbb{R}^{M \times C}$ and $E^S \in \mathbb{R}^{M \times C}$. Here, $M$ is the number of patches generated by the VFM. Inspired by, but different from, the previous work (Hou et al. 2022), we calculate an affinity matrix $C_{(\cdot)}$ according to the above segments for the further distillation:

$C_{i,j,r} = \frac{\langle E(i, r),\, E(j, r) \rangle}{\|E(i)\|_2\, \|E(j)\|_2}$. (6)

The affinity score captures the similarity between the semantic embeddings of each pair of segments, and it can be taken as high-level structural knowledge to be learned by the student. After that, the final RSC loss is a linear combination of the affinity distillation loss and the KL divergence between the rendered semantics:

$\mathcal{L}_{sad} = \sum_{r=1}^{C} \sum_{i=1}^{M} \sum_{j=1}^{M} \|C^T_{i,j,r} - C^S_{i,j,r}\|_2^2$, (7)

$\mathcal{L}_{rsc} = \mathcal{L}_{sad} / (CM^2) + \omega\, D_{KL}(S^T \,\|\, S^S)$, (8)

where $C^T$ and $C^S$ are the affinity matrices of the teacher and student networks, and $\omega$ is a hyperparameter in our experiment.
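The sketch below illustrates the segment-guided affinity term of Eqns. (6)-(7). It assumes the SAM segments have already been rasterized into one integer id per pixel; the pooling and normalization details are our reading of the equations, not the authors' released code:

```python
import torch

def sad_loss(sem_t, sem_s, seg_ids, num_segments):
    '''Segment-guided affinity distillation (Eqns. 6-7).
    sem_t / sem_s: (H*W, C) rendered semantic features of teacher / student.
    seg_ids:       (H*W,)   segment index in [0, num_segments) per pixel.'''
    def pool(sem):
        # Average-pool features inside each segment -> (M, C) embeddings E.
        emb = torch.zeros(num_segments, sem.shape[1], device=sem.device)
        cnt = torch.zeros(num_segments, 1, device=sem.device)
        emb.index_add_(0, seg_ids, sem)
        cnt.index_add_(0, seg_ids, torch.ones_like(sem[:, :1]))
        return emb / cnt.clamp(min=1)

    def affinity(emb):
        # C_{i,j,r} = E(i,r) E(j,r) / (||E(i)||_2 ||E(j)||_2)   -- Eqn. (6)
        norm = emb.norm(dim=-1, keepdim=True).clamp(min=1e-8)   # (M, 1)
        return (emb[:, None, :] * emb[None, :, :]) / (norm[:, None] * norm[None, :])

    a_t, a_s = affinity(pool(sem_t)), affinity(pool(sem_s))     # (M, M, C)
    return ((a_t - a_s) ** 2).mean()                            # Eqn. (7) / (C M^2)
```

The full RSC loss of Eqn. (8) then adds the omega-weighted KL divergence between the rendered semantic logits to this term.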
Experiments
Dataset and Metric
Dataset. We evaluate our proposed method on nuScenes (Caesar et al. 2020) for sparse prediction and Occ3D (Tian et al. 2023) for dense prediction. The data descriptions are provided in the supplementary material.

Evaluation metrics. Our study presents an independent evaluation of the model's performance on both dense and sparse prediction tasks. Specifically, for dense prediction, we conduct experiments on the Occ3D dataset, which quantifies the mean Intersection over Union (mIoU) for 17 semantic categories within the camera's visible region. On the other hand, for sparse prediction, we train the model with single-sweep LiDAR and assess the model's performance on the nuScenes-lidarseg benchmark, which measures the mIoU for 16 semantic categories, with the 'others' category being treated as 'ignored'.
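For reference, mIoU over semantic voxel grids can be computed as in the following generic sketch (masking of voxels outside the camera's visible region is benchmark-specific and is only indicated here by an ignore index):

```python
import torch

def miou(pred, gt, num_classes, ignore_index=255):
    '''Mean IoU between predicted and ground-truth semantic voxel grids.
    pred / gt: integer tensors of shape (H, W, Z).'''
    valid = gt != ignore_index
    ious = []
    for c in range(num_classes):
        p = (pred == c) & valid
        g = (gt == c) & valid
        union = (p | g).sum()
        if union > 0:                  # skip classes absent from both grids
            ious.append(((p & g).sum().float() / union).item())
    return sum(ious) / max(len(ious), 1)
```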
Experimental Settings
Implementation. For dense prediction, we follow the setting of BEVDet (Huang et al. 2021) and use the Swin Transformer (Liu et al. 2021) as the image backbone. We adopt the semantic scene completion module proposed in (Yan et al. 2021) as our occupancy decoder, which contains several 3D convolutional blocks to learn a local geometry representation. Afterward, the features from different blocks are concatenated to aggregate information. Finally, a linear projection is utilized to map the features into $C + 1$ dimensions. Given the challenging nature of the Occ3D test benchmark, we utilize 8 historical frames for temporal encoding and use 3 frames on the validation set. For sparse prediction, we use the prior art TPVFormer (Huang et al. 2023) as our baseline. The rendered size of the network is configured to 384 × 704. To speed up the rendering and reduce memory usage, we randomly sample 80,000 rays during each step.

Results and Analysis
Dense Prediction. To evaluate the performance of dense 3D occupancy prediction, we compare our proposed method with current state-of-the-art approaches on the Occ3D dataset (Tian et al. 2023), including the validation set and the online benchmark. The upper part of Table 1 presents the validation set results, where all methods are trained for 24 epochs. Specifically, we compare our approach with MonoScene (Cao and de Charette 2022), BEVFormer (Li et al. 2022b), CTF-Occ (Tian et al. 2023) and PanoOcc (Wang et al. 2023b), which all employ the ResNet101-DCN (Dai et al. 2017) backbone initialized from the FCOS3D (Wang et al. 2021) checkpoint. Additionally, we report the results of BEVDet (Huang et al. 2021), which uses the same image backbone as ours. Our baseline model, trained from scratch, already outperforms prior state-of-the-art methods. However, by leveraging our proposed distillation strategy, we achieve significantly better occupancy results in terms of mIoU. The lower part of Table 1 presents the results on the 3D occupancy prediction challenge, where our proposed method achieves state-of-the-art performance and outperforms all previously published approaches by a large margin. Note that although PanoOcc (Wang et al. 2023b) adopts a stronger image backbone, i.e., InternImage-XL (Wang et al. 2022a), its results are still lower than ours, especially for foreground objects of a challenging nature. The visualization results for both dense and sparse prediction are shown in Figure 5. More visualization results can be found in the supplementary material.

Performances on Validation Set:
Method | Image Backbone | mIoU | others | barrier | bicycle | bus | car | const. veh. | motorcycle | pedestrian | traffic cone | trailer | truck | drive. suf. | other flat | sidewalk | terrain | manmade | vegetation
MonoScene | R101-DCN | 6.06 | 1.75 | 7.23 | 4.26 | 4.93 | 9.38 | 5.67 | 3.98 | 3.01 | 5.90 | 4.45 | 7.17 | 14.91 | 6.32 | 7.92 | 7.43 | 1.01 | 7.65
CTF-Occ | R101-DCN | 28.53 | 8.09 | 39.33 | 20.56 | 38.29 | 42.24 | 16.93 | 24.52 | 22.72 | 21.05 | 22.98 | 31.11 | 53.33 | 33.84 | 37.98 | 33.23 | 20.79 | 18.00
BEVFormer | R101-DCN | 39.24 | 10.13 | 47.91 | 24.90 | 47.57 | 54.52 | 20.23 | 28.85 | 28.02 | 25.73 | 33.03 | 38.56 | 81.98 | 40.65 | 50.93 | 53.02 | 43.86 | 37.15
PanoOcc | R101-DCN | 42.13 | 11.67 | 50.48 | 29.64 | 49.44 | 55.52 | 23.29 | 33.26 | 30.55 | 30.99 | 34.43 | 42.57 | 83.31 | 44.23 | 54.40 | 56.04 | 45.94 | 40.40
BEVDet† | Swin-B | 42.02 | 12.15 | 49.63 | 25.10 | 52.02 | 54.46 | 27.87 | 27.99 | 28.94 | 27.23 | 36.43 | 42.22 | 82.31 | 43.29 | 54.62 | 57.90 | 48.61 | 43.55
Baseline (ours) | Swin-B | 44.14 | 13.39 | 52.20 | 31.43 | 52.01 | 56.70 | 30.66 | 32.95 | 31.56 | 31.31 | 39.87 | 44.64 | 82.98 | 44.97 | 55.43 | 58.90 | 48.43 | 42.99
RadOcc (ours) | Swin-B | 46.06 | 9.78 | 54.93 | 20.44 | 55.24 | 59.62 | 30.48 | 28.94 | 44.66 | 28.04 | 45.69 | 48.05 | 81.41 | 39.80 | 52.78 | 56.16 | 64.45 | 62.64
Teacher (ours) | Swin-B | 49.38 | 10.93 | 58.23 | 25.01 | 57.89 | 62.85 | 34.04 | 33.45 | 50.07 | 32.05 | 48.87 | 52.11 | 82.9 | 42.73 | 55.27 | 58.34 | 68.64 | 66.01
Performances on 3D Occupancy Prediction Challenge:
BEVFormer | R101-DCN | 23.70 | 10.24 | 36.77 | 11.70 | 29.87 | 38.92 | 10.29 | 22.05 | 16.21 | 14.69 | 27.44 | 33.13 | 48.19 | 33.10 | 29.80 | 17.64 | 19.01 | 13.75
SurroundOcc† | R101-DCN | 42.26 | 11.7 | 50.55 | 32.09 | 41.59 | 57.38 | 27.93 | 38.08 | 30.56 | 29.32 | 48.29 | 38.72 | 80.21 | 48.56 | 53.20 | 47.56 | 46.55 | 36.14
BEVDet† | Swin-B | 42.83 | 18.66 | 49.82 | 31.79 | 41.90 | 56.52 | 26.74 | 37.31 | 30.01 | 31.33 | 48.18 | 38.59 | 80.95 | 50.59 | 53.87 | 49.67 | 46.62 | 35.62
PanoOcc-T⋆ | Intern-XL | 47.16 | 23.37 | 50.28 | 36.02 | 47.32 | 59.61 | 31.58 | 39.59 | 34.58 | 33.83 | 52.25 | 43.29 | 83.82 | 55.81 | 59.41 | 53.81 | 53.48 | 43.61
Baseline-T (ours) | Swin-B | 47.74 | 22.88 | 50.74 | 41.02 | 49.39 | 55.40 | 33.41 | 45.71 | 38.57 | 35.79 | 48.94 | 44.40 | 83.19 | 52.26 | 59.09 | 55.83 | 51.35 | 43.54
RadOcc-T (ours) | Swin-B | 49.98 | 21.13 | 55.17 | 39.31 | 48.99 | 59.92 | 33.99 | 46.31 | 43.26 | 39.29 | 52.88 | 44.85 | 83.72 | 53.93 | 59.17 | 55.62 | 60.53 | 51.55
Teacher-T (ours) | Swin-B | 55.09 | 25.94 | 59.04 | 44.93 | 57.95 | 63.70 | 38.89 | 52.03 | 53.21 | 42.16 | 59.90 | 50.45 | 84.79 | 55.70 | 60.83 | 58.02 | 67.66 | 61.40
Table 1: 3D occupancy prediction performance on Occ3D. † denotes performance reproduced by official codes. ⋆ means results provided by the authors. '-T' represents results obtained through test-time augmentation (TTA). Please note that our visual model achieves a benchmark ranking of Top-4 on 16/08/2023, outperforming all previously published methods.

Sparse Prediction. To evaluate the effectiveness of the model under sparse LiDAR supervision, we evaluate the performance of our proposed RadOcc model on the nuScenes LiDAR semantic segmentation benchmark. Our results, as shown in Table 2, demonstrate a significant improvement over the baseline TPVFormer (Huang et al. 2023) and outperform previous camera-based occupancy networks such as BEVDet (Huang et al. 2021). Surprisingly, our method even achieves performance comparable with some LiDAR-based semantic segmentation methods (Zhang et al. 2020; Zhou et al. 2020). It should be noted that since we use voxelized single-sweep LiDAR as supervision, where geometric details in the data may be lost during voxelization, the results of the multi-modal teacher network may not achieve performance comparable with state-of-the-art LiDAR-based methods (Yan et al. 2022a).

Method | Input Modality | Image Backbone | mIoU | barrier | bicycle | bus | car | const. veh. | motorcycle | pedestrian | traffic cone | trailer | truck | drive. suf. | other flat | sidewalk | terrain | manmade | vegetation
PolarNet | LiDAR | - | 69.4 | 72.2 | 16.8 | 77.0 | 86.5 | 51.1 | 69.7 | 64.8 | 54.1 | 69.7 | 63.5 | 96.6 | 67.1 | 77.7 | 72.1 | 87.1 | 84.5
Cylinder3D | LiDAR | - | 77.2 | 82.8 | 29.8 | 84.3 | 89.4 | 63.0 | 79.3 | 77.2 | 73.4 | 84.6 | 69.1 | 97.7 | 70.2 | 80.3 | 75.5 | 90.4 | 87.6
2DPASS | LiDAR | - | 80.8 | 81.7 | 55.3 | 92.0 | 91.8 | 73.3 | 86.5 | 78.5 | 72.5 | 84.7 | 75.5 | 97.6 | 69.1 | 79.9 | 75.5 | 90.2 | 88.0
TPVFormer | Camera | R50-DCN | 59.2 | 65.6 | 15.7 | 75.1 | 80.0 | 48.8 | 43.1 | 44.3 | 26.8 | 72.8 | 55.9 | 92.3 | 53.7 | 61.0 | 59.2 | 79.7 | 75.6
BEVDet† | Camera | Swin-B | 65.2 | 31.3 | 63.9 | 74.6 | 79.1 | 51.5 | 59.8 | 63.4 | 56.2 | 74.7 | 59.8 | 92.8 | 61.4 | 69.5 | 65.7 | 84.1 | 82.9
TPVFormer (BL) | Camera | R101-DCN | 69.4 | 74.0 | 27.5 | 86.3 | 85.5 | 60.7 | 68.0 | 62.1 | 49.1 | 81.9 | 68.4 | 94.1 | 59.5 | 66.5 | 63.5 | 83.8 | 79.9
RadOcc (ours) | Camera | R101-DCN | 71.8 | 49.1 | 34.2 | 84.5 | 85.8 | 59.2 | 70.3 | 71.4 | 62.5 | 79.7 | 69.0 | 95.4 | 66.2 | 75.1 | 72.0 | 87.4 | 86.0
Teacher (ours) | Cam+Li | R101-DCN | 75.2 | 62.7 | 33.2 | 88.7 | 88.8 | 64.6 | 78.1 | 74.1 | 65.0 | 83.1 | 72.2 | 96.5 | 68.3 | 77.6 | 74.4 | 88.7 | 87.1
Table 2: LiDAR semantic segmentation results on the nuScenes test benchmark. † denotes performance reproduced by official codes. Our method achieves state-of-the-art performance among camera-based methods. BL denotes the baseline method.

Figure 5: Qualitative results on the Occ3D and nuScenes validation sets. RadOcc takes multi-view images as input and produces voxel predictions. More visualization comparisons can be found in the supplementary materials.

Comparison for knowledge distillation. To further validate the efficacy of our proposed methodology over previous teacher-student architectures, we conduct a comparative analysis of RadOcc against conventional knowledge transfer techniques, as presented in Table 3. To facilitate the experimentation process, we choose BEVDet (Huang et al. 2021) with a ResNet50 image backbone as our baseline, and all methods are trained with the same strategies for a fair comparison. The results in the table indicate that the direct application of feature and logits alignment (Hinton, Vinyals, and Dean 2015; Chen et al. 2022) fails to achieve a significant boost over the baseline model; the former in particular results in negative transfer. Notably, leveraging rendering-assisted distillation leads to a substantial improvement of 2% in mIoU. Furthermore, when logit distillation is applied on top, the model can enhance the mIoU by a further 0.6%.

Method | Consistency | mIoU | Gains
BEVDet (baseline) | - | 36.10 | -
Hinton et al. | Prob. | 37.00 | +0.90
Hinton et al. | Feature | 35.89 | -0.21
BEVDistill† | Prob. + Feature | 35.95 | -0.15
RadOcc (ours) | Render | 37.98 | +1.88
RadOcc (ours) | Prob. + Render | 38.53 | +2.43
Table 3: Comparison for knowledge distillation. The results are obtained on Occ3D. To speed up the evaluation, we take BEVDet (Huang et al. 2021) with a ResNet50 image backbone as our baseline. †: Since there is no object-level prediction, we replace the sparse distillation of BEVDistill (Chen et al. 2022) with logits distillation.

Ablation study. We conduct an ablation study of rendering distillation in Table 4. Here, BEVDet with a ResNet50 image backbone is selected as our baseline model. Model A directly conducts alignment through the Scale-Invariant Logarithmic loss (Eigen, Puhrsch, and Fergus 2014) on the rendered depth maps but fails to improve the performance. In contrast, Model B aligns the latent distribution of depth rendering and achieves an improvement of 0.7% in mIoU. On the other hand, Model C demonstrates the results of solely using segment-guided affinity distillation (SAD) on the rendered semantics, which increases the mIoU by 1.0%. Applying an additional KL divergence between the two rendered semantics boosts the performance to 37.42%. Finally, when we combine the RDC and RSC losses, the model achieves the best result.

Method | RDC(-) | RDC | SAD | RSC | mIoU
BEVDet | | | | | 36.10
Model A | ✓ | | | | 35.08
Model B | | ✓ | | | 36.76
Model C | | | ✓ | | 37.13
Model D | | | | ✓ | 37.42
RadOcc (ours) | | ✓ | | ✓ | 37.98
Table 4: Ablation study on Occ3D. We use BEVDet with a ResNet50 image backbone as our baseline. Here, RDC and RSC are the rendered depth and semantic consistency losses. RDC(-) denotes directly aligning the rendered depth map with the Scale-Invariant Logarithmic loss.
In Table 5, we analyze the design of SAD by replacing its segment extraction with other implementations in Model E. Specifically, when we use super-pixels (Achanta et al. 2012), the performance decreases by about 0.37%.

Method | Segment | mIoU | Gains
BEVDet w/ RSC | SAM | 37.42 | -
Model E | Super Pixel | 37.05 | -0.37
Table 5: Design analysis of SAD. We replace the segment extraction strategy with other designs."
}
]
}