diff --git "a/intro_28K/test_introduction_long_2405.05791v1.json" "b/intro_28K/test_introduction_long_2405.05791v1.json"
new file mode 100644
--- /dev/null
+++ "b/intro_28K/test_introduction_long_2405.05791v1.json"
@@ -0,0 +1,103 @@
+{
+ "url": "http://arxiv.org/abs/2405.05791v1",
+ "title": "Sequential Amodal Segmentation via Cumulative Occlusion Learning",
+ "abstract": "To fully understand the 3D context of a single image, a visual system must be\nable to segment both the visible and occluded regions of objects, while\ndiscerning their occlusion order. Ideally, the system should be able to handle\nany object and not be restricted to segmenting a limited set of object classes,\nespecially in robotic applications. Addressing this need, we introduce a\ndiffusion model with cumulative occlusion learning designed for sequential\namodal segmentation of objects with uncertain categories. This model\niteratively refines the prediction using the cumulative mask strategy during\ndiffusion, effectively capturing the uncertainty of invisible regions and\nadeptly reproducing the complex distribution of shapes and occlusion orders of\noccluded objects. It is akin to the human capability for amodal perception,\ni.e., to decipher the spatial ordering among objects and accurately predict\ncomplete contours for occluded objects in densely layered visual scenes.\nExperimental results across three amodal datasets show that our method\noutperforms established baselines.",
+ "authors": "Jiayang Ao, Qiuhong Ke, Krista A. Ehinger",
+ "published": "2024-05-09",
+ "updated": "2024-05-09",
+ "primary_cat": "cs.CV",
+ "cats": [
+ "cs.CV"
+ ],
+ "label": "Original Paper",
+ "paper_cat": "Diffusion AND Model",
+ "gt": "Robots often encounter unfamiliar objects in ever-changing unstructured environments such as warehouses or homes [31]. These scenarios require systems capable of manipulating objects based on their complete shape and occlusion relationships rather than their visibility or category [2, 7, 33]. However, most state-of-the-art amodal segmentation methods [1, 8, 15, 32], which are usually constrained by the need for class-specific data, struggle to generalize to unseen objects and are susceptible to misclassification. Diffusion probabilistic models specialize in capturing and reproducing complex data distributions with high fidelity [11], making them well-suited for generating the invisible parts of unknown objects. In contrast to traditional convolutional networks that often struggle with the complexity of occlusions [10, 27], diffusion models proficiently reconstruct objects through their iterative refinement process. This process is particularly advantageous for inferring occluded object regions, as it progressively recovers the occluded parts based on visible context and learned possible object shapes. Additionally, while current amodal segmentation methods typically overlook the uncertainty in the shape of the hidden part, diffusion models inherently sample from the learned distribution [25, 38], providing multiple plausible hypotheses for the occluded shape. Given these capabilities, diffusion models present a fitting approach for advancing the field of amodal segmentation. We introduce a novel diffusion model for sequential amodal segmentation that does not rely on object categories. Our approach transcends traditional single or dual-layer prediction limitations [12, 17, 22] by enabling the simultaneous segmentation of unlimited object layers in an image. In addition, our framework generates multiple plausible amodal masks for each object from a single input image, contrasting with prior approaches that depend on multiple ground truths to achieve varied results [9, 25, 34]. Figure 1: The cumulative mask and amodal mask predictions for an input image. Our method can generate reliable amodal masks layer by layer and allows multiple objects per layer. Tailored to the amodal task, our method requires only a single ground truth per object during training to capture the diversity of occlusions, overcoming the limitations of existing amodal datasets that typically provide only one annotation per object and neglect the variability in invisible regions. Our framework takes an RGB image as input and sequentially predicts the amodal masks for each object, as illustrated in Fig. 1. The iterative refinement process of our proposed algorithm, inspired by human perception mechanisms for invisible regions [28], leverages preceding identified items to infer subsequent occluded items. Specifically, it employs a cumulative mask, which aggregates the masks of previously identified objects. This strategy allows the model to maintain a clear record of areas already segmented, directing its focus toward unexplored regions. By focusing the prediction effort on uncertain or occluded regions, our approach improves the accuracy and reliability of the amodal segmentation process. 
We validate our approach through comprehensive ablation studies and performance benchmarking across three amodal datasets, demonstrating its superiority in handling complex sequential amodal segmentation challenges. The main contributions of our work are: \u2022 A new sequential amodal segmentation method capable of predicting unlimited layers of occlusion, enabling occlusion modelling in complex visual scenes. \u2022 An occluded shape representation that is not based on labelled object categories, enhancing its applicability in diverse and dynamic settings. \u2022 A diffusion-based approach to generating amodal masks that captures the uncertainty over occluded regions, allowing for diverse segmentation outcomes.",
+ "main_content": "1 The University of Melbourne Parkville, 3010, Australia 2 Monash University Clayton, 3800, Australia Abstract To fully understand the 3D context of a single image, a visual system must be able to segment both the visible and occluded regions of objects, while discerning their occlusion order. Ideally, the system should be able to handle any object and not be restricted to segmenting a limited set of object classes, especially in robotic applications. Addressing this need, we introduce a diffusion model with cumulative occlusion learning designed for sequential amodal segmentation of objects with uncertain categories. This model iteratively refines the prediction using the cumulative mask strategy during diffusion, effectively capturing the uncertainty of invisible regions and adeptly reproducing the complex distribution of shapes and occlusion orders of occluded objects. It is akin to the human capability for amodal perception, i.e., to decipher the spatial ordering among objects and accurately predict complete contours for occluded objects in densely layered visual scenes. Experimental results across three amodal datasets show that our method outperforms established baselines. The code will be released upon paper acceptance. Amodal segmentation with order perception requires segmentation of the entire objects by including both visible and occluded regions while explicitly resolving the layer order of 3 all objects in the image. Establishing layering of objects allows for a comprehensive understanding of the scene and the spatial relationships between objects, which is essential for tasks such as autonomous driving, robot grasping, and image manipulation [2, 14, 40]. Current amodal segmentation methods mainly assess occlusion states of individual objects [6, 22, 26, 30] or between pairs [2, 12, 37], but tend to ignore the global order in a complex scene, such as the relationship between independent groups. While some work [1, 40] has begun to address amodal segmentation with perceptible order, they fall short for class-agnostic applications due to design constraints on category-specific dependencies. Class-agnostic segmentation aims to detect masks without relying on pre-learned categoryspecific knowledge. It is vital for scenarios where comprehensive labelling is resourceintensive or when encountering unseen categories [23, 31]. However, amodal segmentation approaches usually depend on predefined class labels and thus have limited ability to handle unknown objects [15, 19]. While there are a few methods which consider the class-agnostic amodal segmentation, [2] is for RGB-D images with depth data rather than RGB images, [5] relies on the bounding box of the object as an additional input to predict amodal masks, [41] treats amodal masks prediction and ordering as separate tasks thus designs the methods individually, and other requires additional inputs for prediction such as visible mask [20, 39] Segmentation with diffusion models has recently attracted interest as its ability to capture complex and diverse structures in an image that traditional models might miss [4, 16, 35, 36]. Particularly in medical imaging, diffusion models are used to generate multiple segmentation masks to simulate the diversity of annotations from different experts [9, 25, 34, 38]. However, these methods are designed for the visible part of images and do not adequately address the diversity of predictions required for the hidden part of objects. 
In summary, our approach addresses sequential amodal segmentation with two key improvements: First, a novel segmentation technique capable of globally predicting occlusion orders, offering a comprehensive understanding of object occlusion relationships in a scene. Second, a diffusion-based model to provide diverse predictions for amodal masks, especially for the occluded portions. This model uniquely employs cumulative occlusion learning that utilises all preceding masks to provide vital spatial context, thus boosting its ability to segment occluded objects. 3 Problem Definition Our goal is to amodally segment multiple overlapping objects within an image without object class labels, while determining the occlusion order of these objects. Specifically, the task requires inferring complete segmentation masks of all objects, including both the visible and occluded portions, and assigning a layering order to these segments. For a given RGB image I, the goal of our sequential amodal segmentation approach is two-fold. First, to produce a collection of amodal segmentation masks {Mi}, i = 1, ..., N, where each mask Mi represents the full extent of the corresponding object Oi within the scene\u2014this includes both visible and occluded regions. Second, to assign a layer ordering {Li}, i = 1, ..., N, to these objects based on their mutual occlusions, thereby constructing an occlusion hierarchy. The layer variable Li adheres to the occlusion hierarchy defined by [1]. The bi-directional occlusion relationship Z(i, j) indicates whether Oi is occluded by Oj: Z(i, j) = 1 if object Oi is occluded by object Oj, and Z(i, j) = 0 otherwise. (1) The set Si, comprising the indices of the objects occluding Oi, is defined by Si = { j | Z(i, j) = 1 }. Subsequently, the layer ordering Li for each object Oi is computed as: Li = 1 if Si = \u2205, and Li = 1 + max_{j \u2208 Si} Lj otherwise. (2) The ultimate goal is to derive an ordered sequence of amodal masks \u03c4 = \u27e8M1,...,MN\u27e9 that correctly represents the object layers in image I. 4 Methodology The architecture of our proposed model is shown in Fig. 2. Details on the architectural components, the cumulative guided diffusion model and the cumulative occlusion learning algorithm are discussed in Sections 4.1 and 4.2, respectively. Figure 2: Architecture of our model. Our model receives an RGB image as input and predicts multiple plausible amodal masks layer-by-layer, starting with the unoccluded objects and proceeding to deeper occlusion layers. Each layer\u2019s mask synthesis receives as input the cumulative occlusion mask from previous layers, thus providing a spatial context for the diffusion process and helping the model better segment the remaining occluded objects. 4.1 Diffusion-based Framework Denoising diffusion probabilistic models (DDPM) are popular generative models that provide powerful frameworks for learning complex data distributions [11]. Building on the improved DDPMs [21], we introduce a novel approach that extends the capabilities of diffusion models to the domain of amodal segmentation, which involves segmenting visible regions while inferring the shapes of occluded areas. 
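For concreteness, the layer-ordering recurrence of Eqs. (1)-(2) above can be sketched in a few lines of Python; the function name and the NumPy encoding of the occlusion matrix are illustrative assumptions rather than part of the paper's released code.

```python
import numpy as np

def layer_order(Z: np.ndarray) -> np.ndarray:
    """Layer ordering from a pairwise occlusion matrix (Eqs. 1-2).

    Z[i, j] == 1 means object O_i is occluded by object O_j.
    Returns L with L[i] = 1 when nothing occludes O_i, and
    L[i] = 1 + max over j in S_i of L[j] otherwise. Assumes the
    occlusion relation is acyclic.
    """
    n = Z.shape[0]
    L = np.zeros(n, dtype=int)

    def depth(i: int) -> int:
        if L[i] == 0:  # not computed yet
            occluders = np.flatnonzero(Z[i])  # the set S_i
            L[i] = 1 if occluders.size == 0 else 1 + max(depth(j) for j in occluders)
        return L[i]

    for i in range(n):
        depth(i)
    return L

# Three objects: 0 unoccluded, 1 behind 0, 2 behind 1 -> layers [1, 2, 3].
Z = np.array([[0, 0, 0],
              [1, 0, 0],
              [0, 1, 0]])
print(layer_order(Z))
```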
This focus on occluded content is distinct from existing diffusion models, which operate primarily on visible image features; the additional need to reason about the occlusion structure of an image makes amodal segmentation a unique challenge. Cumulative mask. We introduce the cumulative mask\u2014a critical innovation that incorporates the spatial structures of objects, facilitating the understanding of both visible and occluded object parts. The cumulative mask aggregates the masks of all objects which are in front of (and potentially occluding) the current layer. Specifically, the cumulative mask for an object Oi with layer order Li encompasses the masks of all objects with a layer order lower than Li, thereby representing the cumulative occlusion up to that layer. For each object Oi with its amodal mask Mi and layer order Li, the cumulative mask CMi is formalized as the union CMi = \u222a{ Mj | Lj < Li }.
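The cumulative mask just defined is simply a union over the masks of nearer layers; below is a minimal sketch, assuming binary NumPy masks and the layer indices from Eq. (2), with variable names chosen for illustration.

```python
import numpy as np

def cumulative_mask(masks, layers, i):
    """CM_i = union of M_j over all j with L_j < L_i.

    `masks` is a list of binary amodal masks (H x W arrays) and
    `layers` the corresponding layer indices; objects in the first
    layer receive an empty cumulative mask.
    """
    cm = np.zeros_like(masks[i], dtype=bool)
    for j, m in enumerate(masks):
        if layers[j] < layers[i]:
            cm |= m.astype(bool)
    return cm
```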
1), the mask we selected is more suitable for constructing cumulative masks than using the mean mask directly. Failure analysis. A common challenge arises from errors in sequential prediction, particularly determining which of two objects is in front of the other when the overlapping region is occluded by a third object. This may lead to objects being predicted in incorrect layers, as illustrated in Fig. 4 (b). Synthetic images can amplify this challenge due to fewer spatial cues (such as height in the image plane or scene semantics) to disambiguate occluded object order. Our cumulative occlusion learning mitigates the impact of these errors by considering the cumulative mask for all preceding layers. We demonstrate the robustness of our method to such failures through noise introduction experiments in the next section. 5.4 Noise Introduction Experiment in Cumulative Mask Our model leverages the ground truth cumulative mask as input during training, while inference uses the predicted masks from previous layers to build the cumulative mask, as described in Sec. 4.2. A common idea is to utilize the predicted cumulative mask in training, mirroring the inference setup. However, this complicates the early stages of training, when all of the predicted masks (and thus the cumulative mask) are similar to random noise. To bridge the gap between training and inference, we conducted experiments in which we introduced controlled noise into the cumulative mask during training, to simulate the types of errors which occur during inference. The experiment was designed to mimic common types of inference errors, such as continuous prediction errors due to layer dependencies or over-segmentation due to boundary ambiguity. This was achieved by selectively omitting instances from a random layer in the cumulative mask while keeping the input RGB image and the prediction mask unchanged. These experiments also simulate and seek to understand the impact of sequential prediction errors on the model\u2019s performance. By introducing noise into the cumulative mask during training, we effectively create scenarios where the model must handle instances segmented into the wrong layer, as happens when the model makes sequential prediction errors. Specifically, instances from a randomly chosen layer (excluding the fully visible layer) are excluded from the cumulative mask. Mathematically, selecting a random layer index irand from [2, n], the perturbed version of the cumulative mask, denoted as P, is derived by: P = CM \u2212 M_irand (12) where CM is the original cumulative mask, and Mi is the ground truth mask of the ith layer instance (i \u2208 [2, n]). The subtraction here is a pixel-wise binary operation. During training, the model will replace CM with P as input at a specified noise level ratio.
Table 3: Comparison at different noise levels, evaluated with AP and IOU. Noise-free training results in the highest AP across the layers, and the highest IOU for the first four layers and the second highest for the fifth layer.
Layer | AP at 0% / 5% / 10% / 15% / 20% noise | IOU at 0% / 5% / 10% / 15% / 20% noise
1 | 57.8 / 51.7 / 56.6 / 56.0 / 57.6 | 57.1 / 50.3 / 55.8 / 55.3 / 56.9
2 | 45.4 / 37.5 / 44.1 / 40.2 / 40.3 | 44.8 / 35.5 / 43.2 / 38.8 / 39.2
3 | 30.0 / 24.6 / 28.0 / 24.9 / 23.5 | 28.8 / 21.9 / 26.8 / 22.4 / 20.8
4 | 14.2 / 10.7 / 12.1 / 10.3 / 9.2 | 12.2 / 7.9 / 10.3 / 8.0 / 6.5
5 | 3.6 / 3.3 / 3.4 / 3.2 / 2.9 | 1.9 / 1.9 / 2.2 / 1.7 / 1.0
Tab. 3 illustrates the model\u2019s performance in terms of AP and IOU across different layers and noise levels. It was observed that the highest AP was achieved with 0% noise for all layers. 
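As a rough illustration of the perturbation in Eq. (12), the following sketch drops one randomly chosen occluded layer from the cumulative mask at a given noise ratio; the helper name and mask layout are assumptions made for illustration only.

```python
import random
import numpy as np

def perturb_cumulative_mask(cm, layer_masks, noise_ratio):
    """Return P = CM - M_{i_rand} with probability `noise_ratio`, else CM.

    `layer_masks[k]` is the ground-truth mask of layer k+1, so indices
    1..n-1 correspond to layers 2..n (the fully visible layer is never dropped).
    The subtraction is a pixel-wise binary operation.
    """
    if random.random() >= noise_ratio or len(layer_masks) < 2:
        return cm
    i_rand = random.randint(1, len(layer_masks) - 1)
    return np.logical_and(cm, np.logical_not(layer_masks[i_rand]))
```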
The IOU results in Tab. 3 similarly showed that the highest performance was generally observed with 0% noise, except for the 5th layer, where a slight increase was noted at the 10% noise level. Overall, this suggests that adding noise in training has very limited benefit. On the contrary, training without noise achieves the best performance in terms of AP or IOU in the vast majority of cases. The results of the experiment provide insight into the model\u2019s robustness to errors in the sequential segmentation process and validate the effectiveness of our cumulative occlusion learning approach. By focusing on the cumulative mask for all preceding layers, our approach avoids the cascading effects of sequential prediction errors, ensuring more reliable performance even in complex occlusion scenarios. Despite the theoretical appeal of mimicking inference conditions during training, the results indicate that using ground truth cumulative masks remains the more effective approach. This strategy consistently yielded superior results across most metrics and layers, showing its suitability to our model training process. Based on these findings, our training strategy uses the ground truth cumulative masks. 5.5 Comparisons with Other Methods We benchmark against DIS [34], a leading diffusion-based segmentation method. For comparison, we trained distinct DIS models for each layer under the same iterations and evaluated the segmentation results separately for each layer. Tab. 4 comprehensively compares our method and the improved DIS across different layers on three amodal datasets. The performance on the MUVA dataset beyond five layers is omitted because the performance of both models approaches zero. The superiority of our method is particularly evident in deeper layers, where our method maintains reasonable performance, whereas DIS shows a marked decline, especially in the MUVA dataset. These results highlight the robustness of cumulative occlusion learning in handling layered occlusions across various datasets, particularly in more complex scenarios involving multiple layers of object occlusion.
Table 4: Comparison with a diffusion-based segmentation model [34] without cumulative occlusion learning. Our method exhibits great improvement in complex, deeper-layer scenes. Values are IOU / AP per layer.
Dataset | Method | Layer 1 | Layer 2 | Layer 3 | Layer 4 | Layer 5
Intra-AFruit | DIS | 89.5 / 90.7 | 81.6 / 82.6 | 52.4 / 52.6 | 9.8 / 12.4 | 0.5 / 2.0
Intra-AFruit | Ours | 94.3 / 94.7 | 87.4 / 88.2 | 76.2 / 77.3 | 26.7 / 27.6 | 7.2 / 7.4
ACOM | DIS | 31.6 / 34.8 | 26.6 / 28.7 | 1.6 / 10.2 | 0.2 / 6.0 | 0.1 / 2.5
ACOM | Ours | 57.1 / 57.8 | 44.8 / 45.4 | 28.8 / 30.0 | 12.2 / 14.2 | 1.9 / 3.6
MUVA | DIS | 68.2 / 71.5 | 19.3 / 27.3 | 0.1 / 8.6 | 0.2 / 3.4 | 0 / 0.5
MUVA | Ours | 77.0 / 79.3 | 48.7 / 51.2 | 25.4 / 27.8 | 8.5 / 9.9 | 1.0 / 1.1
Table 5: Comparison with category-specific segmentation models. PointRend [13], AISFormer [32] and PLIn [1] are trained on category-specific data, whereas our models are trained using class-agnostic data. We evaluate the models by focusing solely on the segmentation quality, disregarding any category information.
Method | Supervision | Framework | Intra-AFruit AP w/ Layer | Intra-AFruit AP w/o Layer | ACOM AP w/ Layer | ACOM AP w/o Layer | MUVA AP w/ Layer | MUVA AP w/o Layer
PointRend | Supervised | CNN-based | N/A | 70.9 | N/A | 22.0 | N/A | 38.9
AISFormer | Supervised | Transformer-based | N/A | 70.4 | N/A | 34.9 | N/A | 49.7
PLIn | Weakly supervised | CNN-based | 42.2 | 78.9 | 3.9 | 17.0 | 16.3 | 47.3
Ours | Supervised | Diffusion-based | 84.6 | 92.6 | 45.4 | 65.5 | 53.1 | 55.7 
Due to the lack of class-agnostic amodal segmentation methods with layer perception, we compare against category-specific methods like PLIn for amodal segmentation with occlusion layer prediction [1], AISFormer for amodal segmentation without layer perception [32], and PointRend for modal segmentation [13]. We trained these comparison models using category-labelled amodal masks to meet their requirement for category-specific learning, while our model is trained on data without category labels. For evaluation, we ignore category label accuracy for the comparison models, reporting only segmentation accuracy. We present the AP results considering two scenarios in Tab. 5: with layer prediction, where segmentation precision is contingent on correct layer assignment, and without layer prediction, where segmentation is recognized irrespective of layer placement. Despite being trained on class-agnostic data, our method surpasses category-specific models trained on category-labelled data. Furthermore, Fig. 5 visually demonstrates our method\u2019s superiority in amodal mask segmentation. Our approach provides plausible masks even for heavily occluded objects, showcasing its enhanced segmentation capability in complex scenes involving multiple layers of object occlusion. We provide more visualisations of our model\u2019s predictions for the Intra-AFruit [1] (Fig. 6), MUVA [15] (Fig. 7) and ACOM [1] (Fig. 8) test sets. As we can see from the figures, our model performs robustly with different objects and different levels of occlusion. Figure 5: Comparison of predictions on Intra-AFruit (top) and MUVA (bottom) test images by (b) DIS [34], (c) CIMD [25], (d) PLIn [1], (e) PointRend [13] and (a) ours, where (b) and (c) are diffusion-based methods. Dashed circles indicate objects that were missed in the predictions. Other methods fail to segment objects or provide less plausible amodal masks compared to ours. Figure 6: Visualisation of the predictions of our model on the Intra-AFruit [1] test set. Each layer\u2019s amodal mask synthesis receives the cumulative mask of the previous layers as input, thus providing a spatial context for the prediction and helping to segment the remaining occluded objects better. We can see that our model can predict amodal masks and occlusion layers well for multiple objects in a given image. Figure 7: Visualisation of the predictions of our model on the MUVA [15] test set. Figure 8: Visualisation of the predictions of our model on the ACOM [1] test set. 6 Conclusion The task of sequential amodal segmentation is essential for understanding complex visual scenes where objects are frequently occluded. 
Our proposed method, leveraging cumulative occlusion learning with mask generation based on diffusion models, allows robust occlusion perception and amodal object segmentation over unknown object classes and arbitrary numbers of occlusion layers. We demonstrate in three publicly-available amodal datasets that the proposed method outperforms other layer-perception amodal segmentation and diffusion segmentation methods while producing reasonably diverse results. Future work will aim to augment efficiency and maintain output quality through super-resolution techniques and learned compression methods like VAEs. These advances will optimize our downsampling strategy, enabling a more efficient application to high-resolution datasets.",
+ "additional_info": [
+ {
+ "url": "http://arxiv.org/abs/2404.15275v1",
+ "title": "ID-Animator: Zero-Shot Identity-Preserving Human Video Generation",
+ "abstract": "Generating high fidelity human video with specified identities has attracted\nsignificant attention in the content generation community. However, existing\ntechniques struggle to strike a balance between training efficiency and\nidentity preservation, either requiring tedious case-by-case finetuning or\nusually missing the identity details in video generation process. In this\nstudy, we present ID-Animator, a zero-shot human-video generation approach that\ncan perform personalized video generation given single reference facial image\nwithout further training. ID-Animator inherits existing diffusion-based video\ngeneration backbones with a face adapter to encode the ID-relevant embeddings\nfrom learnable facial latent queries. To facilitate the extraction of identity\ninformation in video generation, we introduce an ID-oriented dataset\nconstruction pipeline, which incorporates decoupled human attribute and action\ncaptioning technique from a constructed facial image pool. Based on this\npipeline, a random face reference training method is further devised to\nprecisely capture the ID-relevant embeddings from reference images, thus\nimproving the fidelity and generalization capacity of our model for ID-specific\nvideo generation. Extensive experiments demonstrate the superiority of\nID-Animator to generate personalized human videos over previous models.\nMoreover, our method is highly compatible with popular pre-trained T2V models\nlike animatediff and various community backbone models, showing high\nextendability in real-world applications for video generation where identity\npreservation is highly desired. Our codes and checkpoints will be released at\nhttps://github.com/ID-Animator/ID-Animator.",
+ "authors": "Xuanhua He, Quande Liu, Shengju Qian, Xin Wang, Tao Hu, Ke Cao, Keyu Yan, Man Zhou, Jie Zhang",
+ "published": "2024-04-23",
+ "updated": "2024-04-23",
+ "primary_cat": "cs.CV",
+ "cats": [
+ "cs.CV"
+ ],
+ "label": "Original Paper",
+ "paper_cat": "Diffusion AND Model",
+ "gt": "Personalized or customized generation is to create images consistent in style, subject, or character ID based on one or more reference images. In the realm of image generation, considerable strides have been made in crafting this identity-specific content, particularly in the domain of human image synthesis [20, 27, 33, 29]. Recently, text-driven video generation [10, 28, 30] has gathered substantial interest within the research community. These methods enable the creation of videos based on user-specified textual prompts. However, the quest for generating high-fidelity, identity-specific human videos remains an area to explore. The generation of identity-specific human videos holds profound significance, particularly within the film industry, where characters must authentically execute actions. Previous approaches to customization primarily emphasized specified postures [16], styles [22], and action sequences [32], often employing additional control to ensure generated videos met user requirements. However, these methods largely overlook specific identity control. Some techniques involved model fine-tuning through methods like LoRA [15] and textural inversion [7] to achieve ID-specific control [23], but at the expense of substantial training costs and necessitating separate training weights for each ID. Others relied on image prompts to guide the model in generating videos featuring particular subjects, yet encountered challenges such as intricate dataset pipeline construction and limited ID variations [18]. Furthermore, the direct integration of image customization modules [36] into the video generation model resulted in poor quality, such as static motion and ineffective instruction following. As shown in Figure 2, the use of successful image customization methods, IP-Adapter, led to clear failures in following textual descriptions and identity preservation, as well as subtle motion dynamics. The field of ID-specified video generation currently confronts several notable challenges: 1. High training costs: Many ID-specified methods need large training costs, often due to the customization modules with large parameter counts and lack of prior knowledge, consequently imposing significant training overheads. These training costs hinder the widespread adoption and scalability of ID-specified video generation techniques. 2. Scarcity of high-quality text-video paired datasets: Unlike the image generation com- munity, where datasets like LAION-face [39] are readily available, the video generation community lacks sufficient high-quality text-video paired datasets. Existing datasets, such as CelebV-text [37], feature captions annotated with fixed templates that concentrate on emotion changes while ignoring human attributes and actions, making them unsuitable for ID-preserving video generation tasks. This scarcity hampers research progress, forcing many endeavors to resort to collecting private datasets. 3. Influence of ID-irrelevant features from reference images on video generation quality: The presence of ID-irrelevant features in reference images can adversely affect the quality of generated videos. Reducing the influence of such features poses a challenge, demanding novel solutions to ensure fidelity in ID-specified video generation. Solutions To tackle the first issue, we propose an efficient ID-specific video generation frame- work, named ID-Animator, which is composed of a pre-trained text-to-video diffusion model and a lightweight face adapter module. 
With this design, our module can complete training within a day on a single A100 GPU and can generate 21 frames of video on a single 3090 GPU. To address the second issue, we build an ID-oriented dataset construction pipeline. By leveraging existing publicly available datasets, we introduce the concept of decoupled captions, which involves generating captions for human actions, human attributes, and a unified human description. Figure 2: Comparison between the proposed ID-Animator and previous approaches. Directly integrating image customization modules into the video generation model led to poor quality results. Additionally, we utilize facial recognition, cropping, and other techniques to create corresponding reference images. Trained with the re-written captions, our ID-Animator significantly enhances its effectiveness in following instructions. In response to the third issue, we devise a novel training method for using random face images as references. By randomly sampling faces from the face pool, we decouple ID-independent image content from ID-related facial features, allowing the adapter to focus on ID-related characteristics. Through the aforementioned designs, our model can achieve ID-specific video generation in a lightweight manner. It seamlessly integrates into existing community models [5], showcasing robust generalization and ID-preserving capabilities. Our contribution can be summarized as follows: \u2022 We propose ID-Animator, a novel framework that can generate identity-specific videos given any reference facial image without model tuning. It inherits pre-trained video diffusion models with a lightweight face adapter to encode the ID-relevant embeddings from learnable facial latent queries. To the best of our knowledge, this is the first endeavor towards achieving zero-shot ID-specific human video generation. \u2022 We develop an ID-oriented dataset construction pipeline to mitigate the lack of training datasets for personalized video generation. Over publicly available data sources, we present decoupled captioning of human videos, which extracts textual descriptions for human attributes and actions respectively to attain comprehensive human captions. Besides, a facial image pool is constructed over this dataset to facilitate the extraction of facial embeddings. \u2022 Over this pipeline, we further devise a random reference training strategy for ID-Animator to precisely extract the identity-relevant features and diminish the influence of ID-irrelevant information inherent in the reference facial image, therefore improving the identity fidelity and generation ability in real-world applications for personalized human video generation.",
+ "main_content": "2.1 Video Generation Video generation has been a key area of interest in research for a long time. Early endeavors in the task utilized models like generative adversarial networks [14, 4, 17] and vector quantized variational autoencoder generate video [19, 8, 24]. However, due to the inherent model ability, this video lacks motion and details and is unable to achieve good results. With the rise of the diffusion model [11], notably the latent diffusion model [25] and its success in image generation, researchers have extended the diffusion model\u2019s applicability to video generation [13, 12, 1]. This technique can be classified into two parts: image-to-video and text-to-video generation. The former essentially transforms a given image into a dynamic video, whereas the latter generates video only following text instructions, 3 without any image as input. Leading-edge methods, exemplified by these works, include Animate Diffusion [10], Dynamicrafter [31], Modelscope [28], AnimateAnything [6], and Stable Video [1], among others. These techniques generally exploit pre-trained text-to-image models and intersperse them with diverse forms of temporal mixing layers. Although these techniques are pushing the boundaries of producing visually appealing videos, there is still a gap in providing user-specific video creation using reference images, like portraits. 2.2 ID Preserving Image Generation The impressive generative abilities of diffusion models have attracted recent research endeavors investigating their personalized generation potential. Current methods within this domain can be divided into two categories, based on the necessity of fine-tuning during the testing phase. A subset of these methods requires the fine-tuning of the diffusion model leveraging ID-specific datasets during the testing phase, representative techniques such as DreamBooth [26], textual inversion [7], and LoRA [15]. While these methods exhibit acceptable ID preservation abilities, they necessitate individual model training for each unique ID, thus posing a significant challenge related to training costs and dataset collection, subsequently hindering their practical applicability. The latest focus of research in this domain has shifted towards training-free methods that bypass additional fine-tuning or inversion processes in testing phase. During the inference phase, it is possible to create a high-quality ID-preserving image with just a reference image as the condition. Methods like Face0 [27] replace the final three tokens of text embedding with face embedding within CLIP\u2019s feature space, utilizing this new embedding as conditional for image generation. PhotoMaker [20], on the other hand, takes a similar approach by stacking multiple images to reduce the influence of ID-irrelevant features. Similarly, IP-Adapter [36] decoupled reference image features and text features to facilitate cross attention, resulting in better instruction following. Concurrently, InstantID [29] combined the features of IP-Adapter and ControlNet [38], utilizing both global structural attributes and the fine-grained features of reference images for the generation of ID-preserving images. Although these methods have yielded promising results, the domain of video generation still remains relatively underexplored. 2.3 Subject Driven Video Generation Research on subject-driven video generation is still in its early stages, with two notable works being VideoBooth [18] and MagicMe [23]. 
VideoBooth [18] strives to generate videos that maintain high consistency with the input subject by utilizing the subject\u2019s clip feature and latent embedding obtained through a VAE encoder. This approach offers more fine-grained information than ID-preserving generation methods; however, its limitation remains as the subjects required to be present in the training data, such as cats, dogs, and vehicles, which results in a restricted range of applicable subjects. MagicMe [23], on the other hand, is more closely related to the ID-preserving generation task. It learns ID-related representations by generating unique prompt tokens for each ID. However, this method requires separate training for each ID, making it unable to achieve zero-shot training-free capabilities. This limitation poses a challenge for its practical application. Our proposed method distinguishes itself from these two approaches by being applicable to any human image without necessitating retraining during inference. 3 Method 3.1 Overview Given a reference ID image, ID-Animator endeavors to produce high-fidelity ID-specific human videos. Figure 3 demonstrates our methods, featuring three pivotal constituents: a dataset reconstruction pipeline, the ID-Animator framework, and the random reference strategy employed during the training process of ID-Animator. 3.2 ID-Animator As depicted at the bottom of Figure 3, our ID-Animator framework comprises two components: the backbone text-to-video model, which is compatible with diverse T2V models, and the face adapter, which is subject to training for efficiency. 4 Figure 3: An Overview of Our Proposed Framework: The ID-Animator; Dataset Reconstruction Pipeline, and Random Reference Training. Pretrained Text to Video Diffusion Model The pre-trained text-to-video diffusion model exhibits strong video generation prowess, yet it lacks efficacy in the realm of ID-specific human video generation. Thus, our objective is to harness the existing capabilities of the T2V model and tailor it to the ID-specific human video generation domain. Specifically, we employ AnimateDiff [16] as our foundational T2V model. Face Adapter The advent of image prompting has substantially bolstered the generative ability of diffusion models, particularly when the desired content is challenging to describe precisely in text. IP-Adapter [36] proposed a novel method, enabling image prompting capabilities on par with text prompts, without necessitating any modification to the original diffusion model. Our approach mirrors the decoupling of image and text features in cross-attention. This procedure can be mathematically expressed as: Znew = Attention(Q, Kt, V t) + \u03bb \u00b7 Attention(Q, Ki, V i) (1) where Q, Kt, and V t denote the query, key, and value matrices for text cross-attention, respectively, while Ki and V i correspond to image cross-attention. Provided the query features Z and the image features ci, Q = ZWq, Ki = ciW i k, and V i = ciW i v. Only W i k and W i v are trainable weights. Inspired by IP-Adapter, we limit our modifications to the cross-attention layer in the video generation model, leaving the temporal attention layer unchanged to preserve the original generative capacity of the model. A lightweight face adapter module is designed, encompassing a handful of simple query-based image encoder and the cross-attention module with trainable cross-attention projection weights, as shown in Figure 3. 
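A minimal PyTorch-style sketch of the decoupled text/image cross-attention in Eq. (1) is given below; module and tensor names are illustrative, and the single-head formulation omits the multi-head details of the actual adapter.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DecoupledCrossAttention(nn.Module):
    """Z_new = Attn(Q, K_t, V_t) + lambda * Attn(Q, K_i, V_i) (Eq. 1).

    Only the image key/value projections (W_k^i, W_v^i) are trainable;
    the query and text projections come from the frozen backbone.
    """
    def __init__(self, dim: int, lam: float = 1.0):
        super().__init__()
        self.to_q = nn.Linear(dim, dim, bias=False)
        self.to_k_text = nn.Linear(dim, dim, bias=False)
        self.to_v_text = nn.Linear(dim, dim, bias=False)
        self.to_k_img = nn.Linear(dim, dim, bias=False)   # trainable
        self.to_v_img = nn.Linear(dim, dim, bias=False)   # trainable
        for module in (self.to_q, self.to_k_text, self.to_v_text):
            for p in module.parameters():
                p.requires_grad_(False)
        self.lam = lam

    def forward(self, z, c_text, c_img):
        q = self.to_q(z)
        z_text = F.scaled_dot_product_attention(q, self.to_k_text(c_text), self.to_v_text(c_text))
        z_img = F.scaled_dot_product_attention(q, self.to_k_img(c_img), self.to_v_img(c_img))
        return z_text + self.lam * z_img
```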
The image feature ci is derived from the clip feature of the reference image, and is further refined by the query-based image encoder. The other weights in cross attention 5 Figure 4: Examples of the original Celeve-Caption and our Human Attribute Caption, Human Action Caption and the Unified Human Caption. module are initialized from the original diffusion model, of which the projection weights W i K and W i V are initialized using the weights of the IP-Adapter, facilitating the acquisition of preliminary image prompting capabilities and reducing the overall training costs. Through a simplified and rapid training process, we can attain a video generation model with identity preservation capability. 3.3 ID-Oriented Human Dataset Reconstruction Contrary to identity-preservation image generation tasks, video generation tasks currently suffer from a lack of identity-oriented datasets. The dataset most relevant to our work is the CelebV-HQ [37] dataset, comprising 35,666 video clips that encompass 15,653 identities and 83 manually labeled facial attributes covering appearance, action, and emotion. However, their captions are derived from manually set templates, primarily focusing on facial appearance and human emotion while neglecting the comprehensive environment, human action, and detailed attributes of video. Additionally, its style significantly deviates from the user instructions, rendering it unsuitable for contemporary video generation models, and it lacks facial labels, such as masks and bounding boxes. Consequently, we find it necessary to reconstruct this dataset into an identity-oriented human dataset. Our pipeline incorporates caption rewriting and face detection, coupled with cropping. 3.3.1 Decoupled Human Video Caption Generation To enhance the instruction following ability of ID-Animator, we design a comprehensive restructuring of the captions within the CelebV-HQ dataset. To produce high-quality human videos, it is crucial for the caption to comprehensively encapsulate the semantic information and intricate details present within the video. Consequently, the caption must incorporate detailed attributes of the individual as well as the actions they are performing in the video. In light of this, we employ a novel rewriting technique that decouples the caption into two distinct components: human attributes and human actions. Subsequently, we leverage a language model to amalgamate these elements into a cohesive and comprehensive caption, as illustrated at the top of Figure 3. Human Attribute Caption As a preliminary step, we focus on crafting an attribute caption that aims to vividly depict the individual\u2019s preferences and the surrounding context. To achieve this, we employ the ShareGPT4V [2] model for caption generation. Recognized as a leading tool within the image captioning domain, ShareGPT4V is trained using a dataset generated by GPT4, enabling it to provide detailed descriptions of images. We choose the median frame of the video as the input for ShareGPT4V. This approach allows us to generate detailed character descriptions that incorporate a wealth of attribute information. Human Action Caption Our objective is to create human videos with accurate and rich motions, where a mere human attribute caption is insufficient for our needs. We require a caption that emphasizes the overall dynamism and actions inherent in the video. To address this requirement, we introduce the concept of a human action caption, which strives to depict the action present within the video. 
These captions are specifically designed to concentrate on the semantic content across the entire video, facilitating a comprehensive understanding of the individual\u2019s actions captured therein. To achieve this goal, we leverage the Video-LLava [21] model, which has been trained on video data and excels at focusing on the overall dynamism. By employing Video-LLava, we ensure that our captions effectively convey the dynamic nature of the actions taking place in the video. 6 Unified Human Caption The limitations of relying solely on human attribute captions and human action captions are demonstrated in Figure 4. Human attribute caption fails to encompass the overall action of the individual, while human action caption neglects the detailed characteristics of the subject. To address this, we designed a unified human caption that amalgamates the benefits of both caption types, using this comprehensive caption to train our model. We employ a Large Language Model to facilitate this integration, capitalizing on its capacity for human-like expression and its prowess in generating high-quality captions. The GPT-3.5 API is utilized in this process. As depicted in Figure 4, the rewritten caption effectively encapsulates the video scene, aligning more closely with human instructions. Figure 4 also illustrates the shortcomings of the CelebV-caption, which deviates from the human instruction distribution and even includes incorrect information (e.g., a young boy is erroneously annotated as a woman). Our method disentangles the video content into attributes and actions, yielding more comprehensive results. 3.3.2 Random Face Extraction for Face Pool Construction In contrast to previous methods [36, 23, 18], our approach does not directly utilize a frame from the video as a reference image. Instead, we opt to extract the facial region from the video, using this as the identity reference image. This strategy effectively reduces the influence of ID-irrelevant features on video generation. Simultaneously, our technique differs from the image reconstruction training strategy employed in the ID preservation image generation works [29, 27, 20], which typically reconstructs a reference image using the same image as condition. Hence, we adopt a more stochastic approach for training with randomrized face extraction. As depicted at the bottom of Figure 3, we employ shuffling on video sqeuences and extract facial region from five randomly selected frames. In instances where a frame contains more than one face, it is discarded and additional frames are selected for re-extraction. The extracted facial images are subsequently stored in the face pool. This stochastic approach of facial extraction enables us to disentangle identity information from the semantic content of the video. 3.4 Random Reference Training For Diminishing ID-Irrelevant Features Prior to presenting our approach, we begin by revising the training methods of existing identity preservation image generation models, thereby highlighting the distinctions between our proposed method and previous research. In the training phase of the Diffusion model, the objective is to estimate the noise \u03f5 at the current time step t from a noisy latent representation zt. This noisy latent zt is derived from the clean latent z combined with the noise component associated with the current time step t, i.e., zt = f(z, t). 
This optimization procedure can be expressed by the following function: L = E_{zt, t, \u03f5 ~ N(0,1)} [ ||\u03f5 \u2212 \u03f5\u03b8(zt, t)||^2 ] (2) Generally speaking, the clean latent z can be either the image itself or its embedding within the feature space. Specifically, within the context of the latent diffusion model, z originates from the encoding of image I obtained via the VAE encoder. Consequently, this process can be viewed as the reconstruction of z from a given noisy encoding zt. Incorporating conditions such as a text condition C and an image condition Ci, this process can be mathematically expressed as: L = E_{zt, t, C, Ci, \u03f5 ~ N(0,1)} [ ||\u03f5 \u2212 \u03f5\u03b8(zt, t, C, Ci)||^2 ] (3) In current identity preservation image generation models, the image condition Ci and the reconstruction target Z typically originate from the same image I. For instance, Face0 [27], InstantID [29], and FaceStudio [33] utilize image I as the target latent Z, with the facial region of I serving as Ci. Conversely, PhotoMaker [20], Anydoor [3], and IP-Adapter directly employ the feature of image I as Ci. In the learning phase of image reconstruction, this approach provides overly strong conditions for the diffusion model, which not only concentrates on facial features but also encompasses extraneous features such as the background, characters, and adornments (e.g., hats, jewelry, glasses). This may result in the neglect of domain-invariant identity features. When directly applying this technique to videos, we are essentially using the first frame as a guide to recreate the video sequence. This strong conditioning can cause the model to devolve into an image-to-video model, where the video content becomes heavily dependent on the semantic information of the reference image, rather than focusing on its facial embedding. However, character identity should exhibit domain invariance, implying that given images of the same individual from various angles and attire, video generation outcomes should be similar. Therefore, drawing inspiration from the Monte Carlo concept, we designed a random reference training methodology. This approach uses images that are only weakly correlated with the current video sequence as the condition Cj, effectively decoupling the generated content from the reference images. Specifically, during training, we randomly select a reference image from the previously extracted face pool, as depicted in Figure 3. By employing this Monte Carlo technique, the features from diverse reference images are averaged, reducing the influence of ID-irrelevant features. This transformation of the mapping from (C, Ci) \u2192 Z to (C, Cj) \u2192 Z not only diminishes the impact of extraneous features but also boosts the model\u2019s capacity to follow user instructions. Figure 5: The comparison between our method and previous methods on three celebrity images. Figure 6: The comparison between our method and previous methods on three ordinary-individual images. 4 Experiment 4.1 Implementation details We employ the open-source AnimateDiff [10] as our text-to-video generation model. Our training dataset is processed by clipping to 16 frames, center cropping, and resizing to 512x512 pixels. During training, only the parameters of the face adapter are updated, while the pre-trained text-to-video model remains frozen. Our experiments are carried out on a single NVIDIA A100 GPU (80GB) with a batch size of 2. We load the pretrained weights of IP-Adapter and set the learning rate to 1e-4 for our trainable adapter. 
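One training iteration under the random-reference strategy of Eq. (3) might look roughly like the sketch below; `model`, `scheduler`, and `face_pool` are placeholders for the actual video diffusion backbone, noise scheduler, and extracted face-image pool, and the 20% null-text drop mirrors the classifier-free guidance setting described next.

```python
import random
import torch
import torch.nn.functional as F

def training_step(model, scheduler, z, text_emb, face_pool, p_null=0.2):
    """Denoising loss with a randomly drawn reference face (Eq. 3).

    The conditioning face is sampled from the face pool rather than taken
    from the clip being reconstructed, so it is only weakly correlated with
    the target latents z, which is the point of random reference training.
    """
    c_img = random.choice(face_pool)                 # random reference C_j
    if random.random() < p_null:                     # classifier-free guidance
        text_emb = torch.zeros_like(text_emb)        # null-text condition
    t = torch.randint(0, scheduler.num_train_timesteps, (z.shape[0],), device=z.device)
    noise = torch.randn_like(z)
    z_t = scheduler.add_noise(z, noise, t)           # z_t = f(z, t)
    pred = model(z_t, t, text_emb, c_img)            # eps_theta(z_t, t, C, C_j)
    return F.mse_loss(pred, noise)
```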
Furthermore, to enhance the generation performance using classifier-free guidance, 8 Figure 7: From top to bottom, our model showcases its ability to recontextualize various elements in an reference image, including human hair, clothing, background, actions, age, and gender. we applied a 20% probability of utilizing null-text embeddings to replace the original updated text embedding. We utilize a subset of the CelebV dataset as our primary dataset, comprising 15k videos, and construct our identity-oriented dataset based on this foundation. Following the filtering of videos containing multiple faces, the final dataset employed for training contains 13k videos. 4.2 Qualitative Comparison We offer a qualitative comparison between our approach and the well-known methods in the domain of ID preserving image generation, specifically, the IP-Adapter-Plus-Face [34] and IP-AdapterFaceID-Portrait [35]. We choose three images of celebrities and three of ordinary individuals as test cases, with the images of the latter being sourced from unused data in the CelebV dataset. We randomly generated six prompts from LLM, maintaining consistency with human language style, thus allowing us to assess the model\u2019s ability to follow instructions. As depicted in Figure 5, it is evident that our approach yields the most desirable outcomes. The face generated by IP-Adapter-Plus-Face demonstrates a certain level of deformation, whereas the IPAdapter-FaceID-Portrait model is deficient in facial structural information, resulting in a diminished similarity between the generated outputs and the reference image. The results presented in Figure 6 further underscore the superiority of our approach, showcasing the most pronounced motion, the highest facial similarity, and the capability of instruction following. 9 Figure 8: The figure illustrates our model\u2019s capability to blend distinct identities and create identityspecific videos. 4.3 Application In this section, We showcase the potential applications of our model, encompassing recontextualization, alteration of age or gender, ID mixing, and integration with ControlNet or community models [5] to generate highly customized videos. 4.3.1 Recontextualization Given a reference image, our model is capable of generating ID fidelity videos and changing contextual information. The contextual information of characters can be tailored through text, encompassing attributes such as features, hair, clothing, creating novel character backgrounds, and enabling them to execute specific actions. As illustrated in Figure 7, we supply reference images and text, and the outcomes exhibit the robust editing and instruction-following capacities of our model. As depicted in the figure 7, from top to bottom, we exhibit the model\u2019s proficiency in altering character hair, clothes, background, executing particular actions, and changing age or gender. 4.3.2 Identity Mixing The potential of our model to amalgamate different IDs is showcased in the figure 8. Through the blending of embeddings from two distinct IDs in varying proportions, we have effectively combined features from both IDs in the generated video. This experiment substantiates the proficiency of our face adapter in learning facial representations. 10 Figure 9: Our model can combine with ControlNet to generate ID-specific videos. Figure 10: From the top to bottom, we visualize the inference results with Lyriel and Raemuxi model weights. 
4.3.3 Combination with ControlNet Furthermore, our model demonstrates excellent compatibility with existing fine-grained condition modules, such as ControlNet [38]. We opted for SparseControlNet [9], trained for AnimateDiff, as an additional condition to integrate with our model. As illustrated in Figure 9, we can supply either single frame control images or multi-frame control images. When a single frame control image is provided, the generated result adeptly fuses the control image with the face reference image. In cases where multiple control images are presented, the generated video sequence closely adheres to the sequence provided by the multiple images. This experiment highlights the robust generalization capabilities of our method, which can be seamlessly integrated with existing models. 4.3.4 Inference with Community Models We assessed the performance of our model using the Civitai community model, and our model continues to function effectively with these weights, despite never having been trained on them. The selected models include Lyriel and Raemumxi. As depicted in Figure 10, the first row presents the results obtained with the Lyriel model, while the second row showcases the outcomes achieved using the Raemuxi model. Our method consistently exhibits reliable facial preservation and motion generation capabilities. 5 Conclusion In this research, our primary goal is to achieve ID-specific content generation in text-to-video (T2V) models. To this end, we introduce a ID-Animator framework to drive T2V models in generating ID-specific human videos using ID images. We facilitate the training of our ID-Animator by constructing an ID-oriented dataset based on publicly available resources, incorporating decoupled caption generation and face pool construction. Moreover, we develop a random face reference training method to minimize ID-irrelevant content in reference images, thereby directing the adapter\u2019s focus 11 towards ID-related features. Our extensive experiments demonstrate that our ID-Animator generates stable videos with superior ID fidelity compared to previous models."
+ },
+ {
+ "url": "http://arxiv.org/abs/2404.09227v1",
+ "title": "DreamScape: 3D Scene Creation via Gaussian Splatting joint Correlation Modeling",
+ "abstract": "Recent progress in text-to-3D creation has been propelled by integrating the\npotent prior of Diffusion Models from text-to-image generation into the 3D\ndomain. Nevertheless, generating 3D scenes characterized by multiple instances\nand intricate arrangements remains challenging. In this study, we present\nDreamScape, a method for creating highly consistent 3D scenes solely from\ntextual descriptions, leveraging the strong 3D representation capabilities of\nGaussian Splatting and the complex arrangement abilities of large language\nmodels (LLMs). Our approach involves a 3D Gaussian Guide ($3{DG^2}$) for scene\nrepresentation, consisting of semantic primitives (objects) and their spatial\ntransformations and relationships derived directly from text prompts using\nLLMs. This compositional representation allows for local-to-global optimization\nof the entire scene. A progressive scale control is tailored during local\nobject generation, ensuring that objects of different sizes and densities adapt\nto the scene, which addresses training instability issue arising from simple\nblending in the subsequent global optimization stage. To mitigate potential\nbiases of LLM priors, we model collision relationships between objects at the\nglobal level, enhancing physical correctness and overall realism. Additionally,\nto generate pervasive objects like rain and snow distributed extensively across\nthe scene, we introduce a sparse initialization and densification strategy.\nExperiments demonstrate that DreamScape offers high usability and\ncontrollability, enabling the generation of high-fidelity 3D scenes from only\ntext prompts and achieving state-of-the-art performance compared to other\nmethods.",
+ "authors": "Xuening Yuan, Hongyu Yang, Yueming Zhao, Di Huang",
+ "published": "2024-04-14",
+ "updated": "2024-04-14",
+ "primary_cat": "cs.CV",
+ "cats": [
+ "cs.CV"
+ ],
+ "label": "Original Paper",
+ "paper_cat": "Diffusion AND Model",
+ "gt": "Recent endeavors in customized 3D content generation [4, 17, 22, 23] have made significant strides by harnessing the impressive ca- pabilities of large-scale pre-trained image generation models like Stable Diffusion [32]. These models extend the remarkable gener- ation ability to the 3D domain through the core concept of Score arXiv:2404.09227v1 [cs.CV] 14 Apr 2024 Conference\u201917, July 2017, Washington, DC, USA Xuening Yuan, Hongyu Yang, Yueming Zhao, Di Huang Distillation Sampling (SDS) [31]. Moreover, by integrating priors such as mesh-based geometry constraints and point cloud diffusion, existing methods for 3D object generation [14, 28] demonstrate the capacity to synthesize corresponding 3D content solely from tex- tual input, exhibiting commendable 3D coherence and high-fidelity details. Nevertheless, the strategy of distilling 2D priors has encoun- tered significant challenges when dealing with texts describing scenes with multiple objects. Existing methods struggle with com- plex arrangements, leading to issues like textual guidance collapse, which fails to capture dense semantic concepts [5, 25, 40], or poor generation quality such as 3D inconsistencies and geometric dis- tortions [42]. Another category is to directly generate complex scene images using Diffusion, then leverage in-painting and depth estimation techniques to lift 2D contents into a 3D representa- tion [6, 29, 35, 44]. However, such methods may fail to accurately capture spatial correlations among multiple instances and the scene background. As a result, the generated 3D scenes lack depth ac- curacy and exhibit noticeable texture mapping effects when the camera moves away from the training trajectory. To address these challenges, several methods for text-to-scene generation [7, 8, 10, 47] have been developed to explicitly model object arrangements in the 3D space. These methods control the positions and transformations of objects through layout or posi- tional proxies. However, they either require users to provide com- plex prompts [7, 47], reducing the flexibility and efficiency of the generation process, or are limited by the drawbacks of their rep- resentations, such as NeRF, which lacks effective control mecha- nisms [7, 10] and high-frequency details. Recently, GALA3D [48] has shifted this paradigm to the 3D Gaussian space, leveraging the efficient and strong representation ability of Gaussian Splat- ting. However, GALA3D does not address background modeling and the generation of pervasive objects, such as rain and snow, which are distributed extensively across the scene. These are crit- ical characteristics that distinguish scene generation from single object generation and should be seriously considered. In this paper, we introduce DreamScape, a novel approach for generating high-fidelity 3D scenes from textual descriptions. DreamScape leverages the strengths of Gaussian Splatting and Large Language Models (LLMs) to enhance 3D fidelity and reduce discrepancies with textual descriptions. Key to DreamScape is the use of a 3D Gaussian Guide (3\ud835\udc37\ud835\udc3a2), which serves as a compre- hensive representation of the scene. This guide, derived from text prompts using LLMs, includes semantic primitives (objects), their spatial transformations, and scene correlations. It enables Dream- Scape to employ a local-global generation strategy, ensuring both instance-level realism and global consistency. DreamScape employs a progressive scale guidance technique during local object generation. 
This technique considers the scale of each object in relation to the overall scene, allowing for more adaptive object generation. At the global level, DreamScape uses a collision loss between objects to prevent intersection and misalignment, addressing the potential spatial biases of $3DG^2$ provided by LLMs and ensuring physical correctness. This dual-level optimization helps achieve instance-level realism and global consistency, enhancing interactions between objects such as water ripples, reflections, and lighting effects. To model pervasive objects like rain and snow, DreamScape introduces sparse initialization together with densification and pruning strategies tailored to such objects, resulting in more realistic scenes. Experimental results demonstrate DreamScape\u2019s capability to faithfully generate 3D scenes from textual prompts while preserving semantic information. The approach achieves superior quality in 3D scene generation and supports various editing capabilities. The contributions of our paper are as follows: \u2022 We present DreamScape, a novel scene generation pipeline based on 3D Gaussian Splatting. The key component, $3DG^2$, effectively plans the entire scene, initializing scenes and facilitating the subsequent local-global 3D Gaussian optimization process. \u2022 A progressive scale constraint allows the model to adjust the scale proportions of objects while preserving their appearance, thus avoiding distortion and stretching in the global optimization stage. \u2022 DreamScape introduces the concept of pervasive objects, proposing sparse initialization and developing corresponding densification and pruning strategies for such objects.",
+ "main_content": "3D representation constitutes a pivotal aspect of 3D generation tasks. Existing techniques can be categorized into implicit representations and explicit representations. Implicit representations like NeRF [2, 26, 27] can generate a continuous 3D radiance field, enabling realistic rendering of 3D models from arbitrary viewpoints and distances. Explicit representations like 3D Gaussians [15] start from sparse points generated by structure-from-motion algorithm and utilize a distribution of 3D Gaussians to represent the scene. Employing rasterization rendering techniques, 3D Guassians enables real-time rendering of realistically scenes learned from few image samples. This representation achieves state-of-the-art visual quality within competitive training times. In earlier stages of 3D content generation [13, 31, 39], NeRF emerged as a prevalent choice owing to its robust representation capability. However, following the introduction of 3D Gaussian splatting [15], Gaussian-based method [5, 18, 37, 43, 46, 49] gained popularity due to its superior detail representation, faster rendering speed, and more intuitive control provided by its explicit representation. In this work, we adopt 3D Gaussians as the representation for scenes to facilitate initialization and control ability while enhancing detail fidelity. 2.2 3D Object Generation Existing methods for generating 3D objects from text can be broadly categorized into two types: inference-based methods [11, 14, 28, 34, 36, 41] and optimization-based ones [4\u20136, 20, 25, 31, 37, 40, 42, 43]. Inference-based methods can generate 3D objects with 3D consistency in a relatively short time. Point-E [28] employs a textto-image model to sample images, subsequently utilizing them as conditions for sampling 3D objects using a point cloud diffusion model. Shap-E [14], on the other hand, utilizes a point cloud model to derive an implicit representation through an encoding layer structure, subsequently utilizing a conditional diffusion model to DreamScape Conference\u201917, July 2017, Washington, DC, USA generate 3D objects. While inference-based methods offer rapid processing, they require extensive and diverse 3D model datasets for effective training, and potentially resulting in diminished geometric fidelity of the generated 3D objects. Moreover, creating large-scale 3D model datasets entails significant human and computational resources, and there are challenges related to the diversity and realism of the data. In recent years, the advancement of various 2D image generation models has prompted research endeavors to leverage pre-trained 2D diffusion models for extracting 3D knowledge and generating corresponding 3D asserts. Notably, DreamFusion [31] and SJC [40] introduced Score Distillation Sampling (SDS), leveraging pre-trained text-to-image diffusion models as 2D image priors, demonstrating significant capabilities in synthesizing 3D content from text. Subsequent studies have further enhanced the quality of 3D generation [16, 20, 21, 25, 33]. For instance, Magic3D [20] introduced a coarse-to-fine training method, progressively refining 3D mesh models, while Latent-NeRF [25] integrated text and shape guidance with 3D model generation, utilizing latent disentanglement models directly applying diffusion rendering on 3D meshes. The recent introduction of 3D Gaussian [15] models has invigorated the field of 3D object generation. 
DreamGaussian [37] combined 3D Gaussians with two-stage geometry and texture optimization, achieving efficient 3D object generation, while GSGEN [5] utilized Point-E [28] to initialize 3D Gaussians and a 3D SDS loss for 3D perception, significantly improving the generation effectiveness of 3D objects. Although these methods have demonstrated promising results in generating individual 3D objects, they often lack details when it comes to generating complex scenes with multiple objects. To address this gap, we introduce DreamScape, which not only generates high-quality 3D scenes but also models interactions between objects and scenes. 2.3 3D Scene Generation Current 3D scene generation approaches encounter notable constraints in producing high-quality and controllable 3D scenes. Text2Room [12] generates textured 3D meshes depicting room-scale scenes from textual prompts. Text2NeRF [45] combines diffusion models and NeRF representations, enabling zero-shot generation of diverse indoor and outdoor scenes. Despite their proficiency in generating the geometry of entire rooms, these inpainting-based scene generation methods [8, 12, 29, 45] exhibit deficiencies in individual object modeling and limited 3D consistency. Object-centric methods [1, 19, 24, 30, 38, 48] based on object assembly generate complex scenes by object composition, but they lack global constraints and struggle to handle interactions between objects and scenes to produce high-quality complex scenes. Set-the-Scene [7] introduces a proxy-based local-global training framework for 3D scene synthesis. It can learn detailed representations of each object and simultaneously create harmonious scenes with matching styles and lighting. However, these methods based on object and scene integration [7, 10] require complex constraint conditions for scene generation and cannot facilitate flexible editing. Recent advancements in scene generation like GALA3D [48] introduced layout guidance generated by LLMs into scene generation, showcasing promising results. However, it falls short in adequately modeling the interactions among objects within scenes. On the other hand, LucidDreamer [6] initializes point clouds through image prompts and employs them to guide the generation of 3D-consistent scenes, albeit with limited perspectives in the generated results. Moreover, existing 3D scene generation methods struggle to model pervasive objects such as scattered rain, snow, or petals, making it challenging to generate realistic scenes. In contrast, DreamScape stands out by generating interactive 3D scenes from simple text prompts, effectively balancing usability and controllability in generation. Additionally, DreamScape introduces modeling for \"pervasive objects,\" broadening the representational capabilities and enabling the model to address a wider range of scenarios. 3 METHOD Figure 2 illustrates DreamScape. Beginning with a textual input, DreamScape utilizes LLMs to parse the scene and generate the initial 3D Gaussian Guide ($3DG^2$) of the target scene (Section 3.1). This guide comprises semantic primitives and their spatial transformations, providing a foundational representation of the scene. DreamScape then initializes Gaussians for each object and employs a local-global training strategy to refine the 3D representations.
During local optimization, a progressive scale control ensures alignment, while global optimization of the entire $3DG^2$ is performed to achieve overall scene consistency. To enhance realism, DreamScape introduces a collision training loss (Section 3.2). Additionally, for pervasive objects like rain and snow, DreamScape employs a sparse initialization method, along with densification and pruning operations, to effectively model such objects (Section 3.3). 3.1 3D Gaussian Guide Due to the ambiguous nature of textual prompts, current methods for 3D scene generation often struggle to balance convenience and controllability. They typically rely on intricate shape control [7, 30] or face challenges in generating controllable scenes [6, 42] that accurately match the given descriptions. Recently, methods like GALA3D [48], GraphDreamer [10], and SceneWiz3D [47] have leveraged LLMs to provide prior information about object positions in scenes, yielding promising results. Similarly, we introduce LLMs to offer additional priors for scenes, enabling us to acquire more information than relying solely on the diffusion model without increasing user input. To ensure both high usability and controllability, we define $3DG^2$ for scene representation. This translates the properties of objects and their correlations from textual prompts into a representation that explicitly guides 3D Gaussian scene generation. Leveraging the interpretation and arrangement abilities of LLMs, $3DG^2$ captures object distribution and spatial transformations in the target scene, which are crucial for subsequent 3D Gaussian initialization and optimization processes. Specifically, $3DG^2$ is a set of parameters of the following form: $3DG^2 = \{(cls_i, init_i, trans_i, prompt_i),\ i \in [1, \ldots, N]\}$, (1) Figure 2: Overview of our method. Given a text prompt as input, DreamScape first generates $3DG^2$ corresponding to the text prompt using LLMs to help the model better understand the scene. DreamScape then undergoes local-global training with a frozen Diffusion Prior based on the $3DG^2$. During training, progressive scale control and synchronized optimization of $3DG^2$ are employed. Additionally, for pervasive objects, DreamScape utilizes special sparse initialization and densification strategies. The generated 3D content can be rendered from multiple views into coherent images.
where $cls_i$ indicates the category information of the $i$-th object; $init_i$ is the initialization information of this object, including the initialization method (by either Point-E or sparse initialization), number and color of the initialized points, etc.; $trans_i$ is a tuple of the form $(\mathrm{xyz}, \mathrm{whl}, \mathrm{quad})$, where $\mathrm{xyz} \in \mathbb{R}^3$ indicates the position of the object center in the scene coordinate system, $\mathrm{whl} \in \mathbb{R}^3$ represents the scale of the object, and $\mathrm{quad} \in \mathbb{R}^4$ is a quadruple representing the rotation of the object; $prompt_i$ is the detailed textual description. All of these parameters can be directly generated by LLMs, and the parameters $trans_i$ will be further refined during model optimization. Typically, the Point-E [28] initialization method is used to initialize 3D Gaussians from generated 3D point clouds of corresponding objects, ensuring robust 3D consistency and mitigating potential Janus issues [5]. For objects with regular shapes, shape initialization is preferred. Conversely, sparse initialization is selectively employed for pervasive objects, as described in detail in the following sections. In addition, another important utility of the $3DG^2$ is to provide reliable information regarding object positions, scales, etc. DreamScape adopts 3D Gaussians as the 3D representation of the scene, which is formulated as: $O_i = (\mathbf{p}, \mathbf{s}, \mathbf{q}, \mathbf{c}, \alpha)$, (2) $S = \{3DG^2, O_i,\ i \in [1, \ldots, N]\}$, (3) where $\alpha$ is the opacity; $\mathbf{p}, \mathbf{s}, \mathbf{c} \in \mathbb{R}^{N \times 3}$ and $\mathbf{q} \in \mathbb{R}^{N \times 4}$ denote the vectors of center position, scale matrix, color, and rotation quadruple, as we convert the covariance of the 3D Gaussians into a scale matrix and rotation quadruple for easier optimization. A set of Gaussians forms a 3D Gaussian object, and a collection of Gaussian objects along with the $3DG^2$ of the scene constitutes a 3D Gaussian scene. In the local step, the center of an object is located at the center of the rendered area; in the global step, the coordinates need to be converted according to the location arrangements in the $3DG^2$. During the process of transforming the object from its local coordinate system to the scene coordinate system, the following formulas can be used to obtain its new position, rotation, and scale information within the scene: $\mathbf{p}' = \mathbf{p} * T_\varphi[E_\varphi(\mathrm{quad}_i) \,\hat{\times}\, \mathbf{q}] + E_\varphi(\mathrm{xyz}_i)$, (4) $\mathbf{s}' = \mathbf{s} \cdot E_\varphi(\mathrm{whl}_i)$, (5) $\mathbf{q}' = E_\varphi(\mathrm{quad}_i) \,\hat{\times}\, \mathbf{q}$, (6) where $T_\varphi$ denotes the transformation from quadruple to rotation matrix, $\hat{\times}$ denotes the non-commutative quaternion multiplication, and $E_\varphi$ is the function that extends a vector from the first dimension to a certain length. Similarly, objects can be restored to their original single-object views centered around themselves through the inverse process of the aforementioned formulas. The alpha and color properties of Gaussian points do not require such transformations.
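To make the local-to-scene mapping of Eqs. (4)-(6) concrete, the following is a minimal NumPy sketch of a literal reading of these equations. The helper names (quat_mul, quat_to_rot, object_to_scene) and argument shapes are illustrative assumptions, not the authors' implementation.

```python
# Minimal NumPy sketch (not the authors' code) of the local-to-scene transform in
# Eqs. (4)-(6). quat_mul is the non-commutative product, quat_to_rot is T_phi, and the
# xyz/whl/quad arguments stand in for one trans_i entry of the 3DG^2.
import numpy as np

def quat_mul(q1, q2):
    # Hamilton product of quaternions (w, x, y, z); order matters.
    w1, x1, y1, z1 = q1
    w2, x2, y2, z2 = q2
    return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                     w1*x2 + x1*w2 + y1*z2 - z1*y2,
                     w1*y2 - x1*z2 + y1*w2 + z1*x2,
                     w1*z2 + x1*y2 - y1*x2 + z1*w2])

def quat_to_rot(q):
    # T_phi: unit quaternion (w, x, y, z) -> 3x3 rotation matrix.
    w, x, y, z = q / np.linalg.norm(q)
    return np.array([[1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
                     [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
                     [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)]])

def object_to_scene(p, s, q, xyz, whl, quad):
    """p: (N,3) centers, s: (N,3) scales, q: (N,4) rotations of one object's Gaussians."""
    quad = np.asarray(quad, dtype=float)
    q_new = np.array([quat_mul(quad, qi) for qi in q])                               # Eq. (6)
    p_new = np.array([quat_to_rot(qn) @ pi for qn, pi in zip(q_new, p)]) + np.asarray(xyz)  # Eq. (4)
    s_new = s * np.asarray(whl)                                                       # Eq. (5)
    return p_new, s_new, q_new

# Example: two Gaussians of an object placed by an LLM-provided transformation.
p = np.array([[0.1, 0.0, 0.0], [-0.1, 0.0, 0.0]])
s = np.full((2, 3), 0.05)
q = np.tile([1.0, 0.0, 0.0, 0.0], (2, 1))
print(object_to_scene(p, s, q, xyz=[1.0, 0.0, 0.5], whl=[2.0, 2.0, 2.0], quad=[1.0, 0.0, 0.0, 0.0])[0])
```

The inverse transform, used to restore an object to its single-object view, would apply the same steps in reverse order with the inverted quaternion and translation.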
3.2 Scene Optimization Local-global training strategy. Due to the dense semantic concepts and complex colors and geometries, directly distilling the diffusion prior for the entire scene is impractical. Therefore, we adopt a dual-level training strategy for improved results. At the local level, we focus on generating individual objects to enhance details for high fidelity. Then, we collaboratively optimize the entire scene through global steps, to enhance global consistency and capture interactions among objects, rendering effects such as water ripples, reflections, and coordinated lighting. The position conversion of Gaussians between local and global steps follows formulas 4, 5, and 6. Inspired by related studies, DreamScape utilizes the SDS loss for optimizing 3D content from the 2D diffusion prior, as formulated in the following equation: $\mathcal{L}_{\mathrm{sds}} = \mathbb{E}_{\epsilon, t}\left[\omega(t)\,(\epsilon_\varphi(x_t; y, t) - \epsilon_t)\,\frac{\partial x}{\partial \theta}\right]$, (7) where $\epsilon_t$ is the Gaussian noise under timestep $t$. The noise predicted by the pre-trained diffusion model for timestep $t$ is denoted as $\epsilon_\varphi(x_t; y, t)$, where $x_t$ and $y$ represent the noisy image and the embedded textual prompt, respectively. The rendering process of $x$ follows: $x(p_x, p_y) = \sum_{i \in \mathcal{N}} c_i \alpha_i \prod_{j=1}^{i-1} (1 - \alpha_j)$. (8) In the local steps, DreamScape sequentially optimizes each individual object over a 360-degree panoramic view, to ensure the 3D consistency of each object. In the global step, DreamScape transforms objects into a unified coordinate system based on the $3DG^2$, and then refines details according to the viewing perspective to achieve more refined textures and globally consistent interactions among objects. Progressive scale control. In order to align objects with $3DG^2$, the model stretches objects along all dimensions for further blending. However, if stretching occurs too early, the object may lose its initial geometric shape, which is detrimental to maintaining its 3D consistency. Conversely, if stretching occurs after the object generation is completed, it may result in distorted textures and geometry, leading to a significant decrease in generation quality. Therefore, we propose progressive scale control, gradually increasing the influence of scale conditions on the appearance of objects during the object generation process, formulated as: $\beta = \mathrm{whl} \cdot [\max(\mathrm{xyz}) - \min(\mathrm{xyz})]^{-1}$, (9) $[\hat{\mathbf{p}}\ \hat{\mathbf{s}}] = [\mathbf{p}\ \mathbf{s}] \cdot E_\varphi\big(I + \beta \cdot \min\big(\max\big((k - w) \cdot \gamma^{-1},\ 0\big),\ 1\big)\big)$, (10) where $\beta$ is the scale factor, $I$ is a vector with the same shape as $\beta$, and $k$ denotes the number of training steps completed for each object. $w$ represents the warm-up epochs for the object, before which the scale of the object will not be adjusted by $3DG^2$; $\gamma$ denotes the saturation step for scale control, after which the scale information of the object will align with $3DG^2$. With progressive scale control, objects can gradually converge to the scale provided by $3DG^2$ while maintaining good geometric shapes and texture features.
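A small sketch helps see how the schedule in Eqs. (9)-(10) behaves over training steps. The code below is a literal transcription under assumed shapes; the function name and the way the scale condition is finally applied in the authors' implementation may differ.

```python
# Minimal sketch (assumed shapes, not the authors' code) of the progressive scale-control
# schedule in Eqs. (9)-(10): the scale condition beta is blended in linearly between the
# warm-up step w and the saturation step w + gamma.
import numpy as np

def progressive_scale(p, s, whl, k, w=1000, gamma=4000):
    """p: (N,3) Gaussian centers, s: (N,3) scales of one object; whl: target size from 3DG^2;
    k: training steps completed for this object."""
    extent = p.max(axis=0) - p.min(axis=0)                 # max(xyz) - min(xyz)
    beta = np.asarray(whl) / np.maximum(extent, 1e-8)      # Eq. (9)
    ramp = np.clip((k - w) / gamma, 0.0, 1.0)              # min(max((k - w) / gamma, 0), 1)
    factor = 1.0 + beta * ramp                             # Eq. (10): identity before warm-up
    return p * factor, s * factor

# Example: no effect early in training, full scale condition after saturation.
p = np.random.randn(64, 3) * 0.1
s = np.full((64, 3), 0.02)
for k in (500, 2500, 6000):
    p_k, s_k = progressive_scale(p, s, whl=[2.0, 1.0, 1.0], k=k)
    print(k, s_k[0])
```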
Synchronized optimization of $3DG^2$. Despite the remarkable understanding capability of current LLMs, there is still a possibility of their providing incorrect priors. LLMs may yield conflicting object positions, leading to the phenomenon of object intersection. Therefore, DreamScape sets the $3DG^2$ as optimizable parameters. During global training, the object position and scale information in $3DG^2$ will be optimized. The corresponding loss function is defined as follows: $\mathcal{L}_{\mathrm{cross}} = \mathcal{C}_\varphi(p_i, p_j, \theta),\ i, j \in [1, \ldots, N]$. (11) We define a simple function $\mathcal{C}_\varphi$ as a representation of collisions between objects. This function sums the distances between mutually close points of two objects, filtering point pairs with a threshold value $\theta$. DreamScape efficiently implements this functionality using KD-trees [3, 9], avoiding complex computational processes when querying collision situations. Under the constraints of this function, colliding positions in the initialized $3DG^2$ will be optimized, thereby avoiding instances of objects crossing each other. Due to the particular nature of pervasive objects, we do not compute the collision loss for such objects. The overall training loss of our method can be summarized as: $\mathcal{L} = \lambda_1 \sum_{i=1}^{N} \mathcal{L}_{SDS\_local_i} + \lambda_2 \mathcal{L}_{cross} + \lambda_3 \mathcal{L}_{SDS\_global}$, (12) where $\mathcal{L}_{SDS\_local}$ and $\mathcal{L}_{SDS\_global}$ are the score distillation losses in the local and global steps, and $\mathcal{L}_{cross}$ is the collision loss of $3DG^2$.
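The text names the collision function $\mathcal{C}_\varphi$ and its KD-tree implementation without spelling it out, so here is one plausible reading as a short sketch. The use of SciPy's cKDTree, the penalty form (interpenetration depth below the threshold), and the default threshold are assumptions for illustration.

```python
# One plausible reading (not the authors' code) of the KD-tree collision term in Eq. (11):
# for each Gaussian center of object i, find its nearest neighbour in object j, keep pairs
# closer than theta, and penalise how deeply they interpenetrate.
import numpy as np
from scipy.spatial import cKDTree

def collision_loss(points_i, points_j, theta=0.05):
    """points_i: (N,3), points_j: (M,3) Gaussian centers of two objects in the scene frame."""
    dist, _ = cKDTree(points_j).query(points_i, k=1)   # nearest-neighbour distances
    close = dist < theta                               # filter with the threshold theta
    return float(np.sum(theta - dist[close]))          # zero when the objects are separated

# Example: overlapping clusters produce a positive penalty, well-separated ones do not.
rng = np.random.default_rng(0)
a = rng.normal(scale=0.02, size=(256, 3))
print(collision_loss(a, a + [0.01, 0.0, 0.0]), collision_loss(a, a + [1.0, 0.0, 0.0]))
```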
3.3 Sparse Initialization and Densification Due to the characteristics of 3D Gaussians in representing objects, existing 3D content generation methods [5, 37, 43] tend to produce dense, surface-floating Gaussians to achieve optimal detail representation. This strategy is not favorable for objects composed of numerous sparse small elements, as it would quickly cause such sparse objects to stick together. However, these pervasive objects are important for scene composition in some special conditions, including generating snow scenes, floating small petals, and so on. Using multiple objects to represent a pervasive object is undoubtedly resource-wasting. Therefore, DreamScape introduces the concept of \"pervasive object\" to represent objects composed of numerous sparse small elements. For pervasive objects, DreamScape proposes sparse initialization and sparse densification strategies to optimize performance. The effectiveness of this strategy is demonstrated through the comparison of rendered images and depth maps, as depicted in Figure 3. Figure 3: For pervasive objects, regular initialization and densification lead to the adhesion of a large number of Gaussians, while sparse initialization and densification effectively avoid this issue. Sparse initialization. Typically, the initialization of an object involves a large number of Gaussian points to ensure a solid 3D appearance, avoiding issues like holes in the 3D model and facilitating subsequent optimization. However, for pervasive objects, using a sparse initialization can largely prevent adhesion between multiple small objects during the optimization process. DreamScape randomly samples a small number of points within the bounding box of pervasive objects for initialization, similar to a \"condensation nucleus,\" around which the Gaussians of pervasive objects are subsequently densified. Since Gaussian points will move during subsequent optimization, a viable setup is to use uniform sampling, formulated as: $\mathbf{x} \sim \mathrm{Uniform}(a, b)^3$, (13) where $\mathbf{x} \in \mathbb{R}^3$ represents the position of a Gaussian point, with each element independently sampled from a uniform distribution over the interval $[a, b]$. Sparse densification and pruning strategy. For conventional objects, the densification process typically prioritizes areas with holes for high-frequency densification, while adopting a pruning strategy for isolated Gaussians. However, this strategy is unsuitable for pervasive objects. DreamScape modifies the strategy for pervasive objects: the frequency of densification is appropriately reduced, and the pruning strategy prefers pruning Gaussians with large scales to prevent object clustering. Under the premise of sparse initialization, this strategy simply and effectively prevents the generation of overly large Gaussian clusters within pervasive objects and allows for appearance optimization of small objects at the scene scale. Sparse densification can be formulated as: $\nu' = \tau \cdot \nu$, (14) where $\nu$ is the original densification frequency, $\nu'$ indicates the adjusted frequency, and $\tau$ is the adjustment factor. For the pruning strategy, a threshold $\rho_\theta$ is set; Gaussians with scales greater than $\rho_\theta$ will be preferentially removed during the pruning process.
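The three ingredients of the pervasive-object strategy, uniform sparse initialization (Eq. 13), a reduced densification frequency (Eq. 14), and scale-based pruning, can be summarized in a few lines. The sketch below uses assumed names and a simple frequency-based scheduler; it is illustrative rather than the authors' training code.

```python
# Illustrative sketch (assumed names) of the pervasive-object strategy: sparse uniform
# initialization inside the bounding box (Eq. 13), a reduced densification frequency
# nu' = tau * nu (Eq. 14), and pruning that prefers Gaussians larger than rho_theta.
import numpy as np

def sparse_init(num_points, a=-1.0, b=1.0, seed=0):
    # x ~ Uniform(a, b)^3: a few hundred "condensation nuclei" instead of a dense fill.
    return np.random.default_rng(seed).uniform(a, b, size=(num_points, 3))

def adjusted_densification_frequency(nu, tau=0.5):
    # Eq. (14): with tau = 0.5, pervasive objects are densified half as often.
    return tau * nu

def prune_mask(scales, rho_theta=1e-2):
    # Mark Gaussians whose largest axis scale exceeds rho_theta for preferential removal.
    return scales.max(axis=1) > rho_theta

nuclei = sparse_init(256)                                             # 128-512 points per the paper
print(nuclei.shape, adjusted_densification_frequency(nu=1.0 / 100))   # e.g. every 100 -> every 200 steps
print(prune_mask(np.array([[0.001, 0.001, 0.001], [0.05, 0.02, 0.01]])))
```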
4 EXPERIMENTS Implementation Details. We employ GSGEN [5] as the 3D content generation baseline, which exhibits decent performance on the generation of 3D Gaussian objects. We utilize a batch size of 8, requiring approximately 4000 iterations for each object\u2019s local training step as well as the global training step. We set the learning rates of $\mathbf{p}$, $\mathbf{q}$ and $\mathbf{s}$ to $5 \times 10^{-3}$, and the learning rates for $\alpha$ and $\mathbf{c}$ are configured at $3 \times 10^{-3}$ and $1 \times 10^{-2}$. The learning rate for synchronized optimization of $3DG^2$ is set to $1 \times 10^{-2}$, while other coefficients and parameters are set as $\lambda_1 = 10^{-1}$, $\lambda_2 = 1$, $\lambda_3 = 10^{-1}$, $w = 1000$, $\gamma = 4000$. Experimental evidence suggests that randomly sampling 128-512 points for pervasive objects is reasonable, depending on the size of the space and the class of pervasive objects. The adjustment factor $\tau$ and scale threshold $\rho_\theta$ for sparse densification and pruning are set to 0.5 and $10^{-2}$, respectively. 4.1 Quantitative Comparison In order to evaluate the capabilities of our model, we have conducted a comprehensive comparison with existing state-of-the-art generative models, including NeRF-based methods [7, 25, 40, 42] and 3D Gaussian-based ones [5, 6]. CLIP similarity is used to measure the semantic accuracy between the generated results of the models and the original text prompts. We evaluate on 4 cases and capture 10 views of the rendered 3D content as image outputs to measure the CLIP similarity with the input texts. The comparative results are presented in Table 1.
Table 1: CLIP similarity comparison with existing state-of-the-art text-to-3D methods.
Method | Case1 | Case2 | Case3 | Case4 | Ave.
SJC [40] | 0.272 | 0.190 | 0.300 | 0.276 | 0.260
LatentNeRF [25] | 0.301 | 0.222 | 0.303 | 0.335 | 0.290
LucidDreamer [6] | 0.349 | 0.284 | 0.278 | 0.307 | 0.305
ProlificDreamer [42] | 0.324 | 0.250 | 0.259 | 0.307 | 0.285
GSGEN [5] | 0.294 | 0.285 | 0.294 | 0.300 | 0.293
Set-the-Scene [7] | 0.270 | 0.267 | 0.298 | 0.334 | 0.292
DreamScape (Ours) | 0.335 | 0.288 | 0.308 | 0.342 | 0.318
Table 2: User study results evaluating the quality, consistency, and rationality of the compared methods.
Method | Quality | Consistency | Rationality
SJC [40] | 1.648 | 2.366 | 1.801
LatentNeRF [25] | 1.228 | 2.480 | 1.576
LucidDreamer [6] | 2.775 | 1.684 | 2.766
ProlificDreamer [42] | 2.948 | 1.573 | 2.630
GSGEN [5] | 2.549 | 3.107 | 3.018
Set-the-Scene [7] | 3.069 | 3.324 | 2.972
DreamScape (Ours) | 3.242 | 3.526 | 3.512
Comparison with object generation methods. Methods targeting single-object generation include SJC [40], LatentNeRF [25], ProlificDreamer [42], and GSGEN [5], etc. These approaches often suffer from semantic loss of certain aspects of the text prompts during scene generation due to their modeling paradigm, particularly when dealing with more complex scenes. This results in outputs that significantly diverge from the original text prompts. In contrast, DreamScape employs a process of decomposing the objects within the text prompt, with each object individually modeled. This approach substantially enhances the fidelity to the text prompts, as evidenced by a significant improvement in average CLIP similarity. Comparison with scene generation methods. Methods targeting scene generation mainly include Set-the-Scene [7] and LucidDreamer [6]. Set-the-Scene requires precise object shape proxies to control the geometries and positions of the objects, while we directly adopt the simplified positional proxies provided by LLMs for scene representation. Since LucidDreamer necessitates an input image for generating 3D Gaussians, we employ Stable Diffusion to generate the required images from the text prompts for LucidDreamer. Set-the-Scene excels in comprehensive modeling of objects within scenes due to its precise positional control. However, its overall generative performance is compromised by the absence of detailed shape proxies. LucidDreamer, leveraging images as prompts, demonstrates outstanding performance in rendering 3D Gaussians from certain views. Nonetheless, it struggles to produce meaningful results from other views; yet due to its partial overfitting, it maintains a relatively high average CLIP similarity. In contrast to these methods, DreamScape requires only a textual prompt as input to yield consistent multi-perspective 3D representations, faithfully reflecting the descriptions provided in the text.
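The CLIP-similarity metric used above (text prompt versus a set of rendered views) is straightforward to reproduce; the sketch below uses the Hugging Face CLIP model "openai/clip-vit-base-patch32" as a stand-in, and the renderer call is hypothetical. It is not the authors' evaluation script.

```python
# A sketch (not the authors' evaluation code) of the CLIP-similarity metric: embed the
# text prompt and several rendered views, then average the cosine similarities.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def clip_similarity(prompt, view_images):
    """prompt: str; view_images: list of PIL images rendered from the 3D scene (e.g. 10 views)."""
    inputs = processor(text=[prompt], images=view_images, return_tensors="pt", padding=True)
    with torch.no_grad():
        out = model(**inputs)
    img = out.image_embeds / out.image_embeds.norm(dim=-1, keepdim=True)
    txt = out.text_embeds / out.text_embeds.norm(dim=-1, keepdim=True)
    return (img @ txt.T).mean().item()   # average cosine similarity over the views

# views = [render_scene(cam) for cam in sample_cameras(10)]   # hypothetical renderer
views = [Image.new("RGB", (224, 224)) for _ in range(10)]     # placeholder images
print(clip_similarity("An astronaut stood under a big tree.", views))
```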
Figure 4: Qualitative comparisons of typical text-to-3D generation methods (SJC, LatentNeRF, LucidDreamer, ProlificDreamer, GSGEN, Set-the-Scene, and Ours) on four scene prompts: \u201cAn astronaut stood under a big tree. Pink petals flying in the air, and some petals were piled up on the grass\u201d; \u201cA modern style bedroom with a nightstand in dark, a lamp illuminated the surroundings\u201d; \u201cAn amusement park swing, a seesaw, a glowing street lamp on the grass\u201d; \u201cA snowman wearing a Santa hat is standing on a cobblestone road. There is a lot of snow on the road. Snow is flying in the air\u201d (zoom in for a better view). 4.2 Qualitative Comparison In addition to the quantitative experiments, we conducted a qualitative comparison with the aforementioned state-of-the-art generative methods to further demonstrate the capabilities of our model. We selected four representative scene generation tasks, ensuring that the experimental settings remained consistent with the official documentation of the compared methods. As previously mentioned, for Set-the-Scene [7] and LucidDreamer [6], we also implemented reasonable additional generation conditions to ensure maximum fairness in the experiments. Figure 4 illustrates the comparative results of our model against existing generative methods. Comparison with object generation methods. In comparison with methods targeting single-object generation, the inadequacies of such approaches in scene generation are evident from the figure. Methods like SJC [40] and LatentNeRF [25] exhibit subpar performance in scene generation due to their limited ability to notice multiple objects within scenes and model them effectively. As a method focused on single-object generation, ProlificDreamer [42] demonstrates better scene adaptability, capable of reasonably modeling scenes to some extent. This capability stems from its unique VSD loss, which equips it to handle complex scenes to some degree. However, it still falls short of modeling multiple objects comprehensively: its scene modeling lacks multi-angle consistency, and rendering from some angles yields broken results. On the other hand, GSGEN [5] tends to generate \"multi-faces\" scenes directly, meaning it generates scenes from various angles on the outer surface of a single object. Overall, while these methods excel in generating individual objects, they still lack the ability to generate scenes. Comparison with scene generation methods. Compared with methods targeting single-object generation, methods focused on scene generation generally exhibit more reasonable performance in their generated outputs, including more plausible spatial relationships. However, it is noticeable that even with the object proxies provided by LLMs, the results produced by Set-the-Scene [7] appear blurry, a characteristic that persists even with increased training epochs. Conversely, LucidDreamer [6] can generate highly proficient results, but the range of renderable angles for correct results is extremely narrow, and slightly expanding the rendering range may lead to unpredictable outcomes.
In contrast to these methods, DreamScape demonstrates remarkable stability, capable of generating rational results from any visual perspective given a single textual prompt. In particular, we compare with the recent 3D scene generation method GALA3D [48], which also generates scenes from only a text input using LLMs, achieving the best consistency in scene generation. However, GALA3D does not take full advantage of the understanding of LLMs, so the generated scenes are somewhat blunt and require a complex optimization process. Since its code is not publicly available, we use the results given in the paper directly. The results are shown in Figure 5. As can be seen from the figure, our method has better scene consistency and expressiveness, rather than just putting objects together reasonably. Figure 5: Visual comparison with GALA3D [48] on the prompts \u201cA puppy lying on the iron plate on the top of Great Pyramid.\u201d and \u201cPanda in a wizard hat sitting on a Victorian-style wooden chair and looking at a Ficus in a pot.\u201d Figure 6: Visualization of ablation experiments, comparing results without and with sparse initialization and densification, progressive scale control, local-global training, and synchronized optimization of $3DG^2$. We have carried out ablation experiments on several modules in DreamScape and proved their effectiveness. Figure 7: Demonstration of editing ability. DreamScape can edit the generated results in real time, including position transformation, scaling, rotation, etc. User study. We employed a user study to analyze the generative capacity of our model and compared it with other models. We designed a questionnaire covering three aspects: generative quality, multi-view consistency, and rationality of generated results, with ratings ranging from 1 to 5 for each aspect. For each method, we randomly selected 10 images from different scenes and perspectives for display. We collected 281 valid responses and averaged the score for each criterion, as shown in Table 2. The table reveals that our approach garnered the highest scores across all criteria, indicating superior user preference in terms of quality, consistency, and rationality. 4.3 Ablation Studies In order to validate the effectiveness of the proposed modules, we conducted ablation experiments on the local-global training, progressive scale control, synchronized optimization, and sparse initialization and densification methods. We visually compared the results of the ablation experiments, as shown in Figure 6. It can be observed that without local-global training, the model lacks modeling of interactions between objects, whereas with local-global training, the reflection of the boat appears on the lake. Objects may collide with each other without optimizing $3DG^2$, which is well resolved by synchronized optimization of $3DG^2$. Objects may undergo direct stretching in the global step, resulting in abnormal deformation; with the addition of progressive scale control, objects can gradually deform to the scale required by the scene. By introducing sparse initialization and densification, pervasive objects can be accurately modeled. The experiments demonstrate that our proposed optimization modules are highly effective in enabling the model to generate higher-quality results.
4.4 Editing As an explicit representation method, 3D Gaussians offer high editability. Unlike many other scene generation methods that treat scenes as a whole [6, 12, 45], DreamScape decomposes scenes into individual objects for modeling. This approach enables convenient editing capabilities, as illustrated in Figure 7. Users can freely modify the positions, scales, and rotation angles of each object without further inference. This open editing approach allows for convenient application in various practical 3D modeling scenarios. However, it is worth noting that for cases involving interaction effects such as water ripples and light shadows, further optimization may be required after editing to maintain high fidelity. 5 CONCLUSION In this paper, we propose DreamScape, a novel pipeline for 3D scene generation, which leverages the 3D Gaussian Guide as a bridge to facilitate the interaction between LLMs and diffusion priors using only text descriptions. Through strategies such as local-global training, progressive scale control, and synchronized optimization of the $3DG^2$, DreamScape facilitates interaction among multiple objects in the scene, achieving the generation of interactive 3D Gaussian scenes. Furthermore, to address challenges such as the difficulty of generating pervasive objects, we proposed sparse initialization and densification strategies, further enhancing the immersion and atmospheric quality of the generated scenes. Extensive experiments demonstrate that our model attains state-of-the-art performance in 3D scene generation. Limitations. Due to the adoption of the SDS loss, our method still requires a relatively large guidance scale to ensure model convergence, thereby leading to oversaturated colors in the results, which is a common occurrence in methods based on the SDS loss. Additionally, the model\u2019s capabilities are constrained by the abilities of single-object generation models. Even with the utilization of robust initialization to keep 3D consistency, the Janus problem may still arise in the results occasionally. To fundamentally address such issues, it is necessary to enhance the capabilities of the foundational models and tackle biases in the datasets."
+ },
+ {
+ "url": "http://arxiv.org/abs/2404.11795v1",
+ "title": "Prompt-Driven Feature Diffusion for Open-World Semi-Supervised Learning",
+ "abstract": "In this paper, we present a novel approach termed Prompt-Driven Feature\nDiffusion (PDFD) within a semi-supervised learning framework for Open World\nSemi-Supervised Learning (OW-SSL). At its core, PDFD deploys an efficient\nfeature-level diffusion model with the guidance of class-specific prompts to\nsupport discriminative feature representation learning and feature generation,\ntackling the challenge of the non-availability of labeled data for unseen\nclasses in OW-SSL. In particular, PDFD utilizes class prototypes as prompts in\nthe diffusion model, leveraging their class-discriminative and semantic\ngeneralization ability to condition and guide the diffusion process across all\nthe seen and unseen classes. Furthermore, PDFD incorporates a class-conditional\nadversarial loss for diffusion model training, ensuring that the features\ngenerated via the diffusion process can be discriminatively aligned with the\nclass-conditional features of the real data. Additionally, the class prototypes\nof the unseen classes are computed using only unlabeled instances with\nconfident predictions within a semi-supervised learning framework. We conduct\nextensive experiments to evaluate the proposed PDFD. The empirical results show\nPDFD exhibits remarkable performance enhancements over many state-of-the-art\nexisting methods.",
+ "authors": "Marzi Heidari, Hanping Zhang, Yuhong Guo",
+ "published": "2024-04-17",
+ "updated": "2024-04-17",
+ "primary_cat": "cs.LG",
+ "cats": [
+ "cs.LG",
+ "cs.AI",
+ "cs.CV"
+ ],
+ "label": "Original Paper",
+ "paper_cat": "Diffusion AND Model",
+ "gt": "Semi-supervised learning (SSL) has been widely studied as a leading technique for utilizing abundant unlabeled data to reduce the reliance of deep learning models on extensively labeled datasets [Tarvainen and Valpola, 2017; Laine and Aila, 2017]. Traditional SSL methodologies, operate under a crucial yet often unrealistic assumption: the set of classes encountered during training in the labeled set is exhaustive of all possible categories in the dataset [Zhu, 2005]. This assumption is in- creasingly misaligned with the dynamic and unpredictable na- ture of real-world data, where new classes can emerge without being labeled, creating a critical gap in the model\u2019s knowledge and adaptability [Bendale and Boult, 2015]. This gap under- scores the necessity for an Open-World SSL (OW-SSL) setup [Cao et al., 2022], where the unlabeled data are not only from the classes observed in the labeled data but also cover novel classes that are previously unseen. The investigation of OW- SSL is essential for maintaining the efficacy and relevance of machine learning models in real-world applications, where encountering new classes is not an exception but a norm. Diffusion models (DM), initially inspired by thermodynam- ics [Sohl-Dickstein et al., 2015], have gained significant popu- larity, particularly in the realm of generative models [Yang et al., 2023; Luo, 2022]. Their application has yielded remark- able success, outperforming established generative models like Variational Autoencoders (VAEs) [Kingma and Welling, 2013] and Generative Adversarial Networks (GANs) [Goodfel- low et al., 2014], especially in the domain of image synthesis [Rombach et al., 2022]. Ongoing developments in DM have led to advancements such as higher-resolution image genera- tion [Ho et al., 2020], accelerated training processes [Song et al., 2021], and reduced computational costs [Rombach et al., 2022]. Beyond image generation, recent efforts on diffusion models explore their application in image classification, incor- porating roles as a zero-shot classifier [Clark and Jaini, 2023; Li et al., 2023], integration into SSL frameworks [You et al., 2023; Ho et al., 2023], and enhancing image classification within meta-training phases [Du et al., 2023]. This highlights the considerable extensibility of diffusion models. In this paper, we introduce a novel Prompt-Driven Feature Diffusion (PDFD) approach for Open-World Semi-Supervised Learning (OW-SSL), specifically designed to overcome the inherent challenges associated with the absence of labeled instances for novel classes in OW-SSL. Our approach har- nesses the strengths of diffusion models to enhance effective feature representation learning from labeled and unlabeled data through instance feature denoising guided by predicted class-discriminative prompts. Recognizing the computational demands of traditional diffusion processes, the adopted feature- level diffusion strategy offers enhanced efficiency and scala- bility compared to its image-level counterpart. Furthermore, feature-level diffusion operates in a representation space where the data is typically more abstract and generalizable, allow- ing the model to utilize the organized information present in labeled data and simultaneously adapt to new classes found within unlabeled data. A key aspect of PDFD is using class prototypes as prompts for the diffusion process. 
This choice is motivated by the generalizability of prototypes to novel, unseen classes, helping knowledge transfer from seen classes to unseen classes, which is crucial in OW-SSL. Furthermore, we incorporate a distribution-aware pseudo-label selection strategy during semi-supervised training, ensuring proportionate representation across all classes. In addition, PDFD uses a class-conditional adversarial learning loss [Mirza and Osindero, 2014] to align the prompt-driven features generated by the diffusion process with class-conditional real data features, reinforcing the guidance of class prototypes for the diffusion process. This integration effectively bridges SSL classification and adversarial learning, leveraging the diffusion model to enhance the fidelity of feature representation in relation to specific classes. To empirically validate our approach, we conduct extensive experiments across multiple benchmarks in SSL, Open-Set SSL, Novel Class Discovery (NCD), and OW-SSL. The results demonstrate that the proposed PDFD model not only outperforms various comparison methods but also achieves state-of-the-art performance in these domains. The key contributions of this work can be summarized as follows: \u2022 We introduce a novel Prompt-Driven Feature Diffusion (PDFD) approach for OW-SSL, which enhances the fidelity and generalizability of feature representation for respective classes by leveraging the strengths of diffusion models with properly designed prompts. \u2022 We deploy a class-conditional adversarial loss to support feature-level diffusion model training, strengthening the guidance of class prototypes for the diffusion process. \u2022 We utilize a distribution-aware pseudo-label selection strategy, ensuring balanced class representation within an SSL framework, while class prototypes are computed on selected instances based on prediction reliability. \u2022 Our comprehensive empirical results demonstrate the superiority of PDFD over a range of SSL, Open-Set SSL, NCD, and OW-SSL methodologies.",
+ "main_content": "2.1 Semi-Supervised Learning Traditional Semi-Supervised Learning (SSL) Traditional SSL has focused on training with both labeled and unlabeled data from seen classes, and classifying unseen test examples into these ground-truth classes. Deep SSL, which applies SSL techniques to deep neural networks, can be categorized into entropy minimization methods such as ME [Grandvalet and Bengio, 2004], consistency regularization methods such as Tempral-Ensemble [Laine and Aila, 2017] and Mean-Teacher [Tarvainen and Valpola, 2017], and holistic methods like FixMatch [Sohn et al., 2020], MixMatch [Berthelot et al., 2019] and ReMixMatch [Berthelot et al., 2020]. However, these approaches face challenges when training data includes unlabeled examples from unseen classes. Open-Set Semi-Supervised Learning Open-set SSL enhances conventional SSL by recognizing the existence of unseen class examples within the training data while maintaining the premise that unseen classes in the test examples are supposed to just be detected as outliers. The primary aim in this context is to diminish the detrimental impact that data from unseen classes might have on the classification performance of seen classes. To tackle this unique challenge, several recent methodologies have employed distinctive strategies for managing unseen class data. Specifically, DS3L [Guo et al., 2020] addresses this issue by assigning reduced weights to unlabeled data from unseen classes, while CGDL [Sun et al., 2020] focuses on improving data augmentation and generation tasks by leveraging conditional constraints to guide the learning and generation process. OpenMatch [Cao et al., 2022] employs one-vs-all (OVA) classifiers for determining the likelihood of a sample being an inlier, setting a threshold to identify outliers. However, a common limitation of these approaches is their inability to classify examples from unseen classes. Novel Class Discovery (NCD) In this setting, training data contains labeled examples from seen classes and unlabeled examples from novel unseen classes. Distinct from open-set SSL, NCD aims to recognize and classify both seen and unseen classes in the test set. This problem set-up, first introduced in [Han et al., 2019b], has developed into various methodologies, primarily revolving around a two-step training strategy. Initially, an embedding is learned from the labeled data, followed by a fine-tuning process where clusters are assigned to the unlabeled data [Hsu et al., 2018; Han et al., 2019b; Fini et al., 2021]. A key feature in NCD is the use of the Hungarian algorithm [Kuhn, 1955] for aligning classes in the labeled data. For instance, Deep Transfer Clustering (DTC) [Han et al., 2019b] harnesses deep learning techniques for transferring knowledge between labeled and unlabeled data, aiding in the discovery of novel classes. Another approach, RankStats [Han et al., 2019a], utilizes statistical analysis of data features to identify new classes. Open World Semi-Supervised Learning Distinct from NCD, OW-SSL encompasses labeled training data from the seen classes and unlabeled training data from both the seen and novel unseen classes, offering the capacity of exploiting the abundant unlabeled data from seen classes that are frequently available in real-world applications. As it has just been introduced recently [Cao et al., 2022], the potentials of OW-SSL have yet to be fully explored, and very few methods have been developed to address its unique challenges. 
ORCA [Cao et al., 2022] implements a cross-entropy loss function with an uncertainty-aware adaptive margin, aiming to reduce the disproportionate impact of the seen (known) classes during the initial phases of training. NACH [Guo et al., 2022] brings instances of the same class in the unlabeled dataset closer together based on inter-sample similarity. 2.2 Diffusion Models Diffusion Probabilistic Models (DMs) Originating from principles in thermodynamics, stochastic diffusion processes were first introduced to data generation in DMs [Sohl-Dickstein et al., 2015]. A notable advancement in recent research is Denoising Diffusion Probabilistic Models (DDPMs) proposed in [Ho et al., 2020]. DDPMs introduce a noise network that learns to predict a series of noise, enhancing the efficiency of DMs in generating high-quality image samples. Additionally, Denoising Diffusion Implicit Models (DDIM) were introduced, building upon DDPMs by incorporating a non-Markovian diffusion process, resulting in an acceleration of the generative process [Song et al., 2021]. Latent Diffusion Models (LDMs) extend the diffusion process to the latent space, enabling DMs to be trained more efficiently and on limited computational resources [Rombach et al., 2022]. They also introduced a cross-attention mechanism to DMs, providing the ability to incorporate conditional information in image generation. Nevertheless, training diffusion models to generate images is computationally intensive. Diffusion Models on Image Classification Diffusion models on image classification is a newly emerging area that explores the potential of applying diffusion models to classification tasks. Both [Clark and Jaini, 2023] and [Li et al., 2023] consider the diffusion model as a zero-shot classifier. [Clark and Jaini, 2023] exploits pre-trained diffusion models and CLIP [Radford et al., 2021]. This approach involves generating image samples using text input, then scoring and classifying the image samples. Meanwhile, [Li et al., 2023] classifies image samples within the noise space. Exploring the application of diffusion models in semi-supervised learning tasks, [Ho et al., 2023] learns an image classifier using pseudo-labels generated from the diffusion models. [You et al., 2023] uses the diffusion model as a denoising process to obtain bounding box outputs for pseudo-label generation in semi-supervised 3D object detection. [Du et al., 2023] introduces the concept of prototype-based meta-learning to diffusion models in image classification. During the meta-training phase, it leverages a task-guided diffusion model to gradually generate prototypes, providing efficient class representations. 3 Method 3.1 Problem Setup We consider the following OW-SSL setting. The training data comprise a labeled set $D^l = \{(x^l_i, y^l_i)\}_{i=1}^{N^l}$ with $N^l$ instances, each paired with a corresponding one-hot label vector $y^l_i$, and an unlabeled set $D^u = \{x^u_i\}_{i=1}^{N^u}$ with $N^u$ instances. The set of classes present in the labeled set is referred to as the seen classes, denoted as $Y_s$, while the unlabeled data are sampled from a comprehensive set of classes $Y$, which includes both the seen classes $Y_s$ and additional unseen novel classes $Y_n$, such that $Y = Y_s \cup Y_n$. The core challenge of OW-SSL is to learn a classifier from the training data that can accurately categorize an unlabeled test instance to any class in $Y$.
We aim to learn a deep classification model that comprises a feature extractor $f$, parameterized by $\theta_{feat}$, which maps the input data samples from the original input space $X$ into a high-level feature space $Z$, and a linear probabilistic classifier $h$, parameterized by $\theta_{cls}$. The collective parameters of the deep classification model ($h \circ f$) are represented by $\theta = \theta_{feat} \cup \theta_{cls}$. 3.2 Diffusion Model Preliminaries Diffusion probabilistic models, often simply referred to as \u201cdiffusion models\u201d [Sohl-Dickstein et al., 2015; Ho et al., 2020], are a type of generative model characterized by a distinct Markov chain framework. The diffusion model comprises two primary processes: the forward process and the reverse process. The forward process (diffusion process) consists of a forward diffusion sequence, denoted by $q(x_t | x_{t-1})$, which represents a Markov chain that incrementally introduces Gaussian noise at each timestep $t$, starting from an initial clean sample (e.g., image) $x_0 \sim q(x_0)$. The forward diffusion process is described mathematically as: $q(x_T | x_0) := \prod_{t=1}^{T} q(x_t | x_{t-1})$, (1) where each step is defined via a Gaussian distribution: $q(x_t | x_{t-1}) := \mathcal{N}(x_t; \sqrt{1 - \beta_t}\, x_{t-1}, \beta_t I)$, (2) with $\beta_t$ representing a predefined variance schedule. By introducing $\alpha_t := 1 - \beta_t$ and $\bar{\alpha}_t := \prod_{s=1}^{t} \alpha_s$, one can succinctly express the diffused sample at any timestep $t$ as: $x_t = \sqrt{\bar{\alpha}_t}\, x_0 + \sqrt{1 - \bar{\alpha}_t}\, \epsilon$, (3) where $\epsilon$ is a standard Gaussian noise, $\epsilon \sim \mathcal{N}(0, I)$. Due to the intractability of directly reversing the forward diffusion process, $q(x_{t-1} | x_t)$, the model is trained to approximate this reverse process through parameterized Gaussian transitions, denoted as $p_\phi(x_{t-1} | x_t)$, with $\phi$ as the model parameters. Consequently, the reverse diffusion is modeled as a Markov chain starting from a noise distribution $x_T \sim \mathcal{N}(0, I)$, and is defined as: $p_\phi(x_{0:T}) := p_\phi(x_T) \prod_{t=1}^{T} p_\phi(x_{t-1} | x_t)$, (4) where the transition probabilities are given by: $p_\phi(x_{t-1} | x_t) = \mathcal{N}(x_{t-1}; \mu_\phi(x_t, t), \sigma_t^2 I)$, (5) with $\mu_\phi(x_t, t) = \frac{1}{\sqrt{\alpha_t}}\left(x_t - \frac{1 - \alpha_t}{\sqrt{1 - \bar{\alpha}_t}}\, \xi_\phi(x_t, t)\right)$ (6) where $\xi$ is the diffusion model parameterized by $\phi$, predicting the added noise. In this context, the diffusion model is trained using an objective function defined as follows: $\mathcal{L}_\phi = \mathbb{E}_{t, x_0, \epsilon}\left[\left\|\epsilon - \xi_\phi\left(\sqrt{\bar{\alpha}_t}\, x_0 + \sqrt{1 - \bar{\alpha}_t}\, \epsilon,\ t\right)\right\|^2\right]$ (7)
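Since PDFD runs this diffusion at the feature level, the preliminaries in Eqs. (3) and (7) reduce to a few lines of code. The following PyTorch sketch uses a tiny MLP as a stand-in noise predictor; the architecture, schedule, and hyper-parameters are illustrative assumptions, not the authors' model.

```python
# Minimal PyTorch sketch of the forward noising (Eq. 3) and noise-prediction objective
# (Eq. 7), written on feature vectors as PDFD uses them. Not the authors' architecture.
import torch
import torch.nn as nn

T = 1000
betas = torch.linspace(1e-4, 0.02, T)            # predefined variance schedule beta_t
alphas = 1.0 - betas
alpha_bar = torch.cumprod(alphas, dim=0)         # \bar{alpha}_t = prod_s alpha_s

class NoisePredictor(nn.Module):                  # xi_phi(z_t, t) on feature vectors z
    def __init__(self, dim, hidden=256):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim + 1, hidden), nn.SiLU(), nn.Linear(hidden, dim))
    def forward(self, z_t, t):
        t_emb = (t.float() / T).unsqueeze(-1)     # simple scalar timestep embedding
        return self.net(torch.cat([z_t, t_emb], dim=-1))

def diffusion_loss(xi_phi, z0):
    """Eq. (7): || eps - xi_phi( sqrt(abar_t) z0 + sqrt(1 - abar_t) eps, t ) ||^2."""
    t = torch.randint(0, T, (z0.shape[0],))
    eps = torch.randn_like(z0)
    abar = alpha_bar[t].unsqueeze(-1)
    z_t = abar.sqrt() * z0 + (1 - abar).sqrt() * eps          # Eq. (3)
    return ((eps - xi_phi(z_t, t)) ** 2).mean()

z0 = torch.randn(8, 128)                          # a batch of extracted features f(x)
model = NoisePredictor(128)
print(diffusion_loss(model, z0).item())
```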
Furthermore, we incorporate a class-conditional adversarial loss to align the generated data from the diffusion model with the pseudo-labeled real data in the feature space $\mathcal{Z}$, improving the alignment of the feature representations for the respective classes. The overall framework of PDFD is shown in Figure 1. Further elaboration will be provided below. Figure 1: The proposed PDFD framework trained on $\mathcal{D}^l$ and $\mathcal{D}^u$. The feature encoder $f$ takes as input the labeled data and unlabeled data to generate their learned embeddings. The embeddings of the labeled and unlabeled samples are used to calculate the class prototypes, which are used as prompts for the diffusion model. The diffusion model, guided by the loss $\mathcal{L}_{diff}$, predicts the noise $\xi_\phi$ from noisy features. Concurrently, the classifier $h$ and encoder $f$ are trained, aiming to minimize the supervised loss $\mathcal{L}^l_{ce}$ and the pseudo-labeling loss $\mathcal{L}^u_{ce}$. Additionally, a class-conditional adversarial training component is integrated, wherein the generator $\xi_\phi$ aims to produce feature representations that successfully mislead the discriminator $D_\psi$, assessed by the adversarial loss $\mathcal{L}_{adv}$, into categorizing them as real features. SSL with Dynamic Pseudo-Label Selection We perform semi-supervised learning over the entire class set $\mathcal{Y}$ by minimizing the cumulative loss over the labeled and unlabeled training data to learn the parameters $\theta$ of the classification model. For the labeled data in $\mathcal{D}^l$, we employ the following standard cross-entropy loss: $\mathcal{L}^l_{ce}(\theta) = \mathbb{E}_{(x^l_i, y^l_i) \in \mathcal{D}^l}\left[\ell_{ce}\left(y^l_i, h_{\theta_{cls}}(f_{\theta_{feat}}(x^l_i))\right)\right]$ (8) where $\ell_{ce}$ denotes the cross-entropy loss function. For the unlabeled data in $\mathcal{D}^u$, we initially produce their pseudo-labels using K-means clustering. Then, in each following training iteration, the current classification model is utilized to predict the pseudo-label of each unlabeled instance $x^u_i$ as follows: $\hat{y}_i = h_{\theta_{cls}}(f_{\theta_{feat}}(\hat{x}^u_i))$ (9) where $\hat{y}_i$ denotes a soft pseudo-label vector, i.e., the predicted class probability vector of length $|\mathcal{Y}|$, and $\hat{x}^u_i$ denotes a weakly augmented version of instance $x^u_i$. By using weak augmentation, we aim to capture the underlying structure of the unlabeled data while minimizing the impact of potential noise or distortions. The corresponding one-hot pseudo-label vector $\tilde{y}_i$ can be produced from $\hat{y}_i$ by setting the entry with the largest probability to 1 while keeping the other entries as 0s. Moreover, in order to minimize the impact of noisy pseudo-labels and ensure a proportionate representation of all classes in the unlabeled data, we propose to dynamically select confident pseudo-labels to produce a distribution-aware subset of pseudo-labeled instances for model training. Specifically, for each class $c \in \mathcal{Y}$, we choose a subset of instances, $\mathcal{C}_c$, with confidently predicted pseudo-labels via a threshold $\tau$: $\mathcal{C}_c = \{x^u_i \in \mathcal{D}^u \mid \mathbb{1}\left(\max(\hat{y}_i) > \tau \,\wedge\, \arg\max_j \hat{y}_{ij} = c\right)\}$ (10) where the indicator function $\mathbb{1}(\cdot)$ represents the condition for instance selection. The minimum number of instances selected for each class can then be determined as $N_m = \min_c |\mathcal{C}_c|$.
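The following sketch illustrates the dynamic pseudo-label selection of Eq. (10), together with the per-class top-$N_m$ balancing that is completed in the next paragraph. Function and variable names are placeholders, and the confidence-ranked truncation is one plausible reading of the score-based selection described in the text.

```python
import torch

@torch.no_grad()
def select_pseudo_labels(model, unlabeled_loader, num_classes, tau=0.5, device="cpu"):
    """Distribution-aware pseudo-label selection: keep confident predictions per class
    (Eq. (10)), then keep the same number N_m of instances for every class in Y."""
    xs, probs = [], []
    for x_weak in unlabeled_loader:  # weakly augmented unlabeled batches
        p = torch.softmax(model(x_weak.to(device)), dim=1)
        probs.append(p.cpu()); xs.append(x_weak)
    probs = torch.cat(probs); xs = torch.cat(xs)
    conf, pred = probs.max(dim=1)

    per_class = []
    for c in range(num_classes):
        idx = torch.where((pred == c) & (conf > tau))[0]  # C_c in Eq. (10)
        idx = idx[conf[idx].argsort(descending=True)]     # rank by pseudo-label score
        per_class.append(idx)
    n_m = min(len(idx) for idx in per_class)              # N_m = min_c |C_c|

    if n_m == 0:
        selected = torch.empty(0, dtype=torch.long)
    else:
        selected = torch.cat([idx[:n_m] for idx in per_class])
    return xs[selected], pred[selected]                   # the set Q of size N_m * |Y|
```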
To ensure a well-proportioned consideration of all the classes in $\mathcal{Y}$, we finally choose the top $N_m$ instances from each pre-selected subset $\mathcal{C}_c$ based on the predicted pseudo-label scores, $\max(\hat{y}_i)$, and form a selected pseudo-labeled set $\mathcal{Q} = \{(x_i, \tilde{y}_i), \cdots\}$ with size $N_m \times |\mathcal{Y}|$. The training loss on the unlabeled data is then computed as the cross-entropy loss on the confidently pseudo-labeled instances in $\mathcal{Q}$: $\mathcal{L}^u_{ce}(\theta) = \mathbb{E}_{(x_i, \tilde{y}_i) \in \mathcal{Q}}\left[\ell_{ce}\left(\tilde{y}_i, h_{\theta_{cls}}(f_{\theta_{feat}}(x_i))\right)\right]$ (11) Class-Prototype Computation Prior to introducing the key feature-level diffusion component, we first compute the class prototypes that will be adopted as essential prompts for guiding the diffusion process. In particular, class prototypes are derived from the feature embeddings produced by the deep feature extractor $f$ based on the (predicted) class labels. They hence encapsulate the core characteristics of classes in the high-level semantic feature space $\mathcal{Z}$ that are generalizable to novel categories. For the seen classes in $\mathcal{Y}_s$, we calculate the class prototypes as the average feature representations of the labeled data for each class, providing a stable reference point for the whole class set $\mathcal{Y}$. Specifically, for each class $s \in \mathcal{Y}_s$, we compute its class prototype vector $p_s$ as follows: $p_s = \mathbb{E}_{(x^l_i, y^l_i) \in \mathcal{D}^l}\left[\mathbb{1}\left(\arg\max_j y^l_{ij} = s\right) f_{\theta_{feat}}(x^l_i)\right]$ (12) where the indicator function $\mathbb{1}(\cdot)$ selects the instances that satisfy the given condition, i.e., belonging to class $s$ in this case. For the unseen novel classes in $\mathcal{Y}_n$, the prototypes are computed differently to account for the uncertainty during the discovery of new classes on the unlabeled data. Specifically, for each class $n \in \mathcal{Y}_n$, its class prototype vector $p_n$ is computed as the average feature representation of the unlabeled instances whose pseudo-labels are confidently predicted as class $n$: $p_n = \mathbb{E}_{x^u_i \in \mathcal{D}^u}\left[\mathbb{1}\left(\max(\hat{y}_i) > \tau \,\wedge\, \arg\max_j \hat{y}_{ij} = n\right) f_{\theta_{feat}}(x^u_i)\right]$ (13) where the threshold $\tau$ is used to filter out non-confident predictions and reliably identify novel unseen classes in the unlabeled data. By putting all these class prototypes together, we can form a class prototype matrix $P = [p_1, \cdots, p_{|\mathcal{Y}|}]$, each column of which contains a class prototype vector. Prompt-Driven Feature-Level Diffusion Traditional diffusion processes, while powerful, are often computationally intensive and time-consuming, particularly when applied directly to high-dimensional data such as images. By transposing the diffusion process to the feature level, we significantly reduce the computational burden, enabling faster training of the diffusion model and scalability of PDFD to large datasets. In addition, feature-level diffusion focuses on the high-level representation space, where the data are more abstract and generalizable, and the semantic aspects of the data captured in this space are more relevant and informative for classification. Image-level diffusion might inadvertently emphasize pixel-level details that are less important for understanding the underlying class or concept. By operating at the feature level, the model can leverage global and structural information to distinguish novel classes from seen classes.
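A minimal sketch of the class-prototype computation in Eqs. (12)-(13), using per-class feature means as described in the text. Dimensions, argument names, and the returned column layout are illustrative assumptions.

```python
import torch

@torch.no_grad()
def compute_prototypes(feat_l, y_l, feat_u, probs_u, num_seen, num_classes, tau=0.5):
    """Class prototypes used as prompts: seen-class prototypes are mean labeled
    features (Eq. (12)); novel-class prototypes are mean features of unlabeled
    samples whose pseudo-labels are confident, i.e. max prob > tau (Eq. (13))."""
    d = feat_l.shape[1]
    P = torch.zeros(num_classes, d)

    for s in range(num_seen):                       # Eq. (12): seen classes
        mask = (y_l == s)
        if mask.any():
            P[s] = feat_l[mask].mean(dim=0)

    conf, pred = probs_u.max(dim=1)
    for n in range(num_seen, num_classes):          # Eq. (13): novel classes
        mask = (pred == n) & (conf > tau)
        if mask.any():
            P[n] = feat_u[mask].mean(dim=0)

    return P.t()                                    # columns are prototypes p_1, ..., p_|Y|
```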
To leverage the strengths of the diffusion model for class distinction and recognition, we introduce the class prototypes as an additional input to the standard diffusion model $\xi_\phi$, functioning as class-distinctive prompts for feature diffusion. Specifically, the model is tasked with predicting the added noise $\epsilon$ based on a noisy input feature vector, the class-specific prompt, and the current time step $t$: $\xi_\phi = \xi_\phi\left(\sqrt{\bar{\alpha}_t}\, f_{\theta_{feat}}(x_i) + \sqrt{1-\bar{\alpha}_t}\, \epsilon,\; P \cdot \mathbb{1}_{c_i},\; t\right)$ (14) where $\mathbb{1}_{c_i}$ denotes a one-hot vector indicating the predicted class of the corresponding input $x_i$, such that $c_i = \arg\max_j h_{\theta_{cls}}(f_{\theta_{feat}}(x_i))[j]$, while $P \cdot \mathbb{1}_{c_i}$ selects the corresponding class prototype vector as the prompt input. As in the standard diffusion model, the term $\bar{\alpha}_t$ is determined by a pre-defined variance schedule, and $\epsilon$ is a noise variable sampled from the standard normal distribution. Following [Du et al., 2023], we employ a transformer-based diffusion model for $\xi_\phi$. We jointly train the diffusion model $\phi$ and the classification model $\theta$ (feature extractor $\theta_{feat}$ and classifier $\theta_{cls}$) over all the labeled and unlabeled training instances by minimizing the following diffusion loss: $\mathcal{L}_{diff}(\phi, \theta) = \mathbb{E}_{x_i \in \mathcal{D}^l \cup \mathcal{D}^u}\, \mathbb{E}_{t \sim [0:T]}\left[\|\epsilon - \xi_\phi\|^2\right]$ (15) The loss essentially measures the discrepancy between the added noise $\epsilon$ and the prediction of the generative diffusion model, guiding both the feature extractor and the diffusion model to produce feature representations that are coherent with the class prototypes and therefore suitable for both seen and unseen class identification. Class-Conditional Adversarial Alignment The data generation in our PDFD model proceeds through a reverse diffusion process, where we transform a random noise vector $\epsilon$ over a sequence of $T$ steps into meaningful feature vectors in the high-level feature representation space $\mathcal{Z}$, guided by a class-prototype-based prompt. The process is mathematically represented as: $z_{t-1} = \begin{cases} \epsilon & \text{if } t = T, \\ \frac{1}{\sqrt{\alpha_t}}\left(z_t - \frac{1-\alpha_t}{\sqrt{1-\bar{\alpha}_t}} \cdot \xi_\phi(z_t, p_c, t)\right) & \text{if } t < T \end{cases}$ (16) where $z_t$ denotes the diffused feature embedding vector at time step $t$. For simplicity, we define this reverse diffusion process as a generative function $g_\phi(\epsilon, T, p_c)$, which takes the initial noise vector $\epsilon$, the total number of time steps $T$, and the prompt $p_c$ as inputs, and generates a diffused clean feature vector $z_0$: $z_0 = g_\phi(\epsilon, T, p_c)$ (17) Here, $g$ conveniently encapsulates the iterative reverse diffusion process, transforming the initial noise $\epsilon$ into the refined feature representation $z_0$ through a sequence of $T$ steps of transformations governed by the specified prompt and the dynamics of the diffusion process in Eq. (16).
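Below is a hedged sketch of the prompt-driven feature-level diffusion loss of Eqs. (14)-(15): the feature of each sample is noised, the prototype of its predicted class is used as the prompt, and the added noise is regressed. The encoder, classifier, and noise network are stand-ins for PDFD's components, and detaching the class prediction before prototype lookup is an assumption.

```python
import torch

def pdfd_diffusion_loss(noise_net, encoder, classifier, x, P, alpha_bars):
    """Feature-level diffusion loss (Eqs. (14)-(15)) estimated on one batch."""
    z0 = encoder(x)                                  # f_theta_feat(x_i)
    with torch.no_grad():
        c = classifier(z0).argmax(dim=1)             # c_i = argmax_j h(f(x_i))[j]
    prompts = P.t()[c]                               # P . 1_{c_i}: one prototype per sample

    t = torch.randint(0, alpha_bars.shape[0], (z0.shape[0],), device=z0.device)
    ab = alpha_bars[t].unsqueeze(1)
    eps = torch.randn_like(z0)
    z_t = ab.sqrt() * z0 + (1.0 - ab).sqrt() * eps   # noisy feature input of Eq. (14)

    eps_pred = noise_net(z_t, prompts, t)            # xi_phi(z_t, p_c, t)
    return ((eps - eps_pred) ** 2).mean()            # Eq. (15)
```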
In advancing our model's robustness and diffusion capacity, we propose to align the generated feature vectors with the unlabeled real training data in the high-level feature space $\mathcal{Z}$ through a class-conditional adversarial loss defined as follows: $\mathcal{L}_{adv}(\phi, \psi) = \mathbb{E}_{x \sim \mathcal{D}^u}\left[\log D_\psi(f_{\theta_{feat}}(x), \tilde{y})\right] + \mathbb{E}_{\epsilon \sim \mathcal{N}(0, I),\, c \sim \mathcal{Y}}\left[\log\left(1 - D_\psi(g_\phi(\epsilon, T, p_c), \mathbb{1}_c)\right)\right]$ (18) where $D_\psi$ is a class-conditional discriminator parameterized by $\psi$, which tries to maximally distinguish the feature vectors of the real data from the feature vectors generated by the reverse diffusion process, given the conditional one-hot label vector. This adversarial loss is tailored to refine the model's ability to generate class-specific features. By playing a minimax adversarial game between the diffusion model $\phi$ and the discriminator $\psi$, $\min_\phi \max_\psi \mathcal{L}_{adv}(\phi, \psi)$, (19) this class-conditional adversarial alignment loss encourages the diffusion model to generate features that are indistinguishable from real data features, enhancing the fidelity of the feature representations w.r.t. the respective classes across both the seen and unseen classes in $\mathcal{Y}$. Joint Training of PDFD Incorporating the SSL losses on both the labeled and unlabeled data, alongside the diffusion and adversarial losses, we formulate the joint training objective for our PDFD model as follows: $\min_{\theta, \phi} \max_\psi \mathcal{L}_{tr} = \mathcal{L}^l_{ce} + \gamma_u \mathcal{L}^u_{ce} + \gamma_{diff} \mathcal{L}_{diff} + \gamma_{adv} \mathcal{L}_{adv}$ (20) where $\gamma_u$, $\gamma_{diff}$ and $\gamma_{adv}$ are trade-off hyper-parameters.

Table 1: Classification accuracy (%) on CIFAR-10, CIFAR-100, and ImageNet-100.

| Classes | Dataset | Fixmatch (SSL) | DS3L (Open-Set SSL) | CGDL (Open-Set SSL) | DTC (NCD) | RankStats (NCD) | ORCA (OW-SSL) | NACH (OW-SSL) | PDFD (ours) |
| Seen | CIFAR-10 | 71.5 | 77.6 | 72.3 | 53.9 | 86.6 | 88.2 | 89.5 | 90.2 |
| Seen | CIFAR-100 | 39.6 | 55.1 | 49.3 | 31.3 | 36.4 | 66.9 | 68.7 | 70.2 |
| Seen | ImageNet-100 | 65.8 | 71.2 | 67.3 | 25.6 | 47.3 | 89.1 | 91.0 | 91.3 |
| Seen | Average | 59.0 | 68.0 | 63.0 | 36.9 | 56.8 | 81.4 | 83.1 | 83.9 |
| Unseen | CIFAR-10 | 50.4 | 45.3 | 44.6 | 39.5 | 81.0 | 90.4 | 92.2 | 93.1 |
| Unseen | CIFAR-100 | 23.5 | 23.7 | 22.5 | 22.9 | 28.4 | 43.0 | 47.0 | 49.5 |
| Unseen | ImageNet-100 | 36.7 | 32.5 | 33.8 | 20.8 | 28.7 | 72.1 | 75.5 | 76.1 |
| Unseen | Average | 36.9 | 33.9 | 33.6 | 27.7 | 46.0 | 68.5 | 71.6 | 72.9 |
| All | CIFAR-10 | 49.5 | 40.2 | 39.7 | 38.3 | 82.9 | 89.7 | 91.3 | 92.1 |
| All | CIFAR-100 | 20.3 | 24.0 | 23.5 | 18.3 | 23.1 | 48.1 | 52.1 | 52.9 |
| All | ImageNet-100 | 34.9 | 30.8 | 31.9 | 21.3 | 40.3 | 77.8 | 79.6 | 80.6 |
| All | Average | 34.9 | 31.7 | 31.7 | 26.0 | 48.8 | 71.9 | 74.3 | 75.2 |

4 Experiments 4.1 Experimental Setup Datasets We evaluate our model using established benchmarks in image classification: CIFAR-10, CIFAR-100 [Krizhevsky et al., 2009], and a subset of ImageNet [Deng et al., 2009]. The chosen ImageNet subset encompasses 100 classes, given its expansive size. Each dataset is partitioned such that the first 50% of the classes are considered seen and the rest novel. For these seen classes, we label 50% of the samples, and the remainder constitutes the unlabeled set. The results presented in this study were obtained from evaluations on a held-out test set, which comprises both previously seen and novel classes, ensuring a comprehensive assessment of the model's performance. We repeated all experiments for 3 runs and report the average results. Experimental Setup Following the compared methods [Cao et al., 2022; Guo et al., 2022], we pretrain our model using the SimCLR [Chen et al., 2020] method. In our experiments with the CIFAR datasets, we chose ResNet-18 as our primary backbone architecture.
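Returning to the class-conditional adversarial alignment above, the following sketch spells out the reverse-diffusion generator $g_\phi$ of Eqs. (16)-(17) and a standard alternating-update reading of the minimax objective in Eqs. (18)-(19). The discriminator/generator split, the small constants for numerical stability, and all names are assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def reverse_diffuse(noise_net, prompt, alphas, alpha_bars, T):
    """g_phi(eps, T, p_c): iteratively denoise a random feature vector (Eqs. (16)-(17)).
    Assumes the generated feature has the same dimension as the prototype prompt."""
    z = torch.randn(prompt.shape[0], prompt.shape[1], device=prompt.device)
    for t in range(T - 1, 0, -1):
        t_vec = torch.full((z.shape[0],), t, device=z.device, dtype=torch.long)
        eps_pred = noise_net(z, prompt, t_vec)
        z = (z - (1 - alphas[t]) / (1 - alpha_bars[t]).sqrt() * eps_pred) / alphas[t].sqrt()
    return z  # z_0

def adversarial_losses(disc, noise_net, real_feats, real_onehot, P, alphas, alpha_bars, T):
    """Class-conditional adversarial loss of Eq. (18), split into discriminator and
    generator objectives for alternating updates of the minimax game in Eq. (19)."""
    B, num_classes = real_onehot.shape
    c = torch.randint(0, num_classes, (B,))
    fake_onehot = F.one_hot(c, num_classes).float()
    fake_feats = reverse_diffuse(noise_net, P.t()[c], alphas, alpha_bars, T)

    d_real = disc(real_feats, real_onehot)
    d_fake = disc(fake_feats.detach(), fake_onehot)
    loss_disc = -(torch.log(d_real + 1e-8).mean() + torch.log(1 - d_fake + 1e-8).mean())
    loss_gen = -torch.log(disc(fake_feats, fake_onehot) + 1e-8).mean()
    return loss_disc, loss_gen
```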
The training process involves Stochastic Gradient Descent (SGD) with a momentum value set at 0.9 and a weight decay factor of 5e-4. The training duration is 200 epochs, using a batch size of 512. Only the parameters in the final block of ResNet are updated during the training to prevent overfitting. For the ImageNet dataset, the backbone model selected is ResNet-50 employing standard SGD for training, with a momentum of 0.9 and a weight decay of 1e-4. We train the model for 90 epochs and maintain the same batch size of 512. Across all our experiments, we apply the cosine annealing schedule to adjust the learning rate. Specifically for PDFD we set \u03b3u to 0.5, \u03b3diff to 1, \u03b3adv to 1, \u03c4 to 0.5 and T to 50. Regarding the architecture of the diffusion model, we adopt a transformer-based model in line with the methodology outlined in [Du et al., 2023]. The discriminator consists of three linear layers, with the first two followed by batch normalization and a ReLU activation function. 4.2 Comparison Results We conducted a comprehensive comparison of our PDFD method with various state-of-the-art SSL methods across different settings, including Fixmatch [Sohn et al., 2020] for standard SSL, DS3L [Guo et al., 2020] and CGDL [Sun et al., 2020] for open-set SSL, DTC [Han et al., 2019b] and RankStats [Han et al., 2019a] for NCD, and ORCA [Cao et al., 2022] and NACH [Guo et al., 2022] for OW-SSL. The evaluation included datasets of varying scales, namely CIFAR10, CIFAR-100 [Krizhevsky et al., 2009], using Resnet-18 backbone and ImageNet-100 [Russakovsky et al., 2015] using Resnet-50 backbone. The results presented in this study were obtained from evaluations on an unseen test set, which comprises both previously seen and novel classes, ensuring a comprehensive assessment of the model\u2019s performance. The comparative results are presented in Table 1. The results for all classes illustrate method performance in an OW-SSL setting where both seen and unseen classes are included in the test set. Our PDFD method outperforms all comparison methods across all datasets. Notably, on the ImageNet-100 dataset, PDFD exhibits a significant improvement of 1.0% on all classes compared to the previous state-of-the-art method NACH. It also demonstrates a 0.8% margin of improvement over the second-best algorithm on the CIFAR-10 and CIFAR100 datasets. The results show an overall performance increase of 0.9% on the average of all three datasets. We also evaluated the effectiveness of the methods in classifying unseen classes. On unseen classes, PDFD outperforms the previous best method on all three datasets. PDFD performs exceptionally well on the CIFAR-100 dataset, surpassing the secondbest method with a significant improvement of 2.5% on unseen classes. On both CIFAR-10 and ImageNet-100 datasets, PDFD also surpasses the previous best methods, exhibiting a 0.9% and 0.6% increase in overall performance across all three datasets on unseen classes. Despite the special treatment of novel classes in the unlabeled dataset, PDFD also demonstrates strong performance in standard SSL tasks. PDFD outperforms all previous SSL methods, even on standard SSL tasks on seen classes. PDFD exhibits a similar pattern on the CIFAR-100 dataset as on unseen classes, with a significant improvement of 1.5% over the second-best algorithm. PDFD also shows a 0.8% increase compared to the secondbest method in the average classification accuracy across all three datasets, demonstrating the best overall performance. 
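As a concrete reading of the discriminator described in the experimental setup above (three linear layers, the first two followed by batch normalization and a ReLU activation), here is a minimal sketch. The hidden width, the concatenation-based class conditioning, and the sigmoid output are assumptions not stated in the text.

```python
import torch
import torch.nn as nn

class ClassConditionalDiscriminator(nn.Module):
    """Three linear layers; the first two are followed by BatchNorm and ReLU.
    The class condition is concatenated to the feature vector (an assumption)."""
    def __init__(self, feat_dim, num_classes, hidden=512):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feat_dim + num_classes, hidden), nn.BatchNorm1d(hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.BatchNorm1d(hidden), nn.ReLU(),
            nn.Linear(hidden, 1), nn.Sigmoid(),  # probability that the feature is real
        )

    def forward(self, feat, onehot):
        return self.net(torch.cat([feat, onehot], dim=1)).squeeze(1)
```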
Table 2: Ablation study on the effect of different types of prompt. Classification accuracy (%) on CIFAR-100.

| Prompt | Seen | Unseen | All |
| $h_{\theta_{cls}}(f_{\theta_{feat}}(x_i))$ | 67.2 | 46.1 | 50.8 |
| $\mathbb{1}_c$ | 69.2 | 47.8 | 52.0 |
| $P \cdot \mathbb{1}_c$ (PDFD) | 70.2 | 49.5 | 52.9 |

Table 3: Ablation study. Classification accuracy (%) on CIFAR-100.

| Variant | Seen | Unseen | All |
| PDFD | 70.2 | 49.5 | 52.9 |
| w/o $\mathcal{L}^l_{ce}$ | 57.6 | 24.9 | 45.5 |
| w/o $\mathcal{L}^u_{ce}$ | 67.9 | 45.6 | 49.3 |
| w/o $\mathcal{L}_{diff}$ | 67.1 | 46.4 | 48.7 |
| w/o $\mathcal{L}_{adv}$ | 68.0 | 46.9 | 50.1 |
| w/o $\mathcal{L}_{adv}$ and $\mathcal{L}_{diff}$ | 66.6 | 45.2 | 47.7 |
| w/o Class condition | 68.1 | 47.1 | 50.7 |

4.3 Ablation Study Ablation on different prompts We conducted an ablation study to investigate the use of different types of prompts in PDFD. We compared the classification accuracy on the CIFAR-100 dataset of the full PDFD model, which uses the prototype corresponding to the class prediction ($P \cdot \mathbb{1}_c$) as the prompt, against two ablation variants: (1) $h_{\theta_{cls}}(f_{\theta_{feat}}(x_i))$, which uses the raw probability prediction output for the sample, and (2) $\mathbb{1}_c$, which uses the one-hot encoding of the class prediction. The results of the ablation study are presented in Table 2. Notably, utilizing prototypes as prompts achieved the highest accuracy among all three variants. Particularly on unseen classes, the use of prototypes significantly improved the classification performance. This finding suggests that class prototypes are a suitable way to implement prompts in our method, especially in enhancing the performance of PDFD in classifying unseen examples. Ablation on different components We conducted an ablation study to investigate the impact of different components of PDFD on the overall performance. The study focused on classification accuracy using the CIFAR-100 dataset, with six ablation variants: (1) \u201cw/o $\mathcal{L}^l_{ce}$\u201d, excluding the cross-entropy loss on labeled data; (2) \u201cw/o $\mathcal{L}^u_{ce}$\u201d, excluding the cross-entropy loss on unlabeled data; (3) \u201cw/o $\mathcal{L}_{diff}$\u201d, excluding the diffusion loss and thus disabling the feature-level diffusion model; (4) \u201cw/o $\mathcal{L}_{adv}$\u201d, excluding the adversarial loss and thus disabling adversarial training; (5) \u201cw/o $\mathcal{L}_{adv}$ and $\mathcal{L}_{diff}$\u201d, excluding both adversarial training and the diffusion model; (6) \u201cw/o Class condition\u201d, excluding the prompt in the diffusion model and the class condition in adversarial training. The ablation study results are presented in Table 3. PDFD achieves the highest classification accuracy across seen, unseen, and all classes, emphasizing the effectiveness of all model components. Notably, excluding supervised learning (\u201cw/o $\mathcal{L}^l_{ce}$\u201d) results in the largest decrease in accuracy. Excluding the diffusion model (\u201cw/o $\mathcal{L}_{diff}$\u201d) significantly lowers accuracy on seen and all classes, emphasizing the importance of this model component. While further excluding adversarial training (\u201cw/o $\mathcal{L}_{adv}$ and $\mathcal{L}_{diff}$\u201d) does not markedly impact seen-class accuracy, it does lead to reduced performance on unseen and all classes, supporting the goal of adversarial training to learn indistinguishable pseudo-labels for novel classes. Figure 2: Pseudo-Label Selection Analysis. (a) Confidence difference between seen and unseen classes during training on CIFAR-100. (b) Effect of distribution-aware pseudo-label selection on learning unseen classes during training on CIFAR-100.
The exclusion of cross entropy loss on unlabeled data (\u201c\u2212w/o Lu ce\u201d) results in a dramatic decrease in model performance on unseen and all classes. This finding supports the significance of each component in contributing to the effectiveness of PDFD. Pseudo-Label Selection Analysis Figure 2 illustrates the learning analysis of pseudo-labels throughout the training process. As depicted in subfigure (a), it is evident that the seen classes satisfy the confidence condition earlier than the unseen classes. Consequently, this leads to the under-representation of unseen classes in the initial stages of training, culminating in a suboptimal initialization of the model. This early skew towards seen classes can potentially bias the model\u2019s learning, impacting its ability to effectively recognize and adapt to the characteristics of the unseen classes as training progresses. In subfigure (b), the positive impact of our proposed component, distribution-aware pseudo-label selection, on the learning of unseen classes is visible. This method effectively addresses the initial imbalance observed in the learning process, enhancing the model\u2019s ability to recognize and accurately classify unseen classes. By considering the distribution characteristics of the data, our solution ensures a more equitable representation of classes in the training process, leading to improved model performance and generalization. 5 Conclusion In this paper, we proposed a novel Prompt-Driven Feature Diffusion (PDFD) approach to address the challenging setup of Open-World Semi-supervised Learning. The proposed PDFD approach deploys an efficient feature-level diffusion model with class-prototypes as prompts, enhancing the fidelity and generalizability of feature representation across both the seen and unseen classes. In addition, a class-conditional adversarial loss is further incorporated to support diffusion model training, strengthening the guidance of class prototypes for the diffusion process. Furthermore, we also utilized a distribution-aware pseudo-label selection strategy to ensure balanced class representation for SSL and reliable class-prototypes computation for the novel classes. We conducted extensive experiments on several benchmark datasets. Notably, our approach has demonstrated superior performance over a set of state-of-theart methods for SSL, open-set SSL, NCD and OW-SSL."
+ },
+ {
+ "url": "http://arxiv.org/abs/2404.08273v2",
+ "title": "Struggle with Adversarial Defense? Try Diffusion",
+ "abstract": "Adversarial attacks induce misclassification by introducing subtle\nperturbations. Recently, diffusion models are applied to the image classifiers\nto improve adversarial robustness through adversarial training or by purifying\nadversarial noise. However, diffusion-based adversarial training often\nencounters convergence challenges and high computational expenses.\nAdditionally, diffusion-based purification inevitably causes data shift and is\ndeemed susceptible to stronger adaptive attacks. To tackle these issues, we\npropose the Truth Maximization Diffusion Classifier (TMDC), a generative\nBayesian classifier that builds upon pre-trained diffusion models and the\nBayesian theorem. Unlike data-driven classifiers, TMDC, guided by Bayesian\nprinciples, utilizes the conditional likelihood from diffusion models to\ndetermine the class probabilities of input images, thereby insulating against\nthe influences of data shift and the limitations of adversarial training.\nMoreover, to enhance TMDC's resilience against more potent adversarial attacks,\nwe propose an optimization strategy for diffusion classifiers. This strategy\ninvolves post-training the diffusion model on perturbed datasets with\nground-truth labels as conditions, guiding the diffusion model to learn the\ndata distribution and maximizing the likelihood under the ground-truth labels.\nThe proposed method achieves state-of-the-art performance on the CIFAR10\ndataset against heavy white-box attacks and strong adaptive attacks.\nSpecifically, TMDC achieves robust accuracies of 82.81% against $l_{\\infty}$\nnorm-bounded perturbations and 86.05% against $l_{2}$ norm-bounded\nperturbations, respectively, with $\\epsilon=0.05$.",
+ "authors": "Yujie Li, Yanbin Wang, Haitao Xu, Bin Liu, Jianguo Sun, Zhenhao Guo, Wenrui Ma",
+ "published": "2024-04-12",
+ "updated": "2024-04-18",
+ "primary_cat": "cs.CV",
+ "cats": [
+ "cs.CV",
+ "cs.CR"
+ ],
+ "label": "Original Paper",
+ "paper_cat": "Diffusion AND Model",
+ "gt": "Since the inception of ImageNet [1] and its associated competitions, researchers have made significant strides in image classification tasks, particularly with deep neural networks achieving notable suc- cess in this domain. Previous endeavors have consistently deepened and broadened networks [2\u20135], employed residual structures [5, 6], and utilized transformer architectures [7\u20139]. These progressively Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org. ACM MM, 2024, Melbourne, Australia \u00a9 2024 Copyright held by the owner/author(s). Publication rights licensed to ACM. ACM ISBN 978-x-xxxx-xxxx-x/YY/MM https://doi.org/10.1145/nnnnnnn.nnnnnnn refined models consistently establish new benchmarks across sig- nificant datasets, showcasing exceptional performance. However, these models are trained and evaluated on samples from natural datasets, rendering them susceptible to disruptions. Adversarial attacks adeptly introduce imperceptible perturbations into image data, leading to misclassification by neural networks and yield- ing wholly inaccurate outcomes. Consequently, adversarial attacks have emerged as a common evaluation method for assessing model robustness. Given the crucial role of image classification tasks in fields such as facial recognition [10, 11], medical health [12, 13], and remote sensing [14, 15], the defense against adversarial attacks emerges as a key security concern. Presently, common defensive strategies include adversarial training and image denoising. Notably, the pu- rification approach, which employs diffusion models for denoising, has exhibited promising outcomes. This technique entails utilizing a diffusion model for the generation of image samples through noise addition and subsequent denoising processes, intended for classification or adversarial training purposes. However, it is sus- ceptible to high-intensity adaptive attacks, and the classification performance of the classifier on images post-purification remains suboptimal. We contend that a limiting factor constraining further augmentation of diffusion-based purification efficacy lies in the necessity for images processed by the diffusion model to undergo subsequent inference by discriminative classifiers, that is, in other words, the efficacy of the purification method is partly constrained by the classifier. The noise addition and denoising processes of the diffusion model may disrupt the data distribution of original im- ages, which adheres to the data boundaries learned by the classifier, thereby impeding performance enhancement. Hence, one might inquire, why not utilize the diffusion model alone directly for image classification? Diffusion models represent a contemporary class of powerful image generation models, distinguished by their inference pro- cess comprising forward diffusion and backward denoising stages predominantly. 
In the forward process, the model systematically introduces Gaussian noise to the image, whereas in the backward process, it undertakes denoising of the perturbed data. Throughout the training phase, Gaussian noise parameters are parameterized utilizing Evidence Lower Bound (ELBO) [16]. The diffusion model utilizes neural networks to predict the Gaussian noise added during the forward process to the samples and compute the loss against the ground truth. Previous research has transformed the Stable Diffusion, a conditional diffusion model, into a generative classifier known as the Diffusion Classifier [17], leveraging Bayesian theorem and computing Monte Carlo estimates for the noise predictions of each class. Li et al. [17] scrutinized its zero-shot performance as a classifier, whereas our study, differently, delves into the adversarial robustness of the Diffusion Classifier. During the inference process of the Diffusion Classifier, each class label undergoes transformation into prompts that are fed arXiv:2404.08273v2 [cs.CV] 18 Apr 2024 ACM MM, 2024, Melbourne, Australia Anonymous Authors into the model, directing it to infer parameterized noise predic- tions and compute losses against the ground truth. Subsequently, unbiased Monte Carlo estimates of the expected losses for each class are derived, and the final classification outcome is obtained through Bayesian theorem. Conceptually, this inference process entails comparing the relative magnitudes of model inference losses under different prompts. Hence, theoretically, it can be posited that adversarial attacks, which involve perturbations constrained by norms added to original images, would not significantly impact the inference outcomes of the Diffusion Classifier. Consequently, we propose the assertion that the Diffusion Classifier exhibits adversar- ial robustness, a proposition substantiated by empirical evidence. Furthermore, we introduce the Truth Maximization optimization method. This approach involves training the model with adver- sarially perturbed input data and conditioning it on text prompts composed of ground-truth labels. The objective is to minimize the prediction loss of parameterized noise in the diffusion process, thereby optimizing model parameters, which enables the model to learn the ability to accurately model image data into the correct cat- egories under adversarial perturbations. The optimization scheme aims to maximize the posterior probability values corresponding to the correct class under Bayesian inference, thereby mitigating significant disruptions in the relative posterior probabilities under attack. The classifier trained using this methodology is denoted as the Truth Maximized Diffusion Classifier (TMDC). Our study focuses on investigating the adversarial robustness of the Diffusion Classifier, a generative classifier based on the diffusion model. We propose the Truth Maximization approach to bolster the Diffusion Classifier\u2019s robustness against adversarial attacks through training. We conducted comparative analyses between the Diffusion Classifier and TMDC against other commonly utilized neural network classifiers, assessing their resilience under strong adaptive combined attacks and classical white-box attacks. Experi- mental findings demonstrate the exceptional adversarial robustness of the Diffusion Classifier relative to alternative classifiers. Further- more, the efficacy of the Truth Maximization optimization method is confirmed. 
The optimized classifier, TMDC, achieves remarkable testing accuracies of 82.81% ($l_\infty$) and 86.05% ($l_2$) on the CIFAR-10 dataset under robust Auto Attack [18] settings with $\epsilon = 0.05$ and the version set to \u201cplus\u201d, thereby attaining the current state-of-the-art performance level. The code for our work is available on GitHub [19].",
+ "main_content": "Since the breakthrough success of AlexNet in 2012 [2], deep neural networks (DNNs) have become pivotal in the realm of computer vision research and application. Subsequent advancements, exemplified by models such as VGG [20], ResNet [6], ViT [9], and their numerous variants have significantly advanced the state-of-the-art in image classification tasks across prominent datasets. However, despite their outstanding performance in conventional tasks, these models are highly vulnerable to adversarial attacks \u2013 techniques devised to mislead deep learning models by introducing imperceptible perturbations to natural data. To assess the robustness of these models, numerous adversarial attack methods have been proposed by previous researchers under both black-box and white-box paradigms [18, 21\u201325], with the aim of effectively compromising neural networks. Common strategies to bolster models against such attacks include adversarial training [23], which involves incorporating adversarial perturbations into the training data to improve the model\u2019s performance under adversarial conditions. Additionally, methods such as adversarial purification [26, 27] have recently gained widespread attention. This approach, focusing on data rather than the model, mitigates adversarial attacks by adding noise into adversarial samples and subsequently denoising them. Nonetheless, such processes may introduce gradient obfuscation issues [28]. 2.2 Generative Classifiers Diverging from discriminative methods that directly delineate data boundaries for image classification, generative approaches, akin to Naive Bayes, first learn the distribution characteristics of image data and then address classification tasks through maximum likelihood estimation modeling. Models such as Naive Bayes [29], EnergyBased Models (EBM) [30, 31], and the Diffusion Classifier [17] are constructed under the generative paradigm. Taking Naive Bayes as an example, it models the input image x and label y to derive the data likelihood \ud835\udc5d(\ud835\udc65|\ud835\udc66), thereby accomplishing classification through maximum likelihood estimation to derive \ud835\udc5d(\ud835\udc66|\ud835\udc65). \ud835\udc5d(\ud835\udc66\ud835\udc56| \ud835\udc65) = \ud835\udc5d(\ud835\udc66\ud835\udc56) \ud835\udc5d(\ud835\udc65| \ud835\udc66\ud835\udc56) \ufffd \ud835\udc57\ud835\udc5d\ufffd\ud835\udc66\ud835\udc57 \ufffd\ud835\udc5d\ufffd\ud835\udc65| \ud835\udc66\ud835\udc57 \ufffd (1) del (JEM) [32], utilizing EBM, reinterprets the stanive classifier of as the joint distribution \ufffd \ufffd \ufffd\ufffd| \ufffd Joint Energy Model (JEM) [32], utilizing EBM, reinterprets the standard discriminative classifier of \ud835\udc5d(\ud835\udc66|\ud835\udc65) as the joint distribution \ud835\udc5d(\ud835\udc65,\ud835\udc66), thereby computing \ud835\udc5d(\ud835\udc65) and \ud835\udc5d(\ud835\udc65|\ud835\udc66) to resolve classification tasks. The Diffusion Classifier [17] simulates data distribution during noise addition and denoising processes, modeling \ud835\udc5d(\ud835\udc66|\ud835\udc65) for image classification by maximizing the Evidence Lower Bound (ELBO) of the log-likelihood [16]. Previous research has demonstrated the zero-shot classification ability of the Diffusion Classifier, while our work further showcases its adversarial robustness against adversarial attacks. 
2.3 Fine-tuning of Stable Diffusion As a powerful and widely acclaimed Text-to-Image image generation model, the Stable Diffusion series [7, 33, 34] is often employed directly for tasks such as image classification and image generation. Moreover, fine-tuning the model parameters towards specific imagetext pairs for downstream tasks can yield enhanced performance. However, full-parameter fine-tuning of Stable Diffusion poses challenges such as computational resources constraints, time overhead, and potential catastrophic forgetting. In the domain of large language models, the Lora method [35\u201338] proposed for Transformer architectures [7] is suitable for application to Stable Diffusion. The LoRA method acknowledges that only a small subset of model parameters plays a significant role when targeting specific tasks. Consequently, it becomes feasible to notably diminish the number of training parameters by substituting the highdimensional parameter matrix with a low-dimensional decomposition matrix. If the size of the pre-trained parameter matrix is set to \ud835\udc51\u00d7 \ud835\udc51, it is then replaced with two matrices of size \ud835\udc51\u00d7 \ud835\udc5fand \ud835\udc5f\u00d7 \ud835\udc51 Struggle with Adversarial Defense? Try Diffusion ACM MM, 2024, Melbourne, Australia Figure 1: Simplified Illustration of Lora. Utilizing lowdimensional matrices to approximate high-dimensional ones, where pre-trained weights are frozen, and Lora tensors are employed for training. The memory require during training approaches that of the model\u2019s inference process. This configuration reduces both training time and memory overhead, while effectively mitigating catastrophic forgetting. (\ud835\udc51\u226b\ud835\udc5f), as illustrated in Figure 1. During LoRA fine-tuning, the pre-trained parameters are frozen while the LoRA module undergoes training. Upon completion of training, the Lora parameters are seamlessly integrated with the original parameters, thereby substantially reducing the number of parameters trained during fine-tuning without altering the original parameters. Fine-tuning Stable Diffusion using the LoRA method can drastically reduce training time and significantly alleviate memory requirements. 3 METHODS We adopt the method outlined in the Diffusion Classifier [17] to compute class conditional estimates of images utilizing a pre-trained Stable Diffusion model [34, 39], thereby constructing an image classifier based on the Diffusion Model for the task of image classification with adversarial perturbations. Subsequently, we propose an approach aimed at enhancing the adversarial robustness of the Diffusion Classifier. \u00a73.1 provides an overview of the Diffusion Model, while \u00a73.2 outlines the approach of leveraging the Diffusion Model for image classification tasks, with an elaboration on improving its adversarial robustness in \u00a73.3. 3.1 Diffusion Models Diffusion models [40] represent a class of discrete-time generative model based on Markov chains. The overall process of the model entails both forward noisy passage and backward denoising. Given an input \ud835\udc650, the model performs \ud835\udc47rounds of noise addition. Each round of noise addition, denoted as\ud835\udc5e(\ud835\udc65\ud835\udc61| \ud835\udc65\ud835\udc61\u22121), follows a Gaussian distribution, ultimately yielding \ud835\udc65\ud835\udc47\u223c\ud835\udc41(0, \ud835\udc3c). 
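The low-rank substitution described in \u00a72.3 can be sketched as a wrapper around a frozen linear layer, as below. The rank, scaling convention, and merge step shown here are illustrative rather than the exact fine-tuning configuration used in this work.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """LoRA-style adaptation: the pre-trained weight W (d_out x d_in) is frozen and a
    trainable low-rank update B @ A with rank r << d is added to its output."""
    def __init__(self, base: nn.Linear, r=4, alpha=4.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad_(False)                                 # freeze pre-trained weights
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r))    # zero init: no change at start
        self.scale = alpha / r

    def forward(self, x):
        return self.base(x) + self.scale * (x @ self.A.t() @ self.B.t())

# After fine-tuning, the low-rank update can be merged back into the frozen weight:
#   base.weight.data += scale * (B @ A)
# so inference uses the original layer shape with no extra parameters.
```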
During the denoising process, the model learns the noise added in each round to denoise the image back to \ud835\udc650, optionally utilizing low-dimensional text embeddings \ud835\udc66for guidance. The denoising process can be represented as \ud835\udc5e(\ud835\udc65\ud835\udc61\u22121 | \ud835\udc65\ud835\udc61,\ud835\udc66). The entire process can be represented as follows: \ud835\udc5d\ud835\udf03(x0,\ud835\udc66) = \ud835\udc5d(x\ud835\udc47,\ud835\udc66) \ud835\udc47 \u00d6 \ud835\udc61=1 \ud835\udc5d\ud835\udf03(x\ud835\udc61\u22121 | x\ud835\udc61,\ud835\udc66) (2) Due to the presence of integrals, directly maximizing \ud835\udc5d(\ud835\udc650) poses significant challenges. Therefore, the objective is transformed into minimizing the ELBO of the log-likelihood value [16]. log\ud835\udc5d\ud835\udf03(\ud835\udc65,\ud835\udc66) \u2265\u2212E\ud835\udf50,\ud835\udc61 \u0002 \ud835\udc64\ud835\udc61\u2225\ud835\udf50\ud835\udf03(x\ud835\udc61,\ud835\udc61,\ud835\udc66) \u2212\ud835\udf50\u22252 2 \u0003 + \ud835\udc36 (3) We consider \ud835\udc65\ud835\udc61= \u221a\u00af \ud835\udefc\ud835\udc61\ud835\udc56\ud835\udc65+\u221a\ufe011 \u2212\u00af \ud835\udefc\ud835\udc61\ud835\udc56\ud835\udf16\ud835\udc56, E\ud835\udf50,\ud835\udc61 \u0002 \ud835\udc64\ud835\udc61\u2225\ud835\udf50\ud835\udf03(x\ud835\udc61,\ud835\udc61) \u2212\ud835\udf50\u22252 2 \u0003 refers to as diffusion loss in prior studies [41], and \ud835\udf16follows the standard normal distribution \ud835\udc41(0, \ud835\udc3c). Previous work has demonstrated that \ud835\udc36is typically a negligible value, which can be disregarded in practical computations [17, 42], and in practice, researchers [17, 40] often eliminate \ud835\udc4a\ud835\udc61to enhance model performance. Thus, we set \ud835\udc4a\ud835\udc61= 1. In this transformation, we parameterize the Gaussian noise added at each time step, enabling the neural network to predict the noise at each step during the backward denoising process. The Stable diffusion 2.0 we adopt allows for the selective addition of a text prompt, whose low-dimensional embeddings obtained after text encoding can serve as conditional guidance for the denoising process of the neural network. 3.2 Diffusion Classifier In contemporary computer vision literature, prevalent neural network architectures such as Convolutional Neural Networks (CNNs) [2, 43] and Transformer-based architectures [9, 44] typically adopt discriminative approaches for visual classification tasks. These approaches directly delineate the boundaries of different categories of image data through learning. Conversely, the Diffusion model falls within the realm of generative models. When employed as a classifier, it naturally necessitates the utilization of Bayesian theorem. Specifically, it involves calculating the posterior probability given labels \ud835\udc66and modeling of the data \ud835\udc5d(\ud835\udc65| \ud835\udc66): \ud835\udc5d\ud835\udf03(\ud835\udc66\ud835\udc56| \ud835\udc65) = \ud835\udc5d(\ud835\udc66\ud835\udc56) \ud835\udc5d\ud835\udf03(\ud835\udc65| \ud835\udc66\ud835\udc56) \u00cd \ud835\udc57\ud835\udc5d\u0000\ud835\udc66\ud835\udc57 \u0001 \ud835\udc5d\ud835\udf03 \u0000\ud835\udc65| \ud835\udc66\ud835\udc57 \u0001 (4) In the classification process, posterior probabilities corresponding to each class label are computed separately. 
Therefore, \ud835\udc5d(\ud835\udc66\ud835\udc56) is always equal to 1 \ud835\udc36(where \ud835\udc36represents the total number of classes), allowing for the elimination of \ud835\udc5d(\ud835\udc66\ud835\udc56) during calculations. \ud835\udc5d\ud835\udf03(\ud835\udc66\ud835\udc56| \ud835\udc65) = \ud835\udc5d\ud835\udf03(\ud835\udc65| \ud835\udc66\ud835\udc56) \u00cd \ud835\udc57\ud835\udc5d\ud835\udf03 \u0000\ud835\udc65| \ud835\udc66\ud835\udc57 \u0001 (5) Considering the computational difficulty of \ud835\udc5d\ud835\udf03(\ud835\udc65| \ud835\udc66\ud835\udc56), we substitute it with \ud835\udc59\ud835\udc5c\ud835\udc54(\ud835\udc5d(\ud835\udc65| \ud835\udc66\ud835\udc56)). Based upon the derivation of the Evidence Lower Bound, we combine Eq. 5 with Eq. 3 to deduce the formula for posterior probability for each class. \ud835\udc5d\ud835\udf03(\ud835\udc66\ud835\udc56| \ud835\udc65) = exp \b \u2212E\ud835\udc61,\ud835\udf16 \u0002 \u2225\ud835\udf16\u2212\ud835\udf16\ud835\udf03(\ud835\udc65\ud835\udc61,\ud835\udc66\ud835\udc56)\u22252\u0003\t \u00cd \ud835\udc57exp n \u2212E\ud835\udc61,\ud835\udf16 h\r \r\ud835\udf16\u2212\ud835\udf16\ud835\udf03 \u0000\ud835\udc65\ud835\udc61,\ud835\udc66\ud835\udc57 \u0001\r \r2io (6) ACM MM, 2024, Melbourne, Australia Anonymous Authors Figure 2: Overview of the Inference Process of the Diffusion Classifier. Perturbed images are fed into the Diffusion model for both forward noisy processing and backward denoising, with the guiding textual prompt also inputted into the model. The model computes the posterior probabilities corresponding to each class label using Bayes\u2019 theorem, and the maximum posterior probability corresponds to the inference result of the classifier. The objective of the inference process in classification can be transformed into selecting the class corresponding to the minimum average error between the noise inferred by the diffusion model at each sampling point and the ground truth value. Leveraging the Diffusion model, we can compute the \ud835\udf16for each \ud835\udc61\ud835\udc56 (with the default setting of \ud835\udc56\u2208[1, ..., 1000]). Consequently, we can derive unbiased Monte Carlo estimates of the expected value for each class, thus yielding the diffusion loss. \ud835\udc5a\ud835\udc52\ud835\udc4e\ud835\udc5b \r \r \r\ud835\udf16\ud835\udc56\u2212\ud835\udf16\ud835\udf03 \u0010\u221a\ufe01 \u00af \ud835\udefc\ud835\udc61\ud835\udc56\ud835\udc65+ \u221a\ufe01 1 \u2212\u00af \ud835\udefc\ud835\udc61\ud835\udc56\ud835\udf16\ud835\udc56,\ud835\udc66 \u0011\r \r \r 2 (7) Combining it with the aforementioned derivations, as depicted in Figure 2, construction of the generative classification model utilizing the diffusion model is achieved, building upon the work by Li et al. [17]. The work demonstrated the remarkable zero-shot performance of the Diffusion Classifier in open-domain classification scenarios without requiring training. In contrast, our study shifts focus towards its adversarial robustness, utilizing Stable Diffusion 2.0. We contend that it exhibits superior resilience against adversarial perturbations in images without requiring training, compared to other neural networks. 3.3 Robust Truth Maximization After conducting comparative experiments under various attacks, we have demonstrated the adversarial robustness of the Diffusion Classifier. Furthermore, we delve into strategies to enhance its robustness, aiming to contribute more to the research on robustness of classification models. 
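A minimal sketch of the resulting inference rule in Eqs. (5)-(7): estimate the expected noise-prediction error for every candidate class prompt by Monte Carlo sampling over timesteps and noise, then take the class with the smallest error. The noise network and per-class prompts stand in for Stable Diffusion's noise-prediction U-Net and text embeddings; batching and sample counts are illustrative.

```python
import torch

@torch.no_grad()
def diffusion_classify(noise_net, x, class_prompts, alpha_bars, n_samples=50):
    """Generative classification: per-class Monte Carlo estimate of the diffusion
    loss (Eq. (7)), turned into a posterior via Eq. (6) and an argmin decision."""
    errors = []
    for prompt in class_prompts:                        # one conditioning prompt per class label
        errs = []
        for _ in range(n_samples):
            t = torch.randint(0, alpha_bars.shape[0], (x.shape[0],), device=x.device)
            ab = alpha_bars[t].view(-1, *([1] * (x.dim() - 1)))
            eps = torch.randn_like(x)
            x_t = ab.sqrt() * x + (1.0 - ab).sqrt() * eps
            eps_pred = noise_net(x_t, prompt, t)
            errs.append(((eps - eps_pred) ** 2).flatten(1).mean(dim=1))
        errors.append(torch.stack(errs).mean(dim=0))    # unbiased estimate of E_{t, eps}[...]
    errors = torch.stack(errors, dim=1)                 # (batch, num_classes)
    posterior = torch.softmax(-errors, dim=1)           # Eq. (6): softmax over negative losses
    return errors.argmin(dim=1), posterior
```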
To enhance the classifier's accuracy, according to Eq. (6), the model should be trained to minimize its diffusion loss, $\mathbb{E}_{\epsilon, t}\left[w_t \|\epsilon_\theta(x_t, t) - \epsilon\|_2^2\right]$, when provided with the ground-truth class labels as input. This entails shifting the model's backward denoising predictions, guided by the true labels, towards the ground-truth values. In order to enhance the robustness of the diffusion model against adversarial attacks, we draw inspiration from the traditional adversarial training employed in vision classifiers [23]. While generative models cannot directly model the data boundaries between different classes during adversarial-sample training, optimizing the model by inputting adversarial samples along with their ground-truth labels and minimizing the diffusion loss can improve the model's capability to model samples augmented by adversarial attacks. Following Eq. (6) and Eq. (7), we define the training loss as: $Loss = \frac{1}{T}\sum_{t=0}^{T-1}\left[\|\epsilon_\theta(t, y_{true}) - \epsilon(t)\|^2\right]$ (8) Our work utilizes pre-trained Stable Diffusion 2.0 with approximately 354 million parameters. Performing full-parameter training would incur significant memory and time overheads, potentially compromising the pre-trained model's image modeling capabilities. Thus, we employ the LoRA fine-tuning technique to mitigate this issue. By employing a decomposition method that approximates high-dimensional parameter matrices with low-dimensional matrices, we reduce the memory requirements for training to the level of model inference. The trained LoRA module is then seamlessly merged with the pre-trained parameters to maintain the original modeling capabilities of the pre-trained model. During the training process, we input augmented samples $x$ from the training set along with their correct labels $y$ into Stable Diffusion. A pre-trained scheduler is then employed for noise injection, and the model predicts the noise at each time step, calculates the loss, and minimizes it. We refer to the model obtained through this approach as the Truth Maximized Diffusion Classifier (TMDC). For a detailed outline of the classifier's training and inference process, please refer to Algorithm 1. 4 EXPERIMENTS We conducted a series of rigorous experiments, employing various black-box and white-box attack methods to assess the adversarial robustness of both the Diffusion Classifier and TMDC. Furthermore, we compared their performance with popular neural networks in the field of computer vision. \u00a74.1 elucidates the detailed experimental setup and training specifics of the models. \u00a74.2 showcases the results of the robustness study of the Diffusion Classifier under several classical white-box attacks.
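Algorithm 1 below gives the full procedure; as a complement, here is a hedged sketch of a single Truth Maximization update for Eq. (8). Sampling a subset of timesteps instead of summing over all $T$ is an assumption made for brevity, and the optimizer is assumed to hold only the trainable (e.g., LoRA) parameters.

```python
import torch

def truth_maximization_step(noise_net, optimizer, x_adv, y_prompts, alpha_bars, n_timesteps=20):
    """One Truth Maximization update: condition the diffusion model on the
    ground-truth class prompt of an adversarially perturbed image and minimize
    the averaged noise-prediction error (a Monte Carlo stand-in for Eq. (8))."""
    losses = []
    for _ in range(n_timesteps):
        t = torch.randint(0, alpha_bars.shape[0], (x_adv.shape[0],), device=x_adv.device)
        ab = alpha_bars[t].view(-1, *([1] * (x_adv.dim() - 1)))
        eps = torch.randn_like(x_adv)
        x_t = ab.sqrt() * x_adv + (1.0 - ab).sqrt() * eps
        eps_pred = noise_net(x_t, y_prompts, t)          # guided by the true-label prompt
        losses.append(((eps - eps_pred) ** 2).mean())
    loss = torch.stack(losses).mean()

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()                                     # updates only the trainable parameters
    return loss.item()
```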
Algorithm 1: Truth Maximized Diffusion Classifier (TMDC). Notation: $X$: dataset; $N$: data batch; $x$: image; $y$: ground-truth label; $\epsilon$: model prediction; $\tau$: learning rate; $T$: time steps; $W$: weights of the diffusion model; $L$: list of data classes (car, truck, horse, ..., plane).
Model Training:
1: for $N \in X$ do
2:   $x, y \leftarrow N$
3:   for $t$ in $T$ do
4:     $\epsilon(t) \leftarrow Scheduler(x, t)$
5:   end for
6:   for $t$ in $T$ do
7:     $\epsilon_\theta(t, y) \leftarrow ModelPredict(x, y, t)$
8:   end for
9:   $Loss \leftarrow \frac{1}{T}\sum_{t=0}^{T-1}\left[\|\epsilon_\theta(t, y_{true}) - \epsilon(t)\|^2\right]$
10:  $g \leftarrow \nabla Loss$
11:  $W \leftarrow W - \tau g$
12: end for
13: return $W$
Model Inference:
1: for $N \in X$ do
2:   for $y \in L$ do
3:     $LossList[y] \leftarrow list()$
4:     for $t$ in $T$ do
5:       $\epsilon(t) \leftarrow Scheduler(x, t)$
6:       $\epsilon_\theta(t, y) \leftarrow ModelPredict(x, y, t)$
7:       $LossList[y].append\left(\|\epsilon_\theta(t, y) - \epsilon(t)\|^2\right)$
8:     end for
9:     $LossList[y] \leftarrow mean(LossList[y])$
10:  end for
11:  $result \leftarrow \arg\min_{y \in L} LossList[y]$
12: end for
13: return $result$
\u00a74.3 presents the model's performance under Auto Attack, a widely recognized black-box and white-box combined attack method. Lastly, \u00a74.4 entails the ablation study of the TMDC method. 4.1 Experiment Settings Dataset: Considering the characteristics of the dataset and the time overhead incurred by attack algorithms and model training, we opt for CIFAR10 [45] to conduct our experiments.
To assess the adversarial robustness of Diffusion Classifier and TMDC, inspired by the method of utilizing a subset of data for detection as proposed in DiffPure [26], and to eliminate testing randomness, we select 1024 data points from the CIFAR10 test set of 10,000 items for evaluation. Moreover, during the training process of the TMDC method, we endeavor to optimize Stable Diffusion 2.0 on the CIFAR10 training set. Practical Implementation Setup: In the naive implementation process of Algorithm 1 for model inference, it necessitates computing over all time steps for each class in the category list for classification. This inevitably imposes a heavy computational burden. Inspired by the upper confidence bound algorithm [45], it is possible to save computation by prematurely discarding class labels that significantly fail to meet classification requirements based on diffusion loss. When dealing with CIFAR10, we adhere to the setup proposed by Li et al. [17], where we initially compute losses for all labels over 50 time steps, discard the top 5 labels with the highest losses, and proceed with computations over 500 time steps for the remaining labels, thereby obtaining the final classification results. Training Setup: We conducted training of the diffusion model on a single A100 (80GB) GPU, with a batch size set to 4. We employed the AdamW optimizer with a learning rate of 1e-6, beta parameters set to (0.9, 0.999), weight decay of 1e-2, and epsilon set to 1e-8. Optimization was performed over 3,000 steps on the CIFAR10 training set, utilizing a constant with warmup learning rate scheduler with a warmup step of 100. For the final experimental evaluation, we selected the checkpoint after 200 steps of optimization, a configuration validated in the ablation study of \u00a74.4. 4.2 White-box Attack Robustness In this section, we employ two widely-used white-box attack algorithms to introduce adversarial perturbations to the test data, thereby evaluating the adversarial robustness of the Diffusion Classifier, shown in \u00a74.2.1. Additionally, we subject TMDC to attacks of the same intensity. In contrast, the remaining models undergo adversarial training for comparison, aiming to assess the effectiveness of our model optimization compared to the widely-used adversarial training on discriminative classifiers, as described in \u00a74.2.2. Adversarial Attacks: This experiment employs two white-box attack algorithms, namely FGSM [22] and PGD [23]. We use a pretrained ResNet50 model on CIFAR10 as the attack generator to introduce perturbations into the test data. The FGSM algorithm, contrary to the gradient descent method used in neural network training optimization, adds smooth perturbations, denoted as \ud835\udf16, along the direction of the gradient constrained by the \ud835\udc59\u221e\u2212\ud835\udc5b\ud835\udc5c\ud835\udc5f\ud835\udc5a to maximize the loss function, leading to misclassification by the model. The PGD attack method is an improved version of FGSM, performing multiple iterations based on single-step attacks under the \ud835\udc59\u221e\u2212\ud835\udc5b\ud835\udc5c\ud835\udc5f\ud835\udc5ato achieve better attack effectiveness. In our experiment, we set the \ud835\udf16parameter for FGSM and PGD attacks to 0.05, while the number of iterations for PGD attacks is set to 40. 4.2.1 Comparison of White-box Adversarial Robustness. We extracted 1024 samples from the CIFAR10 dataset and introduced adversarial perturbations using FGSM and PGD. 
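Referring back to the staged label elimination in the implementation setup above, here is a minimal sketch for a single image: a cheap coarse pass over all labels, dropping the five worst, followed by the expensive fine pass over the survivors. The per-class loss estimator is a stand-in for the Monte Carlo diffusion loss sketched earlier and is assumed to return a scalar.

```python
import torch

@torch.no_grad()
def staged_classify(per_class_loss, x, labels, coarse_steps=50, fine_steps=500, drop_worst=5):
    """Staged label elimination: score every candidate label cheaply, discard the
    clearly wrong ones, then spend the expensive pass only on the survivors."""
    coarse = {y: per_class_loss(x, y, n_samples=coarse_steps) for y in labels}
    keep = max(1, len(labels) - drop_worst)
    survivors = sorted(labels, key=lambda y: coarse[y])[:keep]      # lowest coarse loss first
    fine = {y: per_class_loss(x, y, n_samples=fine_steps) for y in survivors}
    return min(fine, key=fine.get)                                  # predicted class label
```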
Then, we utilized a rapid algorithm for staged label elimination to allow the classifier to infer predicted labels and calculate the model\u2019s accuracy under adversarial attacks, in conjunction with the ground-truth labels, for comparison with other popular neural networks trained on CIFAR10. The experimental results are presented in Table 1. Under the FGSM attack, the accuracy of ResNet50 dropped from 90.51% to 39.77%, Vit_B/16 from 98.10% to 23.69%, and WideResNet50 from 98.05% to 22.40%, all experiencing decreases of over 50%. Vit and WideResNet50 even dropped by over 75%. Conversely, the untrained Diffusion Classifier achieved an accuracy of 89.44% on the clean data, dropping to 50.17% under the FGSM attack, with a significantly lower decrease in accuracy compared to other models. Under the PGD attack algorithm with \ud835\udf16set to 0.05 and 40 iterations, the accuracy of all other models dropped to 0.0%, whereas that of ACM MM, 2024, Melbourne, Australia Anonymous Authors Table 1: Comparison of White-box Adversarial Robustness. ResNet50, Vit_B/16, and WideResNet50 were all trained on the CIFAR10 dataset, and then subjected to robustness testing on test data with adversarial perturbations. In contrast, the Diffusion Classifier was directly tested on the test set with added attacks. baselines Clean FGSM(\ud835\udf16= 0.05) PGD(\ud835\udf16= 0.05,\ud835\udc56\ud835\udc61\ud835\udc52\ud835\udc5f= 40) ResNet50 [6] 90.51% 39.77% 0.0% Vit_B/16 [9] 98.10% 23.69% 0.0% WideResNet50 [5] 98.05% 22.40% 0.0% Diffusion Classifier(OURS) 89.44% 50.17% 42.30% Table 2: Comparison between Truth Maximization and Adversarial Training. ResNet50, Vit_B/16, and WideResNet50 all underwent multiple rounds of adversarial training on data augmented with the PGD algorithm. Training was halted once the model\u2019s classification performance stabilized, after which robustness testing was conducted using the test set. In contrast, the Diffusion Classifier was subjected to robustness testing after optimization through Truth Maximization on the same augmented data. Baselines PGD(\ud835\udf16= 0.05,\ud835\udc56\ud835\udc61\ud835\udc52\ud835\udc5f= 40) ResNet50 45.39% Vit_B/16 39.72% WideResNet50 45.77% TMDC(OURS) 70.02% the Diffusion Classifier only decreased to 42.30%. These experimental results demonstrate the outstanding robustness of the Diffusion Classifier against white-box adversarial attacks when compared to other neural networks, even in an untrained state. 4.2.2 Comparison between Truth Maximization and Adversarial Training. We trained Stable Diffusion using data augmented with PGD adversarial perturbations, resulting in the model TMDC. Meanwhile, the other models underwent adversarial training using the PGD algorithm [23]. Then we conducted a comparative study of the robustness of each model on the test set under PGD attacks. The experimental results are presented in Table 2. After undergoing adversarial training, the accuracy of ResNet50 under PGD attacks increased from its original 0.0% to 45.39%, while Vit_B/16 rose to 39.72%, and WideResNet50 increased to 45.77%. This demonstrates that adversarial training can effectively enhance the adversarial robustness of widely used discriminative classifiers. Meanwhile, TMDC achieved an accuracy of 70.02% under the same adversarial attacks, significantly outperforming the commonly used adversarial training methods in enhancing model robustness. 
Thus, while discriminative classifiers can conveniently improve robustness through adversarial training, the Diffusion Classifier, as a generative classifier, likewise has an effective optimization method in Truth Maximization. 4.3 Auto Attack Robustness In this section, we employ the Auto Attack method to evaluate the adversarial robustness of the Diffusion Classifier and TMDC under combined attacks. For rigorous conclusions, we compare them with discriminative classifiers and additionally include the JEM generative classifier, as well as DiffPure, another widely recognized approach for combating adversarial attacks. Adversarial Attack: Auto Attack [18] is a combined adversarial attack that encompasses both black-box and white-box components. It improves upon PGD through the APGD algorithm, which adjusts the step size automatically: it moves quickly with a large step size, gradually reduces the step size to maximize the objective function locally, and restarts from the current local maximum once step-size halving is detected, thereby mounting more effective attacks against neural networks. APGD comes in different versions depending on the target loss function, and Auto Attack combines several APGD variants with the Square attack (black-box) and the FAB attack (white-box), forming a combination of black-box and white-box attacks. In this experiment we set the version of Auto Attack to “plus” (a combination of all attack types) and constrain the perturbation with both the l2 and l∞ norms. Stable Diffusion 2.0 is optimized using the Truth Maximization method on data augmented with Auto Attack, and experiments are then conducted on the test set augmented with the same attacks to assess its adversarial robustness. The remaining comparative approaches are attacked with the same algorithm, with all groups using adversarial samples generated by a ResNet50 pretrained on CIFAR10. The experimental results are presented in Table 3. Under the l∞-norm-constrained Auto Attack, the accuracy of WideResNet50 and Vit_B/16 on the test set plummeted to 0.0%, while that of ResNet50 dropped to 0.5%; after purifying the perturbations through DiffPure, the accuracy of ResNet50 reached 57.94%, and JEM achieved 10.13%. In contrast, the Diffusion Classifier exhibited excellent robustness against this combination of black-box and white-box attacks, achieving 79.52% without any training, and after optimization with Truth Maximization, TMDC's accuracy further increased to 82.81%.
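As a reference for reproducing this setting, the sketch below uses the public autoattack package (Croce & Hein) to generate the combined “plus” attack. The batch size, and passing the seed through the constructor, are assumptions on our side rather than details taken from the paper.

```python
from autoattack import AutoAttack  # reference implementation by Croce & Hein

def run_auto_attack(model, x, y, norm="Linf", eps=0.05, batch_size=256):
    """Generate combined black-/white-box adversarial examples with the
    'plus' version of Auto Attack (all APGD variants + FAB + Square)."""
    model.eval()
    adversary = AutoAttack(model, norm=norm, eps=eps, version="plus", seed=2024)
    # run_standard_evaluation returns the adversarial images for the batch.
    return adversary.run_standard_evaluation(x, y, bs=batch_size)

# x_adv_linf = run_auto_attack(resnet50, images, labels, norm="Linf")
# x_adv_l2   = run_auto_attack(resnet50, images, labels, norm="L2")
```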
Table 3: Comparison of Auto Attack Robustness. Under both l∞ and l2 norm constraints, Auto Attack was conducted with ε = 0.05 and the seed set to 2024. In the DiffPure experiments, the purified samples were re-fed into a ResNet50 pretrained on CIFAR10 for testing; this ResNet50 shares the same weights as the model used in the comparative experiments.
Baselines | Auto Attack (l∞ norm) | Auto Attack (l2 norm)
DiffPure [26] | 57.94% | 75.34%
JEM [32] | 10.13% | 26.56%
WideResNet50 [5] | 0.0% | 23.98%
Vit_B/16 [9] | 0.0% | 31.42%
ResNet50 [6] | 0.50% | 37.52%
Diffusion Classifier (OURS) | 79.52% | 81.18%
TMDC (OURS) | 82.81% | 86.05%

Under the l2-norm-constrained Auto Attack, the accuracy of WideResNet50 on the test set was 23.98%, Vit_B/16 achieved 31.42%, and ResNet50 reached 37.52%; after purifying the perturbations through DiffPure, the accuracy of ResNet50 rose to 75.34%, and JEM attained 26.56% under this norm. The untrained Diffusion Classifier achieved 81.18%, while TMDC reached 86.05%. Under both the l∞- and l2-norm-constrained Auto Attack scenarios, the classifiers built from Stable Diffusion 2.0 demonstrated superior adversarial robustness compared to the other models. Furthermore, compared with the strategy of purifying the data with DiffPure and re-feeding it into ResNet50, the Diffusion Classifier also achieved higher classification performance. 4.4 Ablation Study To validate the effectiveness of the Truth Maximization optimization applied to the Diffusion Classifier, as well as the choice of checkpoint during training, we conducted an ablation study. §4.4.1 presents our investigation of Truth Maximization, while §4.4.2 covers the experiments on checkpoint selection. 4.4.1 Ablation on Truth Maximization. In this experiment, we employ PGD (iter = 40), Auto Attack (l∞), and Auto Attack (l2) as three adversarial attack methods. For each attack, we randomly select 5 different seeds to sample data from the test set, evaluate the accuracy of the Diffusion Classifier and TMDC under attack, and report the averaged results. The experimental outcomes are illustrated in Figure 3. Under all three attacks, Truth Maximization consistently yields effective optimization of the Diffusion Classifier. Under the PGD attack, models optimized through Truth Maximization improve in average test accuracy from 42.32% to 70.08%, a relative increase of 65.59%, marking a substantial gain in adversarial robustness. Under the two norm-constrained Auto Attack scenarios, accuracy rises from 79.11% (l∞) and 81.19% (l2) to 82.79% and 86.13%, respectively, showing notable improvements under combined attacks as well. Figure 3: Comparison between Diffusion Classifier and TMDC. The PGD attack uses ε = 0.05 and 40 iterations, as in Section 4.2; Auto Attack uses the “plus” version with ε = 0.05 and five sets of distinct random seeds. Further corroborated by the findings in §4.2.2, TMDC achieves a higher accuracy of 70.02% under PGD attack than the other classification models trained with adversarial training. These experimental outcomes collectively underscore the efficacy of the Truth Maximization methodology in enhancing the adversarial robustness of the Diffusion Classifier.
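The following is a rough sketch, under the diffusers/transformers APIs, of a single Truth Maximization update: the Stable Diffusion UNet is fine-tuned to minimize the standard noise-prediction (diffusion) loss on adversarially perturbed images conditioned on their ground-truth class prompts. The VAE and text encoder are assumed frozen and omitted, an epsilon-prediction objective is assumed, and the optimizer/scheduler settings are those quoted in the training setup; none of this is the authors' exact code.

```python
import torch
import torch.nn.functional as F
from transformers import get_constant_schedule_with_warmup

def truth_maximization_step(unet, noise_scheduler, latents, label_embeds,
                            optimizer, lr_sched):
    """One Truth Maximization update (illustrative sketch).

    latents: VAE-encoded adversarially perturbed images;
    label_embeds: text embeddings of their ground-truth class prompts.
    The objective is the standard noise-prediction loss conditioned on the
    correct label, as described in the text.
    """
    noise = torch.randn_like(latents)
    t = torch.randint(0, noise_scheduler.config.num_train_timesteps,
                      (latents.shape[0],), device=latents.device)
    noisy = noise_scheduler.add_noise(latents, noise, t)
    pred = unet(noisy, t, encoder_hidden_states=label_embeds).sample
    loss = F.mse_loss(pred, noise)          # minimize the diffusion loss
    loss.backward()
    optimizer.step(); lr_sched.step(); optimizer.zero_grad()
    return loss.item()

# Settings quoted from the training setup (assumed to apply to the UNet only):
# optimizer = torch.optim.AdamW(unet.parameters(), lr=1e-6,
#                               betas=(0.9, 0.999), weight_decay=1e-2, eps=1e-8)
# lr_sched = get_constant_schedule_with_warmup(optimizer, num_warmup_steps=100)
```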
Furthermore, in contrast to adversarial training, applying Truth Maximization to diffusion models yields superior performance. Figure 4: Study on Checkpoint Selection. For Auto Attack, the version is uniformly set to “plus”, with ε = 0.05 and the seed fixed at 2024. Throughout the Truth Maximization training process, a “constant with warmup” learning rate scheduler is employed, with the learning rate set to 1e-6 and 100 warm-up steps; both sets of experiments are optimized for 3,000 steps. 4.4.2 Ablation on Checkpoint Selection. In this experiment, we conducted trials under the two norm-constrained Auto Attack scenarios, using the same random seed to select the test data for both groups. All experiments were optimized with the Truth Maximization methodology for 3,000 steps; during this optimization, checkpoints were saved every 100 steps for the first 500 steps and every 1,000 steps thereafter. The optimizer and learning rate scheduler settings remained consistent with those outlined in §4.1, ensuring the validity and coherence of the experimental setup. The experimental results are shown in Figure 4. Truth Maximization strengthens the classification capability of the Diffusion Classifier by minimizing the diffusion loss, used as the objective function, on perturbed images conditioned on the ground-truth labels of the training set, thereby improving the diffusion model's ability to model the augmented images under the correct labels. However, it does not directly improve the model's capacity to model the boundaries between different classes, so we must consider the impact of the number of optimization steps on the classifier's final performance. As depicted in Figure 4, when checkpoints are saved every 100 steps, the model's accuracy on the test set peaks around the 200th-step checkpoint, reaching 86.05% and 82.81% under the two attacks, and gradually decreases thereafter. Moreover, in the experimental group under the l∞-norm constraint, the model's accuracy at step 3,000 is even lower than before optimization: beyond 200 steps of training the model overfits the training data, and the diffusion-loss boundaries induced by different class labels on the test data become blurred, weakening classification performance. Consequently, after Truth Maximization training we select the model from the 200th-step checkpoint for subsequent testing. 5 DISCUSSION Collaboration with Purification: Using a diffusion model to purify adversarial samples for image classification, or to generate samples for adversarial training, is subject to uncertainty stemming from shifts in the image data distribution, which leaves it vulnerable to high-intensity adaptive attacks; this vulnerability is partly attributable to the limited performance of the classifier applied after the purified images are generated. Nevertheless, our comparative experiments show that purification-based methods consistently outperform the other baseline approaches.
Therefore, anchored in the purification paradigm, developing a classifier based on diffusion models that leverages the statistical uncertainty of the data and uses the class-conditional posterior probabilities for classification holds promise for bolstering adversarial resilience. Decoupling from Training: Despite achieving excellent adversarial robustness, our proposed TMDC method remains constrained by the need to train on adversarial samples, requiring a dedicated training set for the diffusion model and thus incurring inefficiencies in computational resources and time. To mitigate these challenges, decoupling from training, segmenting the inference process of the diffusion model into multiple stages, and optimizing the sampling strategy offer fertile ground for exploration. Such an approach could not only enhance the model's classification performance under adversarial attacks but also improve inference efficiency, thereby conserving computational resources and time. 6 CONCLUSION In light of the widespread vulnerability of commonly used visual neural network classifiers to adversarial attacks, we conducted thorough assessments and found that the Diffusion Classifier, derived from a robust generative model, demonstrates excellent adversarial robustness. Using the diffusion model as a conditional density estimator, we modeled image data guided by text prompts through a combination of the Evidence Lower Bound (ELBO) and unbiased Monte Carlo estimation, and applied Bayes' theorem to construct the classifier. Additionally, we proposed a model optimization approach termed Truth Maximization which, through training guided by ground-truth labels, further enhances the adversarial robustness of the pre-trained Stable Diffusion-based generative classifier; models trained with this approach are denoted Truth Maximization Diffusion Classifier (TMDC). Through empirical evaluation against classical white-box attacks and widely employed strong combined adaptive attacks such as Auto Attack, we demonstrated the exceptional adversarial robustness of the Diffusion Classifier even in the absence of explicit training. Moreover, the optimized TMDC model achieved state-of-the-art performance against strong white-box attacks and combined adaptive attacks on the CIFAR-10 dataset."
+ },
+ {
+ "url": "http://arxiv.org/abs/2404.10487v1",
+ "title": "Early-time gamma-ray constraints on cosmic-ray acceleration in the core-collapse SN 2023ixf with the Fermi Large Area Telescope",
+ "abstract": "While SNRs have been considered the most relevant Galactic CR accelerators\nfor decades, CCSNe could accelerate particles during the earliest stages of\ntheir evolution and hence contribute to the CR energy budget in the Galaxy.\nSome SNRs have indeed been associated with TeV gamma-rays, yet proton\nacceleration efficiency during the early stages of an SN expansion remains\nmostly unconstrained. The multi-wavelength observation of SN 2023ixf, a Type II\nSN in the nearby galaxy M101, opens the possibility to constrain CR\nacceleration within a few days after the collapse of the RSG stellar\nprogenitor. With this work, we intend to provide a phenomenological,\nquasi-model-independent constraint on the CR acceleration efficiency during\nthis event at photon energies above 100 MeV. We performed a maximum-likelihood\nanalysis of gamma-ray data from the Fermi Large Area Telescope up to one month\nafter the SN explosion. We searched for high-energy emission from its expanding\nshock, and estimated the underlying hadronic CR energy reservoir assuming a\npower-law proton distribution consistent with standard diffusive shock\nacceleration. We do not find significant gamma-ray emission from SN 2023ixf.\nNonetheless, our non-detection provides the first limit on the energy\ntransferred to the population of hadronic CRs during the very early expansion\nof a CCSN. Under reasonable assumptions, our limits would imply a maximum\nefficiency on the CR acceleration of as low as 1%, which is inconsistent with\nthe common estimate of 10% in generic SNe. However, this result is highly\ndependent on the assumed geometry of the circumstellar medium, and could be\nrelaxed back to 10% by challenging spherical symmetry. A more sophisticated,\ninhomogeneous characterisation of the shock and the progenitor's environment is\nrequired before establishing whether or not Type II SNe are indeed efficient CR\naccelerators at early times.",
+ "authors": "G. Mart\u00ed-Devesa, C. C. Cheung, N. Di Lalla, M. Renaud, G. Principe, N. Omodei, F. Acero",
+ "published": "2024-04-16",
+ "updated": "2024-04-16",
+ "primary_cat": "astro-ph.HE",
+ "cats": [
+ "astro-ph.HE",
+ "astro-ph.GA",
+ "astro-ph.SR"
+ ],
+ "label": "Original Paper",
+ "paper_cat": "Diffusion AND Model",
+ "gt": "The origin of cosmic rays (CRs) is still an open issue, and the Galactic sources that can accelerate CRs up to the so-called knee of the CR spectrum at PeV energies have yet to be determined. In the standard paradigm (which includes both CR production and transport), the bulk of Galactic CRs would be produced in supernovae (SNe) and their remnants (SNRs; see Blasi 2013, for a review). In such a scenario, it is particularly relevant that SN events typically release an energy of \u223c1051 erg, meaning that, at their observed rate, a \u223c10% energy transfer into CRs is suffi- cient to explain the energetics of Galactic CRs. However, and among other issues (see e.g. Gabici et al. 2019), very-high energy (VHE; > 100 GeV) observations with Cherenkov telescopes consistently reveal multi-TeV cut-offs in most SNRs (see e.g. Acero et al. 2015; Ahnen et al. 2017b; H. E. S. S. Collaboration et al. 2018). That is, known Galac- tic SNRs do not appear to be significant contributors to the CR flux at PeV energies at their current evolutionary stage. This in- consistency with the standard paradigm can be alleviated if SNe accelerate CRs up to these energies at very early stages (i.e. within a few days after the event; V\u00f6lk & Biermann 1988; Tatis- cheff 2009; Bell et al. 2013; Schure & Bell 2013; Bykov et al. 2018; Marcowith et al. 2018; Cristofari et al. 2020; Inoue et al. 2021; Brose et al. 2022). However, testing this hypothesis re- mains challenging due to the low number of SNe detected at sufficiently short distances. Some studies have attempted to ob- tain such constraints before (Margutti et al. 2014; Ackermann et al. 2015; H. E. S. S. Collaboration et al. 2015; Ahnen et al. 2017a; H. E. S. S. Collaboration et al. 2019; Murase et al. 2019; Prokhorov et al. 2021), but obtaining effectual limits on the CR Article number, page 1 of 13 arXiv:2404.10487v1 [astro-ph.HE] 16 Apr 2024 A&A proofs: manuscript no. LAT_CR_SN2023ixf population at early times is inherently problematic. On one hand, TeV facilities can rapidly observe a SN within days after the out- burst, but \u03b3\u2013\u03b3 absorption heavily impacts the expected flux at those times (Marcowith et al. 2014). On the other hand, current GeV \u03b3-ray detectors require long exposure times (\u223c1 month) to reach noteworthy luminosity limits for distant events (Ack- ermann et al. 2015). Hence, previous efforts to estimate the CR energy reservoir during the expansion of SNe were impacted by such limitations. SN 2023ixf is a SN Type II (for a review see e.g. Smith 2014) recently discovered by Itagaki (2023) in the nearby M101 (D = 6.85 Mpc; Riess et al. 2022), with an estimated explo- sion time at T0 = 60082.743 \u00b1 0.083 MJD (Hiramatsu et al. 2023). Its progenitor was a red supergiant (RSG; see e.g. Jenc- son et al. 2023; Kilpatrick et al. 2023; Niu et al. 2023; Qin et al. 2023; Xiang et al. 2024), with an increased mass-loss rate during the last years before the outburst of \u02d9 MRSG = 10\u22123\u201310\u22122 M\u2299/yr (Bostroem et al. 2023; Hiramatsu et al. 2023; Jacobson-Gal\u00e1n et al. 2023; Teja et al. 2023). Together with a large shock and wind velocities inferred (Vs,0 \u2243104 km/s and uw \u2243100 km/s; Smith et al. 2023), this leads to the ideal conditions for particle acceleration and \u03b3-ray production through hadronic channels. Together with its proximity, \u03b3-ray observations of SN 2023ixf thus offer an unprecedented opportunity to explore CR accelera- tion in the very early expansion of the SN shock. 
Here we report a quasi-model-independent constraint on CR acceleration in SN 2023ixf, providing an experimental test of the hypothesis that CCSNe efficiently accelerate protons at early times. In Section 2 we present the GeV γ-ray observations and data analysis procedure, with the corresponding results detailed in Section 3. In Section 4 we estimate the corresponding CR acceleration efficiencies, while possible major biases in our assumptions are discussed in Section 5. Finally, we summarise our findings in Section 6.",
+ "main_content": "The Large Area Telescope (LAT) is a pair-production \u03b3-ray detector on board the Fermi satellite, launched in 2008 (Atwood et al. 2009). Operating in an all-sky survey mode, it detects \u03b3ray photons from 20 MeV to more than 500 GeV. As it observes the whole sky every \u223c3 h, Fermi-LAT is an ideal instrument to detect and follow-up on high-energy transient sources. The LAT point-spread function (PSF) is highly energydependent, being able to reconstruct the incoming direction of \u03b3-rays at 68% confidence level within 5\u25e6and 0.1\u25e6for 100 MeV and > 10 GeV events, respectively. We note that at even lower energies, the large PSF might lead to source confusion between point-like sources (for an alternative analysis implementation, see Principe et al. 2018). Therefore, for our standard analysis we select only P8R3 data (Atwood et al. 2013; Bruel et al. 2018) between 100 MeV and 500 GeV within a region of interest (ROI) of 10\u25e6\u00d7 10\u25e6centred at the optical position of SN 2023ixf (Right Ascension (RA) = 14:03:38.580, Declination (Dec) = +54:18:42.10; Itagaki 2023). To study the early expansion of the SN shock, we selected data from T0 up to 31 days after the outburst (mission elapsed time (MET); 706125000 \u2013 708803400). We applied a maximum zenith angle cut at 90\u25e6 to reduce Earth\u2013limb contamination while averaging over the azimuthal angle, and selecting only FRONT+BACK SOURCE events (evtype=3, evclass=128). In addition, a DATA_QUAL>0 and LAT_CONFIG==1 filter is applied. For its subsequent analysis, the ROI is then divided into 0.1\u25e6spatial bins, while using eight bins per logarithmic energy decade. 0 5 10 15 20 25 30 t T [d] 15 t T0 [d] 10 11 10 10 10 9 10 8 Energy flux [erg/cm2/s] Best photon triplet Optical-UV data Fermi-LAT limits Fig. 1. Fermi-LAT upper limits on the integrated energy flux above 100 MeV for different exposure times, compared with the best-fit optical energy flux from Zimmerman et al. (2024) in black. We note that observations for \u2206T = 7 d are less constraining than \u2206T = 5 d as a result of having a larger TS value, although they are still consistent with a statistical fluctuation (see Appendix A). The grey vertical line indicates the arrival time of the best photon triplet candidate. We performed a maximum-likelihood analysis on this dataset (Mattox et al. 1996) using fermipy and the fermitools (Wood et al. 2017; Fermi Science Support Development Team 2019). As our background model, we employed the third 4FGL data release (4FGL-DR3) \u2014which is based on 12 years of survey data (Abdollahi et al. 2020, 2022)\u2014 up to 2.5\u25e6beyond the edge of our ROI, as well as the latest Galactic and isotropic diffuse models (gll_iem_v07.fit and iso_P8R3_SOURCE_V3_v1.txt)1. To evaluate the detection significance of individual sources, we employed the test statistic (TS ) defined as TS = \u22122 ln (L0/L1) , (1) where L is the likelihood value for the null hypothesis and L \u2212 where L0 is the likelihood value for the null hypothesis and L1 the likelihood for the complete model. The TS follows a \u03c72 distribution, and the larger its value, the less likely the null hypothesis. As an example, for one degree of freedom, \u221a TS will approximately be the resulting significance in sigma (). esis. As an example, for one degree of freedom, \u221a TS will approximately be the resulting significance in sigma (\u03c3). 
First we adjust our background model to our ROI by means of the optimize function in fermipy, and refine it within its central region, fitting the normalisation parameters of all sources within 3\u25e6of SN 2023ixf (including the diffuse components). In a final step, we include an additional point-like source to account for any putative emission from the SN, for which we assume a simple power-law spectral model: dN\u03b3 dN\u03b3 dE = N0 \ufffdE E0 ral p E0 \ufffd\u2212\u0393 , (2) hoton index 2. In addition to our one-month \ufffd \ufffd with spectral photon index \u0393 = 2. In addition to our one-month analysis, the same analysis procedure is applied to exposure times of 1, 3, 5, 7, and 14 days2. In each data set, we correct for 1 https://fermi.gsfc.nasa.gov/ssc/data/access/lat/ BackgroundModels.html 2 BackgroundModels.html 2 These analyses complete our preliminary report in Marti-Devesa (2023). Thes (2023). Article number, page 2 of 13 G. Mart\u00ed-Devesa et al.: Early-time constraints on cosmic-ray acceleration in SN 2023ixf 14h20m 00m 13h40m 58\u00b0 56\u00b0 54\u00b0 52\u00b0 50\u00b0 RA DEC 4 2 0 2 4 TS 14h20m 00m 13h40m 58\u00b0 56\u00b0 54\u00b0 52\u00b0 50\u00b0 RA DEC 6 4 2 0 2 4 6 PS Fig. 2. Fermi-LAT smoothed residual map with a bicubic interpolation of our ROI centred on SN 2023ixf for 1 month of observations after T0. (Left): Standard significance map as implemented in fermipy (Wood et al. 2017). White crosses represent 4FGL sources included in the background model, while the yellow diamond is the nominal position of SN 2023ixf. (Right): Data\u2013model deviation estimate employing a PS map of the same ROI saturated at PS = 6.24 (equivalent to a 5\u03c3 threshold; Bruel 2021). energy dispersion for all sources, except for the isotropic diffuse emission, which does not require it. 3. Results Our analysis from Section 2 does not significantly detect SN 2023ixf; therefore, here we report the \u03b3-ray limits obtained. 3.1. Maximum-likelihood flux limits As no significant flux is obtained in any of the time windows explored, we provide 95% confidence upper limits on the integrated photon and energy fluxes above 100 MeV (Fig. 1), and derive spectral energy distributions with two logarithmic energy bins per decade. A complete summary of the limits for different times and energies is provided in Appendix A. For a distance D = 6.85 Mpc, this corresponds to a luminosity limit of L\u03b3(> 100 MeV) < 8.4 \u00d7 1040 \u2013 4.8 \u00d7 1041 erg/s, depending on the exposure time. This is further complemented with light curves derived with bins of 1, 3, and 7 days in size. In each bin, the normalisations of the five brightest sources in the ROI are first left free, and then the normalisation of SN 2023ixf is estimated. These analyses also do not report significant emission in any time bin, nor the presence of any flare at later times. The largest significance is obtained on the 3rd and 24th days after T0, with TS = 4.8 and TS = 4.9, respectively (\u223c2\u03c3, consistent with a statistical fluctuation). Similarly, no significant excess is found at other positions of the ROI. For completeness, the residual map obtained for the one-month analysis is shown in Fig. 2, and cross-checked with the p-value statistic (PS ) data\u2013 model deviation estimator developed by Bruel (2021). A \u223c3\u03c3 excess is appreciable at the left edge of the ROI with both methods, far away from the SN. Alternatively, we can also explore short timescales around the SN explosion. 
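A minimal sketch of this fermipy workflow, assuming a configuration file that encodes the data selection described in Section 2; the config path, the spectral Prefactor/Scale starting values, and the exact call sequence are illustrative rather than the authors' script. The coordinates are the optical position of SN 2023ixf quoted above, converted to degrees.

```python
from fermipy.gtanalysis import GTAnalysis

# 'config.yaml' is assumed to encode the selection described in the text:
# 100 MeV - 500 GeV, 10 x 10 deg ROI on SN 2023ixf, 0.1 deg pixels,
# 8 bins per decade, zmax = 90 deg, evclass=128/evtype=3, 4FGL-DR3 plus
# the gll_iem_v07 and iso_P8R3_SOURCE_V3_v1 diffuse components.
gta = GTAnalysis('config.yaml', logging={'verbosity': 3})
gta.setup()

# Adjust the background model to the ROI, then refine sources near the SN.
gta.optimize()
gta.free_sources(distance=3.0, pars='norm')

# Putative SN emission as a point source with a fixed Gamma = 2 power law
# (Prefactor and Scale are placeholder starting values, not from the paper).
gta.add_source('SN2023ixf',
               {'ra': 210.910750, 'dec': 54.311694,
                'SpectrumType': 'PowerLaw', 'Index': 2.0,
                'Scale': 1000.0, 'Prefactor': 1e-13})
fit = gta.fit()
sed = gta.sed('SN2023ixf')            # per-bin fluxes / upper limits
print(gta.roi['SN2023ixf']['ts'])     # detection significance via the TS
```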
The position of SN 2023ixf was not visible by the LAT at t = T0 but entered the LAT field of view at T0 + 4.8 ks and remained observable until \u223cT0 + 8 ks. In analogy to what is commonly done for the analysis of \u03b3-ray bursts with the LAT, we also performed an unbinned maximumlikelihood analysis with gtburst3 over this time interval using P8R3 data with TRANSIENT10E event class. We selected photons with energies of between 100 MeV and 500 GeV within an ROI with a radius of 12\u25e6centred on the optical position of SN 2023ixf and with a maximum zenith angle of 100\u25e6. As in the previous analysis, we included the background contribution from sources of the 4FGL catalogue as well as the latest Galactic and isotropic diffuse models (gll_iem_v07.fit and iso_P8R3_TRANSIENT010E_V3_v1.txt). Once again, we find no significant detection at the SN location (TS = 2.0) and, assuming a power-law model with a photon index of \u0393 = 2, we set a 95% confidence upper limit on the integrated energy flux above 100 MeV of 5.2 \u00d7 10\u221210 erg cm\u22122 s\u22121. 3.2. Photon clusters search Given the lack of significant \u03b3-ray emission with the standard likelihood analysis (see Section 2), we searched for individual photons possibly connected to the SN event. To this aim, we applied an analysis method to search for photon triplets (for a detailed description see Fermi-LAT Collaboration et al. 2021), which has previously been adopted in searches for \u03b3-ray signals from magnetar flares and fast radio bursts (Fermi-LAT Collaboration et al. 2021; Principe et al. 2023). For our analysis, we selected all the SOURCE class photons between 100 MeV and 500 GeV detected by the LAT over 14.9 years (i.e. from the start of operations until June 30, 2023) in an ROI with a radius of 1\u25e6centred on the source position. We estimated the time interval \u2206ti for each triplet of photons i formed by three consecutive events, \u2206ti = ti+2 \u2212ti, (3) and corrected this quantity for the effect of bad time intervals by subtracting, from each \u2206ti, the period of time during which the ROI was not observable by the LAT. In addition to creating the distribution of the whole photon triplets coming from the source position, we investigated the first 3 https://fermi.gsfc.nasa.gov/ssc/data/analysis/ scitools/gtburst.html Article number, page 3 of 13 A&A proofs: manuscript no. LAT_CR_SN2023ixf Fig. 3. Triplet distribution. This is the distribution of the time intervals \u2206t of the photon triplets with (filled green) and without (red line) considering the correction of the bad intervals due to the LAT orbit and field of view. The expected distribution for independent events is shown as a black line. The vertical lines represent the first photon triplet after T0 (in cyan) and the shortest duration triplet in the two weeks after T0 (in blue), respectively. The latter is obtained from an analysis of a two-week interval after T0 in order to account for the uncertainty on the estimated T0, as well as for a possible delay in the \u03b3-ray emission from the SN. A significant triplet would appear in the left tail of the distribution. This is not the case for either of the highlighted triplets. triplet after T0 and its potential association to the SN event, as well as the shortest duration triplet in the two weeks after T0 for possible long-term emission. Similarly to the procedure used in Fermi-LAT Collaboration et al. (2021) and Principe et al. 
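A compact numpy sketch of the triplet-duration computation of Eq. (3), including the subtraction of bad time intervals during which the ROI was not observable; `bad_intervals` is a hypothetical list of (start, stop) times and the function names are ours.

```python
import numpy as np

def triplet_durations(photon_times, bad_intervals):
    """Durations of consecutive photon triplets (illustrative sketch).

    photon_times: sorted arrival times (s) of SOURCE-class photons in the
    1-deg ROI; bad_intervals: (start, stop) periods during which the ROI was
    not observable by the LAT, subtracted from each triplet duration.
    """
    t = np.asarray(photon_times)
    raw = t[2:] - t[:-2]                     # Eq. (3): dt_i = t_{i+2} - t_i
    corrected = raw.copy()
    for i, (t0, t2) in enumerate(zip(t[:-2], t[2:])):
        for b0, b1 in bad_intervals:
            # Remove the overlap of each bad interval with [t_i, t_{i+2}].
            corrected[i] -= max(0.0, min(t2, b1) - max(t0, b0))
    return raw, corrected
```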
(2023), we used the likelihoodratio method defined in Li & Ma (1983) to estimate the probability that a cluster of three photons occurs by chance due to statistical fluctuations of the background, in the time range \u2206tSN = t\u03b3\u2212ray,3 \u2212TT0, where t\u03b3\u2212ray,3 is the time of the third photon of the triplet after T0. For further details on the derivation of significance of the photon triplets following T0, see Eqs. 4 and 5 in Principe et al. (2023). Figure 3 shows the distribution of the photon triplets. The first photon triplet after T0 is detected on 2023 May 18, at 20:45 UTC (about 3 hours after T0) and presents a duration of \u2206t \u223c181900 s. This triplet presents a pretrial p-value of smaller than 2\u03c3, indicating that it is most likely due to a statistical fluctuation. Considering the uncertainty on the estimated T0, as well as a possible delay on the \u03b3-ray emission from the SN, we also searched for the shortest duration photon triplet in the 14 days after T0. The shortest duration triplet (\u2206t = 14062 s) was observed at 9:26 UTC on 2023 May 24, more than five days after the SN event and past the optical peak (Fig. 1). In this case, even if we would expect a flash of \u03b3-ray emission from the SN at the time of the first photon of the triplet, the probability of these three photons to be associated to the SN is smaller than 2.4\u03c3. 4. Discussion We can firstly assess the relevance of the limits derived above by comparing them with non-thermal expectations from similar SNe. For this purpose, we briefly consider here the model from Tatischeff (2009) developed for SN 1993J in M81. This was also a nearby SN (\u223c3.4 Mpc; Kudritzki et al. 2012) with 5 10 15 20 25 30 t T0 [d] 10 14 10 13 10 12 10 11 10 10 10 9 Average Energy flux [erg/cm2/s] MRSG = 10 2 M /yr Radio-derived limit (Te = 105 K) Radio-derived limit (Te = 104 K) X-ray-derived density Fig. 4. Average integrated energy flux for SN 2023ixf as predicted by the SN 1993J model from Tatischeff (2009) compared with integrated energy flux upper limits from the Fermi-LAT (coloured markers). Fluxes and limits are integrated starting at T0 + 1 d. We assume uw = 100 km/s, in line with spectroscopic results (Smith et al. 2023). Radio/millimetre (230 GHz) lower limits from Berger et al. (2023) (uw = 115 km/s) for free-free absorption assuming different electron temperatures Te are also displayed. Finally, the prediction derived from X-ray absorption features in Grefenstette et al. (2023) (marginally consistent with millimetre-wavelength limits) is shown for completeness. a RSG progenitor. In Tatischeff (2009), the radio emission from the expanding shell is modelled by assuming diffusive shock acceleration (DSA) as the dominant acceleration mechanism, but also including non-linear effects. The differential \u03b3-ray flux expected for SN 2023ixf at early times would then be (Cristofari et al. 2020): dN\u03b3 dE = 3.5 \u00d7 10\u221211 \" D 6.85 Mpc #\u22122 \" \u02d9 MRSG 10\u22122 M\u2299/yr #2 \u0014 t 1 d \u0015\u22121 \u00d7 \" Vs 104 km/s #2 \" uw 100 km/s #\u22122 \u0014 E 1 TeV \u0015\u22122 TeV\u22121cm\u22122s\u22121. (4) Using the inferred values from multi-wavelength observations of SN 2023ixf (Appendix B), we can compute the expected average integrated energy flux between 100 MeV and 100 GeV for different exposure times after ti = T0+1 d. 
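For reference, the scaling relation of Eq. (4) can be evaluated directly; the function below simply reproduces that formula with the fiducial parameters quoted in the text (D = 6.85 Mpc, Mdot = 1e-2 M_sun/yr, Vs = 1e4 km/s, uw = 100 km/s), and any integration over energy or time is left to the caller.

```python
def predicted_flux(E_TeV, t_day, D_Mpc=6.85, Mdot=1e-2, Vs=1e4, uw=100.0):
    """Differential gamma-ray flux expected for SN 2023ixf, Eq. (4) in the text,
    scaled from the SN 1993J model of Tatischeff (2009) / Cristofari et al. (2020).

    E_TeV: photon energy [TeV]; t_day: time since the explosion [days];
    Mdot in M_sun/yr, Vs and uw in km/s. Returns TeV^-1 cm^-2 s^-1.
    """
    return (3.5e-11
            * (D_Mpc / 6.85) ** -2
            * (Mdot / 1e-2) ** 2
            * (t_day / 1.0) ** -1
            * (Vs / 1e4) ** 2
            * (uw / 100.0) ** -2
            * (E_TeV / 1.0) ** -2)

# The integrated 0.1-100 GeV energy flux at a given t is then obtained by
# numerically integrating E * predicted_flux(E, t) over that energy range.
```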
The shock velocity assumed (Vs = 104 km/s) is also typical for a Type II SN, and is consistent with the limits from observational constraints on SN 2023ixf (Bostroem et al. 2023; Jacobson-Gal\u00e1n et al. 2023; Teja et al. 2023). As we cannot directly compare the prediction with our observational limits as derived in Section 2, we repeat the analyses including data up to the same dates (tf = T0+ 3, 5, 7, 14, and 31 days), but starting at ti. In Fig. 4 we observe that the prediction lies well above our upper limits for a representative mass-loss rate inferred from optical observations. We note that the LAT limits are derived for a flat (\u0393 = 2) spectrum, while realistically a hadronic spectrum will be mildly different, most notably with a break at the lowest energies (the so-called \u03c00bump, which we consider in Section 4.1). Neglecting a realistic hadronic scenario in Fig. 4 leads to overestimation of the upper limits on the integrated energy flux by \u223c10%, thus not affecting our conclusion. Combined with the results from the Submillimeter Array (SMA) at 230 GHz (Berger et al. 2023), our limits exclude a Article number, page 4 of 13 G. Mart\u00ed-Devesa et al.: Early-time constraints on cosmic-ray acceleration in SN 2023ixf 10 1 101 103 105 Energy [GeV] 10 12 10 11 E2dN/dE [erg/cm2/s] Fig. 5. Fermi-LAT limits at 95% confidence level for the one-month time interval compared with the \u03c00 decay differential flux from a proton population with Ecutoff = 1 PeV (solid line), 10 TeV (dashed line), or 1 TeV (dotted line). These have the same normalisation N0, and a CR proton energy of ECR = 2.7 \u00d7 1046 erg, ECR = 1.8 \u00d7 1046 erg, and ECR = 1.5\u00d71046 erg, respectively. Limits at energies greater than 101.5 GeV lie above 10\u221211 erg/cm2/s. For the target material, a density \u03c10 = 5.6\u00d710\u221214 g/cm3 (Bostroem et al. 2023) is assumed. substantial fraction of the parameter space for the direct application of the underlying assumptions in the SN 1993J model to SN 2023ixf. Therefore, LAT observations are able to directly constrain the efficiency of the system at transferring kinetic energy of the expanding shock into high-energy CRs. For simplicity, we did not construct a complete model for the shock in SN 2023ixf, but instead estimated the CR energy reservoir directly assuming that protons can gain energy through DSA. This process leads to an average differential proton distribution that at a certain time t follows a power law with an exponential cut-off in momentum space; that is, dNp dE = \u03b2N0 E E0 !\u2212p exp \u2212 E Ecutoff ! , (5) where \u03b2 is the particle\u2019s velocity, which corrects the spectral shape of a proton population below \u223c1 GeV when described in kinetic energy (Dermer 2012). Standard DSA predicts that p = 2 (see e.g. Blasi 2013), and as our goal is to see if SNe can accelerate protons (at least) up to the knee of the CR spectrum, we fix Ecutoff to 1 PeV. Tentatively, this should be the most efficient acceleration mechanism at the shock. As most non-linear considerations would effectively soften the spectrum, we note that in order to maintain the same proton density at 1 PeV, the inclusion of those effects does increase the fluxes expected in the GeV band, making any constraints even more stringent. Hereafter, we also set a minimum proton kinetic energy of 100 MeV. 4.1. 
Limits on the total energy transferred into CRs The accelerated protons will interact with the gas in their immediate surroundings, leading to hadronic cascades and a subsequent \u03b3-ray production; for example, through \u03c00 decays. We can compute the expected \u03b3-ray spectral energy distribution (SED) for a proton population that follows the distribution in Eq. 5 for any arbitrary N0 given a target proton density (see Fig. 6. Density profile encountered by the shock as a function of time for different expansion parameters m (including m = 0.83 as in Tatischeff 2009, a model applied to SN 1993J). We assumed \u02d9 MRSG = 10\u22122 M\u2299/yr, uw = 100 km/s, and Vs,0 = 104 km/s. The average density \u03c10 obtained by Bostroem et al. (2023) is also displayed, as well as the best-fit density profile from the r1w6b model in Jacobson-Gal\u00e1n et al. (2023). The greyshaded region excludes the times not considered in our discussion on the SN energy budget. Fig. 5). For this purpose, we used the package naima (Zabalza (2015), which incorporates the cross-section \u03c3pp from Kafexhiu et al. 2014) to compute the SED for several proton populations. In particular, we computed \u03b3-ray fluxes expected from ten CR distributions that have total energies ECR of between 1045 and 1048 erg (uniformly distributed in log ECR). Furthermore, and given that the \u03b3-ray flux depends linearly on the target density, we normalised the results to the average density of \u03c10 = 5.6 \u00d7 10\u221214 g/cm3 as derived in Bostroem et al. (2023) for a compact hydrogen-rich, ionised medium surrounding the progenitor (see Fig. 6). We consider \u03c10 to be a conservative reference value, as Zimmerman et al. (2024) report a local density of \u03c1 \u223c5 \u00d7 10\u221213 g/cm3. Assuming a proton spectrum with p > 2 increases the flux at lower energies (0.1\u20131 GeV) for any arbitrary ECR normalisation, and hence limits derived for p = 2 should be considered conservative. Those predictions do not consider absorption processes, as they are likely negligible at MeV and GeV energies (see Section 5.2). Our fitted background models in Section 2 can now be modified to include an additional point-like source at the nominal optical position of SN 2023ixf, but whose SED is that computed through hadronic interactions instead of a simple power law. Therefore, we can then calculate the likelihood for each precomputed CR distribution, and obtain a likelihood profile as a function of ECR normalised at \u03c10 for every exposure time (Fig. 7). The larger ECR, the larger the flux, and if it exceeds the sensitivity limit of the LAT, the difference between the newly computed likelihood and its maximum value from our analysis will increase. This allows us to compute 95% confidence upper limits (corresponding to a TS = 2.706 for a one-sided \u03c72 distribution; see e.g. Rolke et al. 2005) assuming a target density of \u03c10. Nevertheless, a realistic density profile is not flat, but will likely follow the wind density of the progenitor (see e.g. Tatischeff 2009). This can be firstly approximated as \u03c1w(r) = \u02d9 MRSG 4\u03c0r2uw . (6) Article number, page 5 of 13 A&A proofs: manuscript no. 
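A sketch of this naima-based computation: a p = 2 proton spectrum with a 1 PeV exponential cutoff is normalised to a chosen total CR energy above 100 MeV and folded through the pion-decay model at the target density ρ0. The amplitude passed to the spectral model is arbitrary (it is overridden by set_Wp), a purely hydrogen target is assumed when converting ρ0 to a number density, and the call signatures should be checked against the installed naima version.

```python
import numpy as np
import astropy.units as u
from astropy.constants import m_p
from naima.models import ExponentialCutoffPowerLaw, PionDecay

# Target density rho_0 = 5.6e-14 g/cm^3 (Bostroem et al. 2023), converted to
# a hydrogen number density (pure-hydrogen assumption for simplicity).
rho0 = 5.6e-14 * u.g / u.cm**3
nh = (rho0 / m_p).to(u.cm**-3)

# DSA-like proton spectrum: p = 2 power law with an exponential cutoff at 1 PeV.
protons = ExponentialCutoffPowerLaw(amplitude=1e34 / u.eV, e_0=1 * u.TeV,
                                    alpha=2.0, e_cutoff=1 * u.PeV)
pion = PionDecay(protons, nh=nh)

# Normalise the population to a chosen total CR energy above 100 MeV,
# then evaluate the pi0-decay SED at the distance of M101.
pion.set_Wp(1e46 * u.erg, Epmin=100 * u.MeV)
E = np.logspace(-1, 5, 60) * u.GeV
sed = pion.sed(E, distance=6.85 * u.Mpc)   # erg cm^-2 s^-1
# Looping this over E_CR between 1e45 and 1e48 erg gives the grid of
# predictions compared against the likelihood profiles in the text.
```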
LAT_CR_SN2023ixf 45.0 45.5 46.0 46.5 47.0 47.5 48.0 log(ECR ( 0 ) 1 [erg]) 0 2 4 6 8 10 2 L [T0, T0 + 3 d] [T0, T0 + 5 d] [T0, T0 + 7 d] [T0, T0 + 14 d] [T0, T0 + 31 d] 45.0 45.5 46.0 46.5 47.0 47.5 48.0 log(ECR ( 0 ) 1 [erg]) 0 2 4 6 8 10 2 L [T0 + 1 d, T0 + 3 d] [T0 + 1 d, T0 + 5 d] [T0 + 1 d, T0 + 7 d] [T0 + 1 d, T0 + 14 d] [T0 + 1 d, T0 + 31 d] Fig. 7. Likelihood profiles assuming a flat \u03c10 density profile for different exposure times \u2206T, starting at either T0 (left) or T0 + 1 d (right). \u2206L is defined as the difference between the likelihood and that corresponding to the ECR value which maximises it unconditionally (black dashed line). In all panels, the grey dashed line represents the 95% limit, and the obtained likelihoods are interpolated employing a cubic spline. 45.5 46.0 46.5 47.0 47.5 log(ECR [erg]) 0 2 4 6 8 10 2 L [T0 + 1 d, T0 + 3 d] [T0 + 1 d, T0 + 5 d] [T0 + 1 d, T0 + 7 d] [T0 + 1 d, T0 + 14 d] [T0 + 1 d, T0 + 31 d] 5 10 15 20 25 30 35 t T0 [d] 1044 1045 1046 1047 1048 ECR [erg] = 0.1 = 0.01 = 0.001 Fig. 8. Limits for a steady-wind profile. (Left): Likelihood profiles for different exposures considering the average density derived from a \u03c1w steadywind profile with m = 1, rescaling Fig. 7, right panel. (Right): Limits on the average cumulated total CR energy \u27e8ECR\u27e9for different exposure times \u2206T. \u27e8ECR\u27e9is computed for different efficiencies \u03b7 and velocity profiles with m = 1 (solid line), m = 0.83 (dashed line, as for SN 1993J), and m = 0.5 (dotted line) using \u02d9 MRSG = 10\u22122 M\u2299/yr and uw = 100 km/s. For visualisation purposes, Fermi-LAT limits are only plotted for m = 1. Colours represent the same exposures as in Fig. 7. The SN shock will travel through this medium, and its radius Rs(t) is typically parametrised during the free-expansion phase using the expansion parameter m as Rs(t) = Vs,0 \" t 1day #m , (7) where Vs,0 is the initial shock velocity. The resulting onedimensional density profiles for different m values are shown in Fig. 6. Using those, we can rescale our likelihood limits, providing more realistic results, by using the average gas density for each exposure. This will lead to more stringent ECR constraints at earlier times due to the significantly larger densities closer to the stellar surface of the progenitor (see Fig. 8, left panel). However, we note that such a steady-wind density profile diverges at T0, and therefore we only employ our limits integrating LAT data after t0 = T0 + 1 d. The energy-conversion efficiency can be finally parametrised through the parameter \u03b7 in the cumulative energy ECR (i.e. as a fraction of the shock\u2019s kinetic energy), which is estimated as ECR = \u03b7 Z t f ti 1 2\u03c1wV3 s 4\u03c0R2 sdt , (8) where Vs = dRs dt = Vs,0 h t 1day im\u22121. We can compare our observations with the average cumulative ECR over the different observing windows discussed in Section 2, leading to an efficiency constraint at \u03b7 \u22721% (Fig. 8, right panel). This efficiency limit is, in itself, a strong statement. Nevertheless, it depends on (1) our assumed isotropic one-dimensional density profile, (2) a lack of \u03b3-ray absorption, and (3) the presence of an average proton population accurately representing the true time evolution of the CRs for certain exposures. The reliability of these aspects is further discussed in Section 5. Article number, page 6 of 13 G. Mart\u00ed-Devesa et al.: Early-time constraints on cosmic-ray acceleration in SN 2023ixf 4.2. 
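The cumulative CR energy of Eq. (8) can be evaluated numerically as sketched below, using the steady-wind density of Eq. (6), the shock radius of Eq. (7), and the shock-velocity expression quoted in the text; the unit conversions, grid resolution, and helper name are our own choices.

```python
import numpy as np

DAY = 86400.0                        # s
MSUN_PER_YR = 1.989e33 / 3.156e7     # g/s

def cumulative_ECR(t_days, eta=0.01, m=1.0, Vs0=1e9, Mdot=1e-2, uw=1e7):
    """Cumulative CR energy (Eq. 8) swept up by the shock from 1 d to t_days.

    Units: Vs0 and uw in cm/s (1e9 cm/s = 1e4 km/s, 1e7 cm/s = 100 km/s),
    Mdot in M_sun/yr. The shock follows Rs = Vs0 * 1 d * (t/1 d)^m and, as
    written in the text, Vs = Vs0 (t/1 d)^(m-1); the wind density is the
    steady-wind profile rho_w = Mdot / (4 pi r^2 uw) evaluated at r = Rs(t).
    """
    t = np.linspace(1.0, t_days, 2000) * DAY            # start at T0 + 1 d
    Rs = Vs0 * DAY * (t / DAY) ** m
    Vs = Vs0 * (t / DAY) ** (m - 1.0)
    rho_w = (Mdot * MSUN_PER_YR) / (4.0 * np.pi * Rs**2 * uw)
    dEdt = 0.5 * rho_w * Vs**3 * 4.0 * np.pi * Rs**2    # shock kinetic power
    return eta * np.trapz(dEdt, t)                      # erg

# e.g. cumulative_ECR(31.0, eta=0.01) traces the eta = 1% curve of Fig. 8.
```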
Galactic novae and their acceleration efficiency In order to explore the implications of our limits, we can compare the acceleration efficiency of SN with that of Galactic novae. The majority of novae detected at \u03b3-rays originate in binary systems without an evolved stellar companion (i.e. classical novae; see e.g. Ackermann et al. 2014; Chomiuk et al. 2021), and thus the surrounding conditions more closely resemble those of Type Ia SNe. In those classical novae, shocks are found to be internal, displaying a strong correlation between optical and \u03b3-ray flares (Aydi et al. 2020). The same behaviour arises in radiative shocks, where radiative cooling dominates the evolution of the ejecta. In such cases, the ratio between the optical and \u03b3-ray luminosities should follow the relation (see e.g. Metzger et al. 2015) L\u03b3 Lopt = \u03b7p \u03b7\u03b3 , (9) where \u03b7p and \u03b7\u03b3 are the particle acceleration and \u03b3-ray production efficiencies, respectively. If the shock is not fully radiative, the correlation instead provides a lower limit on the acceleration efficiency. Despite the more energetic and rapidly expanding shock in SN 2023ixf, the environment in Type II SNe should be relatively similar to that in symbiotic novae, where an adiabatic shock travels through the wind of an RSG companion (V407 Cyg and RS Oph are the only unambiguously \u03b3-ray-detected symbiotic binaries; Abdo et al. 2010; Cheung et al. 2022). Naturally, this analogy has its limitations, because the novae ejecta can be bipolar and the surrounding material inhomogeneous (Munari et al. 2022; Diesing et al. 2023). For our discussion, the case of RS Oph is particularly interesting as a relatively high efficiency (\u223c10%) was required to explain the VHE emission in the favoured hadronic scenarios (see e.g. Acciari et al. 2022; H. E. S. S. Collaboration et al. 2022). We also note, that in this system, both external and internal shocks contribute to the emission (Cheung et al. 2022), with a strong correlation between the optical and \u03b3-ray luminosities of L\u03b3/Lopt \u223c2.5 \u00d7 10\u22123 starting the day after the explosion. This ratio lies within the 10\u22124\u201310\u22122 range of classical novae (Chomiuk et al. 2021). This is in stark contrast to the low efficiency we derive in SN 2023ixf. Furthermore, we find that the luminosity ratio is of the same order (\u22721%; Fig. 9) as our spectral constraints on the efficiency. This provides an additional limit in case of the presence of a radiative shock. Either way, the relative weight of the nonthermal processes and particle acceleration in SN 2023ixf over its thermal electromagnetic output does not considerably exceed that of regular novae. 4.3. Testing the shock breakout in \u03b3-rays In the previous discussion, we centre our attention at t > 1 d for a shock propagating through the surrounding medium. However, our limits during the first day also provide a constraint on CR acceleration prior or close to the shock breakout, possibly characterised by a flash of UV/X-ray photons (e.g. see the case of SN 2008D; Chevalier & Fransson 2008; Mazzali et al. 2008; Soderberg et al. 2008). This will occur at an optical depth of \u03c4 \u223cc/Vs, when a radiation-dominated shock travelling through the progenitor reaches the outer layers of the stellar component (see Waxman & Katz 2017, for a review). 
For a progenitor with an optically thin wind, the radiation-dominated shock precedes 1042 1043 LUBVRI [erg/s] 0 10 20 30 40 50 t T0 [d] 10 3 10 2 L /Lopt RS Oph SN 2023ixf Fig. 9. Luminosity of SN 2023ixf in the optical and GeV bands. (Top): Example of a pseudobolometric luminosity (LUBVRI) light-curve model of SN 2023ixf from Fig. 4 of Hiramatsu et al. (2023). We note that this is a conservative reference, as it is slightly lower than the bolometric luminosity derived by Bostroem et al. (2023), Teja et al. (2023), or Zimmerman et al. (2024), reaching 1043 erg/s. (Bottom): Limits on the luminosity ratio between the optical and \u03b3-ray bands. For comparison, the luminosity ratio from the symbiotic nova RS Oph is shown in grey (Cheung et al. 2022). Colours represent the same exposure times as in Fig. 1. the formation of the collision-less, matter-dominated shock after the breakout. However, for dense, optically thick winds, the matter-dominated shock may form earlier than the shock breakout as photons are seized downstream up to a larger radii. In such a scenario, particle acceleration can occur as the collision-less shock forms deep into the wind if the condition Vs \u22724.3 \u00d7 104 \" RRSG 410 R\u2299 # \" \u02d9 MRSG 10\u22122 M\u2299/yr #\u22121 \" uw 100 km/s # km/s (10) is fulfilled (Giacinti & Bell 2015) (we assume the RSG radius derived by Hosseinzadeh et al. 2023). Therefore, for Vs = 104 km/s, CR acceleration could occur prior to the shock breakout up to several TeV. Although \u03b3-rays produced through \u03c00 decay could be partially reprocessed within the shock into Article number, page 7 of 13 A&A proofs: manuscript no. LAT_CR_SN2023ixf electron\u2013positron pairs, GeV photons will likely not be absorbed (see Section 5.2). Li et al. (2024) suggest that, for SN 2023ixf, the shock breakout might have occurred within a few hours after T0. Our null result in the search for photon clusters in Section 3.2 discards the presence of a putative short, bright flash emission in the \u03b3-ray domain during the early expansion of the shock. In a complementary manner, our limit at L\u03b3(E > 100 MeV) < 4.8 \u00d7 1041 erg/s for the first day of exposure can also be used to constrain the number of CRs accelerated at the time of the shock breakout. 5. Possible biases Although limits on the CR energy fraction at 10% would be consistent with the standard Galactic CR origin paradigm, the new limits derived down to 1% considerably quench the efficiency of SNe, and thus need to be considered with manifest scepticism. These limits were derived with reasonable but simplified assumptions, which may or may not apply. For example, the previously derived CR limit assumes an isotropic density profile with a constant wind before the explosion. Other assumptions affecting the underlying proton distribution, such as the presence or not of non-linear DSA, are likely to increase the photon flux at GeV in order to make SNe substantially contribute to the knee of the CR spectrum. For example, magnetic field amplification, which is required to retain CRs up to Ep \u223c1 PeV at early times, will soften the proton spectrum (for a review see e.g. Schure et al. 2012; Blasi 2013). Furthermore, it should be noted that our spectral predictions were made assuming quasi-steady-state equilibria at different times, which may not accurately represent the time evolution of the relativistic particle population during the shock expansion (see e.g. Sarmah 2023). 
Among the possible biases, we further discuss the two relevant processes that, if wrongly interpreted, might underestimate the efficiency by orders of magnitude: uncertainties on the surrounding densities and \u03b3-ray absorption processes. Additionally, we briefly discuss the timescales of the most relevant CR energy losses. 5.1. The issue of the density profile and its homogeneity Generally, a density profile surrounding the progenitor can be described by a function of the form \u03c1w = \u03c1c r rc !\u2212s , (11) where a radially decreasing density has a characteristic value of \u03c1c at a distance rc. Circumstellar medium (CSM) in the form of a steady wind will lead to s = 2 (see Eq. 6), but larger values should be obtained for variable mass-loss rates. Optical observations consistently show that the progenitor was surrounded by a high-density CSM of r < (0.3 \u2013 1.0) \u00d7 1015 cm (Smith et al. 2023; Teja et al. 2023; Bostroem et al. 2023; Jacobson-Gal\u00e1n et al. 2023; Zimmerman et al. 2024). This roughly corresponds to the distance that the progenitor\u2019s wind could reach in \u223c1 yr, while a m = 1 shock travelling at 104 km/s would need between one and two weeks to cross that high-density environment. At larger distances, the density is traced by the mass-loss rate of the stellar wind during the earlier pre-SN stages (likely closer to the typical value of \u02d9 MRSG = 10\u22126 M\u2299/yr). That is, there would be a transition with a steeper density profile s > 2, which is not characterised by our assumptions in Fig. 8. However, conservatively, our early limits during this first week should be robust. Importantly, these limits are still highly dependent on two observationally constrained quantities: \u02d9 MRSG and uw. In Section 4, we assumed \u02d9 MRSG = 10\u22122 M\u2299/yr and uw = 100 km/s. For ease of discussion, we define here the ratio \u03c9 = \u02d9 MRSG/uw, and hence \u03c9 = 6.3 \u00d7 1016 g/cm. Our assumption for \u03c9 is consistent with the reported results from spectroscopic observations of ionisation lines and multi-band photometric light curves (see Appendix B for a summary of multi-wavelength \u03c9 constraints). To explore the impact of those aspects, we reproduce the limits on Fig. 8 but now testing the best-fit density model from Jacobson-Gal\u00e1n et al. (2023), which characterises the compact CSM (see Fig. 6). As shown in Fig. 10, assuming the r1w6b model from Jacobson-Gal\u00e1n et al. (2023) would imply an even stricter limit on the particle acceleration efficiency. Instead, we could consider a steady-wind profile with the lower limit on \u03c9 derived from SMA radio/mm observations (230 GHz, between T0 + 2 d and T0 + 19 d) by Berger et al. (2023). Employing their most conservative limit when considering free-free absorption (Weiler et al. 1986; Chevalier 1998) could relax the tension on the efficiency constraint, leading to a limit at \u03b7 \u223c16% (see Fig. 4). However, this would be in tension with the results from optical spectroscopic and photometric observations, as larger \u03c9 ratios are preferred. Nevertheless, the exact value of the mass-loss rate is uncertain. A lower \u03c9 ratio has been reported from the modelling of the H\u03b1 luminosity (Zhang et al. 2023) or in absorption features from X-ray observations (Grefenstette et al. 2023; Chandra et al. 2024; Panjkov et al. 2023). 
To explain the apparent tension with the ratios derived from different methods, it has been suggested that the CSM could be inhomogeneous (Berger et al. 2023). This scenario could be explained by (1) an asymmetric distribution of the surrounding material, such as a dense torus caused by a pre-SN binary interaction, as proposed by Smith et al. (2023) (with asphericity in the ejecta also supported by the polarisation measurements from Vasylyev et al. 2023), or (2) a pre-SN effervescent zone model with dense clumps embedded in a lighter RSG stellar wind (e.g. with \u03c1clump \u223c3000\u03c1w), as proposed by Soker (2023). In both scenarios, our \u03b3-ray constraints on the CR population could also be relaxed with the presence of an inhomogenoeus medium. That is, if the target gas \u2014with a density as derived from optical observations\u2014 were to only occupy a volume filling factor fV \u223c0.1, an efficiency of 10% in the CR acceleration would, a priori, still be compatible with the \u03b3-ray constraints. 5.2. The relevance of \u03b3-ray absorption Although it appears that the uncertainties in the material distribution might explain our results, we can also consider the possibility that a significant fraction of the \u03b3-ray flux is locally absorbed. Pair-production of electron\u2013positron pairs can attenuate the high-energy photon flux at the source, while injecting a relativistic leptonic population into the shock. The \u03b3 + \u03b3 \u2192e\u2212+ e+ channel is likely to be irrelevant in our case, as the densest photon field produced by the SN will peak at UV and optical wavelengths; thus, the energy threshold for the interaction will be \u2273100 GeV (see e.g. Dermer & Menon 2009). Contrarily, Bethe-Heitler (BH) pair-production (e.g. p + \u03b3 \u2192p + e\u2212+ e+) could have a larger impact for a hydrogen-poor, high-metallicity CSM (Fang et al. 2020). A priori, both primary CRs and secondary \u03b3-rays could be affected by this process. We note that relativistic protons would only interact with the SN optical photon field for Ep > 1 PeV (Cristofari Article number, page 8 of 13 G. Mart\u00ed-Devesa et al.: Early-time constraints on cosmic-ray acceleration in SN 2023ixf 45.0 45.5 46.0 46.5 47.0 47.5 log(ECR [erg]) 0 2 4 6 8 10 2 L [T0 + 1 d, T0 + 3 d] [T0 + 1 d, T0 + 5 d] [T0 + 1 d, T0 + 7 d] [T0 + 1 d, T0 + 14 d] [T0 + 1 d, T0 + 31 d] 5 10 15 20 25 30 35 t T0 [d] 1044 1045 1046 1047 1048 ECR [erg] = 0.1 = 0.01 = 0.001 = 0.0001 Fig. 10. Limits using the best-fit density profile from Jacobson-Gal\u00e1n et al. (2023) shown in Fig. 6 (r1w6b model). (Left): Likelihood profile obtained by rescaling those displayed in Fig. 7, right panel. (Right): Limits on the cumulated total CR energy derived as in Fig. 8 but employing the aforementioned density profile. et al. 2020, for a shock temperature T \u223c104 \u2013 105 K), and therefore our proton spectrum should remain unaltered, while secondary GeV \u03b3-rays might still interact even with thermal plasma nuclei. However, the medium surrounding a SN Type II is hydrogen rich, and consequently BH should also have a negligible impact on the observed SED below 100 GeV (\u03c4\u03b3 \u22721; threshold estimated at the time of the optical peak following Fang et al. 2020, and references therein). 
Nonetheless, we can attempt to estimate the consequences of a putative substantial pair production, given that secondary relativistic pairs also radiate through the inverse Compton (IC), synchrotron, and non-thermal bremsstrahlung processes (see e.g. Blumenthal & Gould 1970; Baring et al. 1999). If 90% of the γ-ray flux were suppressed, the leptonic emission from the pairs should not exceed the X-ray flux associated with thermal bremsstrahlung (∼2.5 × 10^-12 erg/cm^2/s between 0.3 and 79 keV; Grefenstette et al. 2023). As a first approximation, we assume that all the energy of the γ-ray photons is converted into pairs. We note that under this assumption their energy content will exceed that of the secondary electrons and positrons from meson decays (e.g. π±) in hadronic showers (which would have a ratio of N_e±/N_γ = 0.53 for energies larger than the rest mass of the π0; Kelner et al. 2006). Under these assumptions (and also considering a uniform magnetic field with B < 10 G), our estimated secondary X-ray fluxes from the IC, synchrotron, and bremsstrahlung processes lie well below the measured thermal X-ray emission at early times. Therefore, although our theoretical estimates rule out absorption as an important actor in SN 2023ixf, the presence of secondary emission remains formally unconstrained by current X-ray and γ-ray observations.

5.3. The impact of proton energy losses and CR escape

Here we discuss whether our assumption of neglecting the CR evolution (made in order to provide quasi-model-independent limits) is valid. To this end, we computed: (1) the acceleration time t_acc, (2) the escape time t_esc, (3) the timescale of the proton–proton interactions t_pp, and (4) the time below which adiabatic losses dominate over the CR energy gain, t_ad,th. In our derivation, we adopt the assumptions considered in Vink (2020) for a freely expanding shock (m = 1). For the upstream reference magnetic field we considered a value of B = 10 G, which is rather conservative at early times, as its strength will likely decay in proportion to t^-1 (see e.g. Tatischeff 2009).

First of all, we estimated the time required for particles to be accelerated up to E_p ∼ 1 PeV via DSA. Assuming that the effective mean free path of the protons is as small as their gyroradius (i.e. Bohm diffusion) and that the compression ratio at the shock is χ = 4 with a magnetic field perpendicular to the shock normal,
$t_{\rm acc} \sim 0.9\, \delta \left( \frac{E_p}{1\,{\rm PeV}} \right) \left( \frac{V_s}{10^4\,{\rm km/s}} \right)^{-2} \left( \frac{B}{10\,{\rm G}} \right)^{-1} \,{\rm d}$ ,   (12)
where δ parametrises the energy dependence of the CR diffusion (realistically, δ ≲ 1). As previously mentioned, magnetic field amplification is necessary in order to accelerate CRs up to E_p ∼ 1 PeV at early times (Schure et al. 2012). This will also impact the leakage of CRs escaping the shock, for which Bohm diffusion will cause particles with
$E_p \gtrsim 2.6 \left( \frac{B}{10\,{\rm G}} \right) \left( \frac{V_s}{10^4\,{\rm km/s}} \right)^{2} \left( \frac{t}{1\,{\rm d}} \right) \,{\rm PeV}$   (13)
to escape the shock region. Here we assume that the diffusion length is a small fraction of the shock radius (l_diff ≲ 0.1 R_s). Therefore, after one day, the SN shock should be able to retain CRs up to the knee of the CR spectrum for such high magnetic fields. However, for CR acceleration to be effective, the acceleration must occur on timescales similar to or shorter than those of the proton energy losses; that is, t_acc ≲ t_pp and t_acc ≲ t_ad,th.
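The scalings in Eqs. (12) and (13) can be evaluated directly; the minimal sketch below uses the normalisations quoted in the text, while the function names and default arguments are ours, for illustration only.

```python
# Numerical sketch of Eqs. (12) and (13), with the reference values used in the text
def t_acc_days(E_p_PeV=1.0, V_s_1e4kms=1.0, B_10G=1.0, delta=1.0):
    """Acceleration time (days) for Bohm-like diffusion, Eq. (12)."""
    return 0.9 * delta * E_p_PeV * V_s_1e4kms**-2 * B_10G**-1

def E_esc_PeV(B_10G=1.0, V_s_1e4kms=1.0, t_days=1.0):
    """Energy (PeV) above which CRs escape the shock region, Eq. (13)."""
    return 2.6 * B_10G * V_s_1e4kms**2 * t_days

print(t_acc_days())            # ~0.9 d to reach 1 PeV for V_s = 1e4 km/s, B = 10 G
print(E_esc_PeV(t_days=1.0))   # ~2.6 PeV: after 1 d the shock retains CRs up to the knee
```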
For proton–proton interactions, we simply have
$t_{pp} = \frac{\langle m \rangle}{\sigma_{pp}\, \rho_w(t)\, c}$ ,   (14)
where ⟨m⟩ is the average mass of the particles in the CSM. For a hydrogen-rich environment with ρ_w = ρ_0, this implies t_pp ∼ 0.4 d, which is similar to t_acc. For this energy loss, only a small fraction of the CR energy is actually transferred to γ-rays (∼5%), and it is therefore also similar to the DSA energy gain for a single scattering cycle (ΔE_p/E_p = (4/3) V_s/c ∼ 4.4%). Furthermore, we note that although σ_pp is quasi-energy-independent at low energies, it increases by a factor of 2 at 1 PeV (see e.g. Kafexhiu et al. 2014). Consequently, depending on the density, this energy loss could quench acceleration up to 1 PeV, but it should not substantially affect protons with energies below a few TeV (see Fig. 11), which still radiate in the GeV band. This in turn will only mildly impact the overall E_CR since, for example, an induced cut-off at 1 or 10 TeV reduces the total proton energy content by a factor of ∼2 (see Fig. 5). Such a cut-off would be physically determined by the balance between the DSA energy gain (Ė_DSA) and the interaction losses (Ė_pp), while the overall normalisation of the proton population is instead set by the continuous CR injection at the shock. This, in turn, depends on the CSM geometry, which is the main source of uncertainty.

Fig. 11. Impact of proton–proton losses on the maximum energy. (Left): Ratio Ė_pp/Ė_DSA as a function of the proton energy for ρ_0. (Right): Maximum CR energy as limited by proton–proton interactions for a flat density profile with density ρ_0 (black) or a steady-wind profile (Eq. 6; red), for different magnetic fields. E_max is derived by imposing Ė_DSA = Ė_pp at the Bohm limit, and employing the parametrisation from Krakau & Schlickeiser (2015) for the proton–proton losses.

Finally, the free expansion of the shock will also lead to adiabatic cooling of the CR population. These energy losses will dominate over the DSA energy gain for any t lower than
$t_{\rm ad,th} \sim 0.9\, \delta^{2} \left( \frac{E_p}{1\,{\rm PeV}} \right) \left( \frac{V_s}{10^4\,{\rm km/s}} \right)^{-2} \left( \frac{B}{10\,{\rm G}} \right)^{-1} \,{\rm d}$ ,   (15)
where we assume an adiabatic index of γ = 4/3 for the relativistic CRs. Consequently, adiabatic losses do not dominate the time evolution of the protons after the first day following the SN explosion either. In view of these estimates, and considering that we only integrate after t = 1 d and neglect any possibly larger prior emissivity, we do not expect a time-dependent description of the CR population to change the derivation of our limits by an order of magnitude or more, provided the underlying ideal shock conditions for effective particle acceleration hold.
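A rough sketch of the proton–proton loss timescale of Eq. (14) and of the DSA gain per cycle quoted above is given below. The cross-section, proton mass, and the illustrative density value are our assumptions for a hydrogen-rich medium, not values taken verbatim from the analysis.

```python
import math

M_P      = 1.67e-24   # g, proton mass (~<m> for a hydrogen-rich CSM)
SIGMA_PP = 3.0e-26    # cm^2, ~30 mb inelastic pp cross-section at low energies
C        = 3.0e10     # cm/s
DAY      = 86400.0    # s

def t_pp_days(rho_g_cm3: float) -> float:
    """Proton-proton interaction timescale, t_pp = <m> / (sigma_pp * rho * c), in days."""
    return M_P / (SIGMA_PP * rho_g_cm3 * C) / DAY

# An illustrative dense-CSM value of order 5e-14 g/cm^3 yields t_pp of a fraction
# of a day, comparable to t_acc at 1 PeV.
print(f"t_pp ~ {t_pp_days(5e-14):.2f} d")

# Fractional DSA energy gain per shock-crossing cycle, Delta E / E = (4/3) V_s / c
V_s = 1e4 * 1e5  # cm/s
print(f"Delta E / E per cycle ~ {4/3 * V_s / C:.1%}")  # ~4.4%
```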
6. Summary

In the present study, we searched for γ-ray emission above 100 MeV from one of the closest CCSNe discovered since the start of the Fermi mission, the Type II SN 2023ixf. We used a standard likelihood-based analysis to look for γ-ray emission on different timescales, starting from T0 and extending up to a month after the event. In a complementary manner, we employed a photon-counting algorithm to investigate photon clustering at the source position.

We do not detect a significant γ-ray signal from SN 2023ixf. Assuming (1) a simple, isotropic density profile for the CSM, derived from a progenitor mass-loss rate consistent with optical observations, and (2) standard proton DSA up to 1 PeV, our observations imply a CR acceleration efficiency of 1% or less. This result is in stark tension with the standard SN paradigm for the origin of Galactic CRs, which requires an efficiency of roughly 10% for this conversion process. As a first approximation, this tension can seemingly be alleviated by assuming an inhomogeneous environment surrounding the progenitor. A more sophisticated model is therefore required for both the shock and the CSM, one that is consistent with all multi-wavelength observations. In essence, γ-ray observations are, for the first time, offering the opportunity to effectively constrain CR acceleration during the very early expansion of the shock produced by a CCSN event. To this end, we produced a comprehensive set of high-energy limits for the modelling of SN 2023ixf. Present and future Fermi-LAT observations therefore provide a unique opportunity to establish whether or not SNe are indeed able to accelerate the bulk of CRs at early times up to the required energies.

Acknowledgements. The Fermi LAT Collaboration acknowledges generous ongoing support from a number of agencies and institutes that have supported both the development and the operation of the LAT as well as scientific data analysis. These include the National Aeronautics and Space Administration and the Department of Energy in the United States, the Commissariat à l'Energie Atomique and the Centre National de la Recherche Scientifique / Institut National de Physique Nucléaire et de Physique des Particules in France, the Agenzia Spaziale Italiana and the Istituto Nazionale di Fisica Nucleare in Italy, the Ministry of Education, Culture, Sports, Science and Technology (MEXT), High Energy Accelerator Research Organization (KEK) and Japan Aerospace Exploration Agency (JAXA) in Japan, and the K. A. Wallenberg Foundation, the Swedish Research Council and the Swedish National Space Board in Sweden. Additional support for science analysis during the operations phase from the following agencies is also gratefully acknowledged: the Istituto Nazionale di Astrofisica in Italy and the Centre National d'Etudes Spatiales in France. This work was performed in part under DOE Contract DE-AC02-76SF00515. G.P. acknowledges support by ICSC – Centro Nazionale di Ricerca in High Performance Computing, Big Data and Quantum Computing, funded by European Union – NextGenerationEU. Research at the Naval Research Laboratory is supported by NASA DPR S-15633-Y. This work made use of Astropy, a community-developed core Python package and an ecosystem of tools and resources for astronomy (Astropy Collaboration et al. 2013, 2018, 2022), as well as Numpy (Harris et al. 2020) and Matplotlib (Hunter 2007). We are also thankful to the anonymous referee and our colleagues from the Fermi LAT Collaboration, Philippe Bruel, Melissa Pesce-Rollins, Anita Reimer, Olaf Reimer, and David J. Thompson, for their comments and suggestions on this work."
+ }
+ ]
+}
\ No newline at end of file