diff --git "a/intro_28K/test_introduction_long_2405.03894v1.json" "b/intro_28K/test_introduction_long_2405.03894v1.json" new file mode 100644--- /dev/null +++ "b/intro_28K/test_introduction_long_2405.03894v1.json" @@ -0,0 +1,105 @@ +{ + "url": "http://arxiv.org/abs/2405.03894v1", + "title": "MVDiff: Scalable and Flexible Multi-View Diffusion for 3D Object Reconstruction from Single-View", + "abstract": "Generating consistent multiple views for 3D reconstruction tasks is still a\nchallenge to existing image-to-3D diffusion models. Generally, incorporating 3D\nrepresentations into diffusion model decrease the model's speed as well as\ngeneralizability and quality. This paper proposes a general framework to\ngenerate consistent multi-view images from single image or leveraging scene\nrepresentation transformer and view-conditioned diffusion model. In the model,\nwe introduce epipolar geometry constraints and multi-view attention to enforce\n3D consistency. From as few as one image input, our model is able to generate\n3D meshes surpassing baselines methods in evaluation metrics, including PSNR,\nSSIM and LPIPS.", + "authors": "Emmanuelle Bourigault, Pauline Bourigault", + "published": "2024-05-06", + "updated": "2024-05-06", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.LG" + ], + "label": "Original Paper", + "paper_cat": "Diffusion AND Model", + "gt": "Consistent and high-quality novel view synthesis of real- world objects from a single input image is a remaining chal- lenge in computer vision. There is a myriad of applications in virtual reality, augmented reality, robotic navigation, con- tent creation, and filmmaking. Recent advances in the field of deep learning such as diffusion-based models [2, 13, 22, 36, 37] significantly improved mesh generation by denois- ing process from Gaussian noise. Text-to-image generation has shown great progress with the development of efficient approaches as generative adversarial networks [3, 11, 16], autoregressive transformers [9, 28, 39], and more recently, diffusion models [12, 14, 27, 32]. DALL-E 2 [27] and Im- agen [32] are such models capable of generating of photo- realistic images with large-scale diffusion models. Latent diffusion models [31] apply the diffusion process in the la- tent space, enabling for faster image synthesis. Although, image-to-3D generation has shown impres- sive results, there is still room for improvement in terms of consistency, rendering and efficiency. Generating 3D rep- resentations from single view is a difficult task. It requires extensive knowledge of the 3D world. Although diffusion models have achieved impressive performance, they require expensive per-scene optimization. Zero123 [18] proposes a diffusion model conditioned on view features and camera parameters trained on persepec- tive images [6]. However, the main drawback is the lack of multiview consistency in the generation process imped- ing high-quality 3D shape reconstruction with good cam- era control. SyncDreamer [19] proposes a 3D feature vol- ume into the Zero123 [18] backbone to improve the mul- tiview consistency. However, the volume conditioning sig- nificantly reduces the speed of generation and it overfits to some viewpoints, with 3D shapes displaying distortions. In this paper, we present MVDiff, a multiview diffusion model using epipolar geometry and transformers to gener- ate consistent target views. 
The main idea is to incorpo- rate epipolar geometry constraints in the model via self- attention and multi-view attention in the UNet to learn the geometry correspondence. We first need to define a scene transformation transformer (SRT) to learn an implicit 3D representation given a set of input views. Then, given an input view and its relative camera pose, we use a view- conditioned diffusion model to estimate the conditional dis- tribution of the target view. We show that this framework presents dual improve- ments compared to existing baselines in improving the 3D reconstruction from generated multi-view images and in terms of generalization capability. In summary, the paper presents a multi-view generation framework from single image that is transferable to various datasets requiring little amount of changes. We show high performance on the GSO dataset for 3D mesh generation. The model is able to extrapolate one view image of a 3D arXiv:2405.03894v1 [cs.CV] 6 May 2024 object to 360-view with high fidelity. Despite being trained on one dataset of natural objects, it can create diverse and realistic meshes. We summarise our contributions as fol- lows: \u2022 Implicit 3D representation learning with geometrical guidance \u2022 Multi-view self-attention to reinforce view consistency \u2022 Scalable and flexible framework", + "main_content": "2.1. Diffusion for 3D Generation Recently, the field of 3D generation has demonstrated rapid progress with the use of diffusion models. Several studies showed remarkable performance by training models from scratch on large datasets to generate point clouds [21, 24], meshes [10, 20] or neural radiance fields (NeRFs) at inference. Nevertheless, these models lack generalizability as they are trained on specific categories of natural objects. DreamFusion [26] explored leveraging 2D priors to guide 3D generation. Inspired by DreamFusion, several studies adopted a similar pipeline using distillation of a pretrained 2D text-to-image generation model for generating 3D shapes [1, 4, 5, 23, 43]. The per-scene optimisation process typically lacks in efficiency with times ranging from minutes to hours to generate single scenes. Recently, 2D diffusion models for multi-view synthesis from single view have raised interest for their fast 3D shape generation with appealing visuals [17, 18, 34]. However, they generally do not consider consistency of multi-view in the network design. Zero123 proposes relative viewpoint as conditioning in 2D diffusion models, in order to generate novel views from a single image [18]. However, this work does not consider other views in the learning process and this causes inconsistencies for complex shapes. One2-3-45 [17] decodes signed distance functions (SDF) [25] for 3D shape generation given multi-view images from Zero123 [18], but the 3D reconstruction is not smooth and artifacts are present. More recently, SyncDreamer [19] suggests a 3D global feature volume, in order to tackle inconsistencies in multiview generation. 3D volumes are used with depth-wise attention for maintaining multi-view consistency. The heavy 3D global modeling tend to reduce the speed of the generation and quality of the generated meshes. MVDream [35] on the other hand incorporates 3D self-attention with improved generalisability to unseen datasets. 2.2. Sparse-View Reconstruction Sparse-view image reconstruction [15, 45] is a challenging task where only a limited number of images, generally less than 10, are given. 
Traditional 3D reconstruction methods start by estimating camera poses, then as a second step perform dense reconstruction with multi-view stereo [38, 46] or NeRF [40]. Estimating camera poses in the context of sparse-view reconstruction is a challenging task as there is little or no overlap between views. The work in [45] aimed to address this challenge by optimising camera poses and 3D shapes simultaneously. In the same line of research, PF-LRM [42] suggests a pose-free approach to tackle the uncertainty in camera poses. In our work, we learn the relative camera poses of the 3D representation implicitly via a transformer encoder-decoder network and a view-conditioned diffusion model capable of generating consistent multi-view images directly. We then employ a reconstruction system, NeuS [41], to recover a mesh. 3. Methodology 3.1. Multi-view Conditional Diffusion Model The rationale behind multi-view conditioning in diffusion models is to infer the 3D shape of an object precisely even though some regions of the object are unobserved. Direct 3D predictions for sequential targets as in Zero123 [18] might lead to implausible novel views. To control the uncertainty in novel view synthesis, we choose to enforce multi-view consistency during training. Given an input image or sparse-view input images of a 3D object, denoted as xI, with known camera parameters \u03c0I, and target camera parameters \u03c0T, our aim is to synthesize novel views that recover the geometry of the object. Our framework can be broken down into two parts: (i) a scene representation transformer (SRT) [33] that learns the latent 3D representation given a single or few input views, and (ii) a view-conditioned diffusion model to generate novel views. 3.2. Novel View Synthesis via Epipolar Geometry To perform novel view synthesis, we employ a scene representation transformer (SRT) [33]. In the work of [33], a transformer encoder-decoder architecture learns an implicit 3D latent representation given a set of images with camera poses (xI, \u03c0I). First, a CNN extracts features from xI and feeds them as tokens to the transformer encoder fE. The transformer encoder then outputs a set-latent scene representation z via self-attention. For novel view rendering, the decoder transformer of SRT queries the pixel color via cross-attention between the ray r associated with that pixel and the set-latent scene representation z. The aim is to minimize the pixel-level reconstruction loss in Eq. (1), \\mathcal{L}_{\\mathrm{recon}} = \\sum_{\\mathbf{r} \\in \\mathcal{R}} \\left\\| C(\\mathbf{r}) - \\hat{C}(\\mathbf{r}) \\right\\|_2^2, (1) where C(r) is the ground truth color of the ray and R is the set of rays sampled from target views. Figure 1. Pipeline of MVDiff. From a single input or few input images, the transformer encoder translates the image(s) into latent scene representations, implicitly capturing 3D information. The intermediate outputs from the scene representation transformer are used as input by the view-conditioned latent diffusion UNet, generating multi-view consistent images from varying viewpoints. We aim to leverage cross-interaction between images through relative camera poses using epipolar geometrical constraints.
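Before turning to the epipolar weighting, the following is a minimal PyTorch-style sketch of the pixel-level reconstruction loss in Eq. (1); the tensor shapes and the assumption that the SRT decoder has already rendered per-ray colors are ours, not the authors'.

```python
import torch

def srt_reconstruction_loss(pred_colors: torch.Tensor,
                            gt_colors: torch.Tensor) -> torch.Tensor:
    """Pixel-level reconstruction loss of Eq. (1).

    pred_colors: (num_rays, 3) colors C_hat(r) rendered by the SRT decoder.
    gt_colors:   (num_rays, 3) ground-truth colors C(r) of the sampled rays.
    """
    # Squared L2 error per ray, summed over the set of sampled rays R.
    return ((pred_colors - gt_colors) ** 2).sum(dim=-1).sum()
```

In practice, the rays in R would be re-sampled from the target views at each training step.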
For each pixel in a given view i, we compute the epipolar line and the epipolar distance for all pixels in view j to build a weighted affinity matrix A\u2032_{i,j} = A_{i,j} + W_{i,j}, where W_{i,j} is the weight map obtained from the inverse epipolar distance. View-Conditioned Latent Diffusion. The outputs from SRT do not recover fine details with a simple pixel-level reconstruction loss. We employ a view-conditioned latent diffusion model (LDM) from [29] to estimate the conditional distribution of the target view given the source view and the relative camera pose: p(xT | \u03c0T, xI, \u03c0I). First, the SRT predicts a low-resolution 32 \u00d7 32 latent image \u02dcxT based on the target view \u03c0T for computational efficiency. The latent image from SRT is concatenated with the noisy image y and fed into the latent diffusion UNet \u03f5\u03b8. In addition, we condition \u03f5\u03b8 on the latent scene representation z via cross-attention layers (see Fig. 1). The predicted noise \u02c6\u03f5t can be denoted as \\hat{\\boldsymbol{\\epsilon}}_t = \\boldsymbol{\\epsilon}_\\theta(\\boldsymbol{y}, \\tilde{\\boldsymbol{x}}_{I}, \\boldsymbol{z}, t), (2) where t is the timestep. We optimize a simplified variational lower bound, that is \\mathcal{L}_{\\mathrm{VLDM}} = \\mathbb{E}\\left[\\left\\| \\boldsymbol{\\epsilon}_t - \\boldsymbol{\\epsilon}_\\theta(\\boldsymbol{y}, \\tilde{\\boldsymbol{x}}_{T}, \\boldsymbol{z}, t) \\right\\|^2\\right]. (3) Multi-View Attention. As previously stated, in Zero123 [18], multiple images are generated in sequence from a given input view based on camera parameters. This approach can introduce inconsistencies between generated views. To address this issue, we modify the UNet so that it can be fed multi-view images. This way, we can predict multiple novel views simultaneously. We employ self-attention blocks to ensure consistency across different viewpoints. 4. Experiments This section presents the novel view synthesis experiments in Sec. 4.1, and the 3D generation experiments in Sec. 4.2. We present ablation experiments in Sec. 4.3 and ethical considerations in Sec. 4.4. Training Data. For training our model for novel view synthesis, we use 800k 3D object models from Objaverse [6]. For a fair comparison with other 3D diffusion baselines, we use the same training dataset. Input condition views are chosen in a similar way to Zero123 [18]. An azimuth angle is randomly chosen from one of the eight discrete angles of the output cameras. The elevation angle is randomly selected in the range [\u221210\u25e6, 45\u25e6]. For data quality purposes, we discard empty rendered images, which represent about one per cent of the training data. 3D objects are centered and we apply uniform scaling in the range [-1, 1] so that dimensions match. Input images to our pipeline are 256\u00d7256 RGB images. Test Data. We use the Google Scanned Objects (GSO) dataset [8] for testing, and use the same 30 objects as SyncDreamer [19]. There are 16 images per 3D object, with a fixed elevation of 30\u25e6 and azimuths spaced every 22.5\u25e6. Implementation Details. Our model is trained using the AdamW optimiser [24] with a learning rate of 10\u22124 and weight decay of 0.01. We reduce the learning rate to 10\u22125 for a total of 100k training steps. For our training batches, we use 3 input views and 3 target views randomly sampled with replacement from 12 views for each object, with a batch size of 356.
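As a rough illustration of how the simplified objective in Eq. (3) and the training settings above fit together, here is a hedged PyTorch-style sketch of one optimisation step. The `srt` and `unet` interfaces (`encode`, `render_latent`, `add_noise`) are hypothetical stand-ins for the modules described in Sec. 3, not the authors' actual API.

```python
import torch
import torch.nn.functional as F

def mvdiff_training_step(srt, unet, optimizer, x_input, cams_input, cam_target,
                         z_target, num_timesteps=1000):
    """One optimisation step for the simplified objective in Eq. (3).

    `srt` and `unet` are hypothetical stand-ins for the scene representation
    transformer and the view-conditioned latent-diffusion UNet; `z_target`
    is the clean latent of the target view.
    """
    # Set-latent scene representation and low-resolution latent for the target view.
    z_scene = srt.encode(x_input, cams_input)
    x_tilde = srt.render_latent(z_scene, cam_target)   # 32x32 latent image

    # Sample a timestep and Gaussian noise, then form the noisy latent y.
    t = torch.randint(0, num_timesteps, (z_target.shape[0],), device=z_target.device)
    eps = torch.randn_like(z_target)
    y = unet.add_noise(z_target, eps, t)               # forward diffusion at step t

    # Predict the noise conditioned on the SRT outputs (Eq. 2) and regress it (Eq. 3).
    eps_hat = unet(torch.cat([y, x_tilde], dim=1), z_scene, t)
    loss = F.mse_loss(eps_hat, eps)

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

The optimiser could be instantiated as, e.g., torch.optim.AdamW(list(srt.parameters()) + list(unet.parameters()), lr=1e-4, weight_decay=0.01), matching the implementation details above.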
We train our model for 6 days on 4 A6000 (48GB) GPUs. Evaluation Metrics. For novel view synthesis, we report the PSNR, SSIM [44], and LPIPS [47]. For 3D reconstruction from single-view or few views, we use the Chamfer Distances (CD) and 3D IoU between the ground-truth and reconstructed volumes. 4.1. Novel View Synthesis We show in Tab. 1 the performance of MVDiff compared to baselines for novel view synthesis on an unseen dataset [8]. Qualitative results are shown in Fig. 2. Our model surpasses baseline Zero-123XL by a margin and benefits from additional views. Given the probabilistic nature of the model, it is able to generate diverse and realistic shapes given a single view (see Fig. 3). Training Sample # Ref. Views GSO NeRF Synthetic PSNR\u2191SSIM\u2191LPIPS\u2193Runtime\u2193PSNR\u2191SSIM\u2191LPIPS\u2193Runtime\u2193 Zero123 800K 1 18.51 0.856 0.127 7s 12.13 0.601 0.421 7s Zero123-XL 10M 1 18.93 0.856 0.124 8s 12.61 0.620 0.381 8s MVDiff 800k 1 20.24 0.884 0.095 9s 12.66 0.638 0.342 9s MVDiff 800k 2 22.91 0.908 0.064 9s 13.42 0.685 0.321 10s MVDiff 800k 3 24.09 0.918 0.052 10s 13.58 0.741 0.301 11s MVDiff 800k 5 25.09 0.927 0.043 11s 14.55 0.833 0.288 12s MVDiff 800k 10 25.90 0.935 0.036 12s 14.51 0.657 0.215 13s Table 1. Novel view synthesis performance on GSO and NeRF Synthetic datasets. MVDiff outperforms Zero-123XL with significantly less training data. Additionally, MVDiff performance exhibits further improvement with the inclusion of more reference views. 4.2. 3D Generation We showed in Sec. 4.1 that our model can generate multiple consistent novel views. In this section, we perform single and few-images 3D generation on the GSO dataset. We generate 16 views with azimuths uniformly distributed in the range 0\u25e6to 360\u25e6. For a fixed elevation angle of 30\u25e6, SyncDreamer may fail to recover the shape of 3D objects at the top and bottom since the camera angle does not cover those regions. Therefore, we also use different elevation angles from \u221210\u25e6to 40\u25e6. Then, we adopt NeuS [40] for 3D reconstruction. The foreground masks of the generated images are initially predicted using CarveKit. It takes around 3 minutes to reconstruct a textured mesh. We compare our 3D recontructions with SoTA 3D generation models, including One-2-3-45 [17] for decoding an SDF using multiple views predicted from Zero123, and SyncDreamer [19] for fitting an SDF using NeuS [40] from 16 consistent fixed generated views. Given two or more reference views, MVDiff outperforms all other baselines (see Tab. 2). MVDiff generates meshes that are visually consistent and resembles the ground-truth (see Fig. 4). # Input Views Chamfer Dist. \u2193 Volume IoU \u2191 Point-E 1 0.0561 0.2034 Shape-E 1 0.0681 0.2467 One2345 1 0.0759 0.2969 LGM 1 0.0524 0.3851 SyncDreamer 1 0.0493 0.4581 MVDiff 1 0.0411 0.4357 MVDiff 2 0.0341 0.5562 MVDiff 3 0.0264 0.5894 MVDiff 5 0.0252 0.6635 MVDiff 10 0.0254 0.6721 Table 2. 3D reconstruction performance on GSO dataset. MVDiff outperforms other image-to-3D baselines in generating high-quality 3D objects, with improved performance for multiple input views. PSNR\u2191 SSIM\u2191 LPIPS\u2193 MVDiff 20.24 0.884 0.095 w/o epipolar att. 19.14 0.864 0.118 w/o multi-view att. 19.92 0.871 0.113 Table 3. Effect of Self-Attention Mechanisms. We report PSNR, SSIM [44], and LPIPS [47] for novel view synthesis from single view on GSO dataset. Results show that epipolar attention and multi-view attention lead to superior performance. 4.3. Ablation Study Multi-View Consistency. 
The generated images may not always be plausible, so we generate multiple instances with different seeds and select a desirable instance for 3D reconstruction based on the highest overall PSNR, SSIM and LPIPS over the generated views. Experiments show that we need 5 generations to obtain an optimal reconstruction. Effect of Epipolar and Multi-View Attention. We evaluate the benefits of epipolar attention and multi-view attention on novel view synthesis by performing ablation experiments on those components. In particular, we observe a significant drop in performance metrics when removing epipolar attention, suggesting that the model is effectively able to implicitly learn 3D object geometry through geometrical guidance (see Tab. 3). Weight Initialisation. An alternative to initialising weights trained from Zero123 on view-dependent objects [7] is to use weights from Stable Diffusion [30]. Initialising our model's weights from Stable Diffusion v2 [30] results in a drop in performance of -2.58 PSNR compared to Zero123 [18] weight initialisation. This shows that initialising from Stable Diffusion v2 leads to poorer performance on the novel view task and worse generalisability. Figure 2. Zero-Shot Novel View Synthesis on GSO. MVDiff outperforms Zero123-XL for single view generation with greater camera control and generation quality. As more views are added, MVDiff resembles the ground-truth with fine details being captured such as the elephant tail and turtle shell design. Figure 3. Diversity of Novel View Diffusion with MVDiff on NeRF-Synthetic Dataset (Input \u2190 Generated \u2192 GT). We show nearby views (top and bottom row) displaying good consistency, while more distant views (middle) are more diverse but still realistic. 4.4. Risks and Ethical Considerations There are several promising applications of synthetic data, notably in medicine. Synthetic data could bring significant improvement in surgery planning and tailored patient diagnosis by leveraging 3D information and its quantitative parameters. Nevertheless, there are ethical considerations associated with the use of synthetic data in medicine. We should ensure the synthetic data is anonymised such that no particular features of the synthetic meshes could link back to a specific patient. In that light, there are transformations that can be applied to the meshes. We should also make sure that the synthetic data is not used in a way that could cause harm or be detrimental. Further validation on different cohorts of people is required before using these synthetic data in clinical settings. Despite the important ethical considerations we shed light on, we believe these 3D representations of organs could be of great use, on the one hand for research purposes to run large-scale statistical analyses on different cohorts and highlight associations with patient metadata. These cost-effective synthetic data could be beneficial to improve the visualisation of bones and organs and be deployed widely. 4.5. Limitations A limitation of this work lies in its computational time and resource requirements. Despite advances in sampling approaches, our model still requires more than 50 steps to generate high-quality images. This is a limitation of all diffusion-based generation models. Moreover, the reconstructed meshes may not always be plausible.
To increase the quality, we may need to use a larger object dataset like Objaverse-XL[7] and manually curate the dataset to filter out uncommon shapes such as point clouds, textureless 3D models and more complex scene representation. Figure 4. 3D reconstruction from single-view on GSO dataset. MVDiff produces consistent novel views and improves the 3D geometry compared to baselines. One-2-3-45 and SyncDreamer tend to generate overly-smoothed and incomplete 3D objects, in particular the sofa. 5. Conclusion In our work, we aimed to address the problem of inconsistencies in multi-view synthesis from single view. We specifically apply epipolar attention mechanisms as well as multiview attention to aggregate features from multiple views. We propose a simple and flexible framework capable of generating high-quality multi-view images conditioned on an arbitrary number of images. 5.1. Future Work Combining with graphics. In this study, we show that we can generate view consistent 3D objects by learning geometrical correspondences between views during training. We modified the latent diffusion U-Net model to feed multi view in order to generate consistent multi view for 3D reconstruction. Future work can explore utilising knowledge about lighting, and texture to generate more diverse range of 3D shapes with varying lighting and texture. Acknowledgements E.B is supported by the Centre for Doctoral Training in Sustainable Approaches to Biomedical Science: Responsible and Reproducible Research (SABS: R3), University of Oxford (EP/S024093/1). P.B. is supported by the UKRI CDT in AI for Healthcare http://ai4health.io (Grant No. P/S023283/1).", + "additional_info": [ + { + "url": "http://arxiv.org/abs/2404.14240v1", + "title": "Collaborative Filtering Based on Diffusion Models: Unveiling the Potential of High-Order Connectivity", + "abstract": "A recent study has shown that diffusion models are well-suited for modeling\nthe generative process of user-item interactions in recommender systems due to\ntheir denoising nature. However, existing diffusion model-based recommender\nsystems do not explicitly leverage high-order connectivities that contain\ncrucial collaborative signals for accurate recommendations. Addressing this\ngap, we propose CF-Diff, a new diffusion model-based collaborative filtering\n(CF) method, which is capable of making full use of collaborative signals along\nwith multi-hop neighbors. Specifically, the forward-diffusion process adds\nrandom noise to user-item interactions, while the reverse-denoising process\naccommodates our own learning model, named cross-attention-guided multi-hop\nautoencoder (CAM-AE), to gradually recover the original user-item interactions.\nCAM-AE consists of two core modules: 1) the attention-aided AE module,\nresponsible for precisely learning latent representations of user-item\ninteractions while preserving the model's complexity at manageable levels, and\n2) the multi-hop cross-attention module, which judiciously harnesses high-order\nconnectivity information to capture enhanced collaborative signals. 
Through\ncomprehensive experiments on three real-world datasets, we demonstrate that\nCF-Diff is (a) Superior: outperforming benchmark recommendation methods,\nachieving remarkable gains up to 7.29% compared to the best competitor, (b)\nTheoretically-validated: reducing computations while ensuring that the\nembeddings generated by our model closely approximate those from the original\ncross-attention, and (c) Scalable: proving the computational efficiency that\nscales linearly with the number of users or items.", + "authors": "Yu Hou, Jin-Duk Park, Won-Yong Shin", + "published": "2024-04-22", + "updated": "2024-04-22", + "primary_cat": "cs.IR", + "cats": [ + "cs.IR", + "cs.AI", + "cs.IT", + "cs.LG", + "cs.SI", + "math.IT" + ], + "label": "Original Paper", + "paper_cat": "Diffusion AND Model", + "gt": "Diffusion models [15, 36] have become one of recent emerging topics thanks to their state-of-the-art performance in various do- mains, including computer vision [10, 15, 31], natural language processing [2, 33], and multi-modal deep learning [3, 32]. Diffusion models, categorized as deep generative models, gradually perturb the input data by adding random noise in the forward-diffusion process and then recover the original input data by learning in the reverse-denoising process, step by step. Due to their denoising nature, diffusion models align well with recommender systems, which can be viewed as a denoising process because user\u2013item historical interactions are naturally noisy and diffusion models can learn to recover the original interactions based on corrupted ones [20, 43, 47]. Recent efforts have verified the effectiveness of diffu- sion models for sequential recommendations [21, 24, 48, 50], where the process of modeling sequential item recommendations mirrors the step-wise process of diffusion models. However, the application of diffusion models to recommender systems has yet been largely underexplored. On one hand, one of the dominant techniques used in recom- mender systems is collaborative filtering (CF), where attention has been paid to model-based approaches including matrix factorization (MF) [19, 49] and deep learning [13, 14, 22, 29, 44, 51] (e.g., graph neural networks (GNNs) [13, 29, 44]). CF-based recommender sys- tems have achieved great success in many real-world applications, due to their simplicity, efficiency, and effectiveness, while aiming to learn multi-hop relationships among users and items. For exam- ple, the message passing mechanism in GNNs, being increasingly used in the tasks of recommendation, captures collaborative signals in high-order connectivities by aggregating features of neighbors. Figure 1a illustrates the multi-hop neighbors used for CF with an example involving two users. It is seen that, although User 1 and User 3 have different direct interactions, they share similar 2-hop (User 2) and 3-hop (Item 2, Item 5) neighbors, which implies that User 1 (resp. User 3) is highly likely to prefer Item 4 consumed by User 3 (resp. Item 1 and Item 3 consumed by User 1). On the other hand, unlike the existing CF techniques using MF and GNNs, it is not straightforward to grasp how to exploit such high-order connectivity information from a diffusion model\u2019s per- spective, as shown in Figure 1b. 
Recent studies on diffusion model- arXiv:2404.14240v1 [cs.IR] 22 Apr 2024 SIGIR \u201924, July 14\u201318, 2024, Washington, DC, USA Yu Hou, Jin-Duk Park, & Won-Yong Shin (a) Multi-hop neighbors (b) New challenges Figure 1: Illustration showing (a) neighbors of User 1 and User 3 up to 3 hops and (b) how such high-order connectivity information can be potentially encoded and infused into the diffusion model-based learning system. Here, {u0, \u00b7 \u00b7 \u00b7 , u\ud835\udc47} are the encoded information of direct user\u2013item interactions at each step, and u\u2032 is the encoded high-order connectivity information. based recommender systems [21, 24, 40, 43, 48, 50] often overlooked the exploration of multi-hop similarity/proximity among nodes, albeit the core mechanism of CF in achieving satisfactory perfor- mance. In this context, even with recent attempts to develop recom- mender systems via diffusion models [21, 24, 40, 43, 48, 50], a natural question arising is: \u201chow can high-order connectivity information be efficiently and effectively incorporated into recommender systems based on diffusion models?\u201d. To answer this question, we would like to outline the following two design challenges: \u2022 C1. how to ensure the complexity of the learning model (to be designed) at an acceptable level even when including high-order connectivity information; \u2022 C2. how to judiciously link the high-order connectivity in- formation with the direct user\u2013item interactions under a diffusion-model framework. It is worth noting that leveraging direct user\u2013item interactions (i.e., direct neighbors) of each individual is rather straightforward so that diffusion models can learn the distribution of these interactions (see, e.g., [43] for such an attempt). However, the exploration of high- order collaborative signals among users and items inevitably poses technical challenges. First, the infusion of high-order connectivity information may lead to an increased memory and computational burden, as training diffusion models is known to be quite expensive in terms of space and time [15, 36]. This complexity issue will be severe with an increasing number of users and items. Second, injecting high-order connectivities in an explicit manner into a learning system within a diffusion-model framework is technically abstruse. As shown in Figure 1b, while direct user\u2013item interactions can be readily fed to the diffusion model-based learning system, the accommodation of high-order collaborative signals necessitates a complex and challenging integration task. To address these aforementioned challenges, we make the first attempt towards developing a lightweight CF method based on diffusion models, named CF-Diff. (Idea 1) The proposed CF-Diff method naturally involves two distinct processes, the forward-diffusion process and the reverse- denoising process. The forward-diffusion process gradually adds random noise to the individual user\u2013item interactions, while the reverse-denoising process aims to gradually recover these interac- tions by infusing high-order connectivities, achieved through our proposed learning model to be specified later. (Idea 2) As one of our main contributions, we next design an effi- cient yet effective learning model for the reverse-denoising process, dubbed cross-attention-guided multi-hop autoencoder (CAM-AE), which is capable of infusing and learning high-order connectivities without incurring additional computational costs and scalability issues. 
Our CAM-AE model consist of three primary parts: a high- order connectivity encoder, an attention-aided AE module, and a multi-hop cross-attention module. First, we initially pre-process the user\u2013item interactions in the sense of extracting and encoding \u2018per- user\u2019 connectivity information from pre-defined multi-hop neigh- boring nodes. Next, we incorporate the attention-aided AE module into CAM-AE to precisely learn latent representations of the noisy user\u2013item interactions while preserving the model\u2019s complexity at manageable levels by controlling the dimension of latent represen- tations (solving the challenge C1). Lastly, inspired by conditional diffusion models [31], we incorporate the multi-hop cross-attention module into CAM-AE since high-order connectivity information can be seen as a condition for denoising the original user\u2013item in- teractions. This module takes advantages of the conditional nature of these connectivities while connecting with the direct user\u2013item interactions in the reverse-denoising process, thereby enriching the collaborative signal (solving the challenge C2). Our main contributions are summarized as follows: \u2022 Novel methodology: We propose CF-Diff, a novel diffusion model-based CF method featuring our specially designed learning model, CAM-AE. This model is composed of 1) the encoder of high-order connectivity information, 2) the attention-aided AE module primarily designed for preserv- ing the model\u2019s complexity at manageable levels, and 3) the multi-hop cross-attention module for accommodating high- order connectivity information. \u2022 Extensive evaluations: Through comprehensive experi- mental evaluations on three real-world benchmark datasets, including two large-scale datasets, we demonstrate (a) the superiority of CF-Diff, showing substantial gains up to 7.29% in terms of NDCG@10 compared to the best competitor, (b) the effectiveness of core components in CAM-AE, and (c) the impact of multi-hop neighbors in CF-Diff. \u2022 Theoretical findings: We theoretically prove that (a) our learning model\u2019s embeddings closely approximate those from the (computationally more expensive) original cross- attention, and (b) the model\u2019s computational complexity scales linearly with the maximum between the number of users and the number of items. This is further supported by empirical verifications, confirming the scalability of CF-Diff.", + "main_content": "Let \ud835\udc62\u2208U and \ud835\udc56\u2208I denote a user and an item, respectively, where U and I denote the sets of all users and all items, respectively. Historical interactions of a user \ud835\udc62\u2208U with items are represented as a binary vector u \u2208{0, 1}|I| whose \ud835\udc56-th entry is 1 if there exists Collaborative Filtering Based on Diffusion Models: Unveiling the Potential of High-Order Connectivity SIGIR \u201924, July 14\u201318, 2024, Washington, DC, USA Figure 2: The schematic overview of CF-Diff when both 2hop and 3-hop neighboring nodes are taken into account. implicit feedback (such as a click or a view) between user \ud835\udc62and item \ud835\udc56\u2208I, and 0 otherwise.1 2.2 Overview of CF-Diff We describe the methodology of CF-Diff, a new diffusion modelbased CF method that is capable of reflecting high-order connectivity information, revealing co-preference patterns between users and items, for accurate recommendations. 
We recall that recent recommendation methods using diffusion models [21, 24, 40, 43, 48, 50] focus primarily on leveraging only the direct user\u2013item interactions and overlook the collaborative signal in high-order connectivities during training. Our study aims to fill this gap by infusing highorder connectivity information into the proposed method, which poses two main design challenges that we mentioned earlier: preserving the learning model\u2019s complexity at an acceptable level (C1) and learning complex high-order connectivities at a fine-grained level (C2). To tackle these challenges, as a core module of CF-Diff, we develop an innovative learning model, CAM-AE. In the CAM-AE model, we propose to use a multi-hop cross-attention mechanism to infuse multi-hop neighborhood information from the target user during training, thereby enriching the collaborative signal, which however causes additional computational costs. To counter this, we next employ an attention-aided AE module, enabling to preserving the model\u2019s complexity at manageable levels. Note that diffusion models can be viewed as partitioning the denoising process of an AE into a series of finer sub-processes [9, 16], which can capture more delicate recovery details. Since CFDiff is built upon such diffusion models, it naturally involves two distinct processes, namely the forward-diffusion process and the reverse-denoising process, achieved with a tailored neural network architecture in CAM-AE. The schematic overview of the CF-Diff method is illustrated in Figure 2, and each process in CF-Diff is summarized as follows. (1) Forward-diffusion process (Section 2.3): The forward diffusion, aligning with standard diffusion models, gradually adds Gaussian noise to the user\u2013item historical interactions, as shown in the upper left part of Figure 2. (2) Reverse-denoising process (Section 2.4): We aim to gradually recover the original user\u2013item interactions from noisy ones. This is achieved by using the proposed learning model, CAM-AE (to be specified Section 3), which infuses high-order 1The unbolded \ud835\udc62represents a user, while the bolded u represents a certain user\u2019s interaction vector as utilized in the proposed method. connectivities to iteratively guide the reverse-denoising process. To bridge the historical one-hop interactions and multihop neighbors, our CAM-AE model integrates an attentionaided AE with a cross-attention architecture, progressively recovering user\u2013item interactions by leveraging high-order connectivity information (see the right part of Figure 2). 2.3 Forward-Diffusion Process We denote the initial state of a specific user \ud835\udc62\u2208U as u0 = u.2 In the forward-diffusion process, we gradually insert Gaussian noise in the initial user\u2013item interactions u0 over \ud835\udc47steps, producing a sequence of noisy samples u1, . . . , u\ud835\udc47, denoted as u1:\ud835\udc47(see Figure 2), which can be modeled as \ud835\udc5e(u1:\ud835\udc47|u0 ) = \ud835\udc47 \u00d6 \ud835\udc61=1 \ud835\udc5e(u\ud835\udc61|u\ud835\udc61\u22121 ) , (1) where \ud835\udc5e(u\ud835\udc61|u\ud835\udc61\u22121 ) = N \u0010 u\ud835\udc61; \u221a\ufe01 1 \u2212\ud835\udefd\ud835\udc61u\ud835\udc61\u22121, \ud835\udefd\ud835\udc61I \u0011 (2) represents the transition of adding noise from states u\ud835\udc61\u22121 to u\ud835\udc61via a Gaussian distribution [15, 36]. Here, \ud835\udc61\u2208{1, . . . 
,\ud835\udc47} refers to the diffusion step; N denotes the Gaussian distribution; and \ud835\udefd\ud835\udc61\u2208(0, 1) controls the Gaussian noise scales added at each time step \ud835\udc61. To generate the noisy sample u\ud835\udc61from \ud835\udc5e(u\ud835\udc61|u\ud835\udc61\u22121 ), we employ the reparameterization trick [18], expressed as u\ud835\udc61= \u221a\ufe01 1 \u2212\ud835\udefd\ud835\udc61u\ud835\udc61\u22121 + \u221a\ufe01 \ud835\udefd\ud835\udc61\ud835\udf00\ud835\udc61\u22121, where \ud835\udf00\ud835\udc61\u22121 \u223cN (0, I). This process is iteratively applied until we obtain the final sample u\ud835\udc47at time step \ud835\udc47. It is noteworthy that, in contrast to existing diffusion models, our approach focuses on adding noise to user\u2013item interactions from a single user\u2019s perspective, which originates from the nature of the denoising process in variational AE (VAE)-based CF [20]. 2.4 Reverse-Denoising Process In the reverse-denoising process, the estimation of the distribution \ud835\udc5e(u\ud835\udc61\u22121 |u\ud835\udc61) is technically not easy as it requires using the entire dataset. Therefore, a neural network model \ud835\udc5d\ud835\udf03is employed to approximate such conditional probabilities [15]. Starting from u\ud835\udc47, the reverse-denoising process gradually recovers u\ud835\udc61\u22121 from u\ud835\udc61via the denoising transition step. However, only relying on user\u2013item interactions do not ensure the high-quality recovery for CF-based recommendations, as high-order connectivity information plays an important role in guaranteeing state-of-the-art performance of CF, as shown in Figure 1a. To address this, we integrate multi-hop neighbors of the target user \ud835\udc62(denoted as u\u2032) into our learning model, thereby enhancing recommendation accuracies. This differs from original diffusion models, which focus on denoising solely from noisy samples (i.e., \ud835\udc5d\ud835\udf03(u\ud835\udc61\u22121 |u\ud835\udc61) in [15]). In other words, our approach not only denoises from noisy samples but also enriches the denoising process by exploiting high-order connectivities. The denoising transition via the Gaussian distribution is formulated as follows [15, 31]: \ud835\udc5d\ud835\udf03(u0:\ud835\udc47) = \ud835\udc5d(u\ud835\udc47) \ud835\udc47 \u00d6 \ud835\udc61=1 \ud835\udc5d\ud835\udf03 \u0000u\ud835\udc61\u22121 \f \fu\ud835\udc61, u\u2032 \u0001 , (3) 2For notational convenience, since each user \ud835\udc62experiences the forward-diffusion and reverse-denoising processes independently, we do not use the user index in \ud835\udc62unless it causes any confusion. SIGIR \u201924, July 14\u201318, 2024, Washington, DC, USA Yu Hou, Jin-Duk Park, & Won-Yong Shin where \ud835\udc5d\ud835\udf03 \u0000u\ud835\udc61\u22121 \f \fu\ud835\udc61, u\u2032 \u0001 = N \u0000u\ud835\udc61\u22121; \ud835\udf41\ud835\udf03 \u0000u\ud835\udc61, u\u2032,\ud835\udc61\u0001 , \ud835\udeba\ud835\udf03 \u0000u\ud835\udc61, u\u2032,\ud835\udc61\u0001\u0001 . (4) Here, \ud835\udf41\ud835\udf03(u\ud835\udc61, u\u2032,\ud835\udc61) and \ud835\udeba\ud835\udf03(u\ud835\udc61, u\u2032,\ud835\udc61) are the mean and covariance of the Gaussian distribution predicted by the neural network with learnable parameters \ud835\udf03. Besides, to maintain training stability and simplify calculations, we ignore learning of \ud835\udeba\ud835\udf03(u\ud835\udc61, u\u2032,\ud835\udc61) in Eq. 
(4) and set \ud835\udeba\ud835\udf03(u\ud835\udc61, u\u2032,\ud835\udc61) = \ud835\udefd\ud835\udc61I by following [15]. After leaning the mean \ud835\udf41\ud835\udf03(u\ud835\udc61, u\u2032,\ud835\udc61) in the model, we can obtain the recovered u\ud835\udc61\u22121 by sampling from \ud835\udc5d\ud835\udf03(u\ud835\udc61\u22121 |u\ud835\udc61, u\u2032 ). This process is iteratively applied until we obtain an estimate of the original sample u0. The neural network architecture of CAM-AE is designed in the sense of judiciously infusing high-order connectivities in the reverse-denoising process. To this end, CAM-AE consists of two key components: 1) an attention-aided AE module precisely learns latent representations of the noisy user\u2013item interactions, helping preserve the complexity manageable (solving the challenge C1), and 2) a multi-hop cross-attention module, which accommodates highorder connectivity information to facilitate the reverse-denoising process, thus capturing the enriched collaborative signal (solving the challenge C2). 3 LEARNING MODEL: CAM-AE In this section, we elaborate on the proposed CAM-AE model, comprising an attention-aided AE module and a multi-hop crossattention module. After showing how to extract and encode multihop neighborhood information for a given bipartite graph, we describe implementation details of each module in CAM-AE. We then explain how to optimize our learning model. Finally, we provide analytical findings, which theoretically validate the efficiency of CAM-AE. 3.1 High-Order Connectivity Encoder To extract multi-hop neighbors of a given user, we may use a bipartite graph constructed by establishing edges based on all user\u2013item interactions. However, using such a bipartite graph will result in a huge memory and computational burden during training. To solve this practical issue, we pre-process the user\u2013item interactions in such a way of initially extracting multi-hop neighbors of a user. This extracted \u2018per-user\u2019 connectivity information is then made available in the reverse-denoising process to assist recovery of the original user\u2013item interactions (see Figure 2). Given a target user\u2019s historical interactions u, we explain how to explore multi-hop neighbors along paths within the user\u2013item bipartite graph. In our study, we encode high-order connectivity information (i.e., high-order collaborative signals) up to \ud835\udc3b-hop neighbors as in the following form: u\u2032 = h u(2), . . . , u(\ud835\udc3b)i , (5) where u(\u210e) = 1 \ud835\udc41\u210e\u22121,\u210e r \u0010 G (\ud835\udc62,\u210e) , c(\u210e)\u0011 (6) for \u210e= 2, \u00b7 \u00b7 \u00b7 , \ud835\udc3b.3 Here, r(\u00b7, \u00b7) is the vector-valued function returning a multi-hot encoded vector where one is assigned only to the 3If \u210eis even, then \f \fu(\u210e)\f \f = |U|. Otherwise, \f \fu(\u210e)\f \f = |I|. However, to tractably handle u\u2032, we can set the dimensionality of each u(\u210e) to max {|U| , |I|}. Figure 3: Extraction and encoding of 2-hop and 3-hop neighbors of the target user (User 1) as well as direct neighbors for a given bipartite graph. 
elements corresponding to \u210e-hop neighbors of user \ud835\udc62; G (\ud835\udc62,\u210e) indicates the set of \u210e-hop neighbors of user \ud835\udc62; c(\u210e) \u2208R| G(\ud835\udc62,\u210e)|\u00d71 is the integer vector, each of which represents the number of incoming links from (\u210e\u22121)-hop neighbors of user \ud835\udc62to each of \u210e-hop neighbors; and \ud835\udc41\u210e\u22121,\u210eis the total number of interactions between (\u210e\u22121)-hop and \u210e-hop neighbors of of user \ud835\udc62. Now, let us show an explicit form of encoded \u210e-hop neighborhood information u(\u210e) along with the following example. Example 1. Consider the target user (User 1) in the user\u2013item bipartite graph consisting of 3 users and 5 items, as illustrated in Figure 3. Here, it follows that u = [1 1 0 0 1]\ud835\udc47as User 1 has interacted with Item 1, Item 2, and Item 5. Since the 2-hop neighbors of User 1 are User 2, User 3 and User 3 has two incoming links, we have u(2) = [0 1 3 2 3]\ud835\udc47normalized to the total number of interactions at the second hop. Similarly, we obtain u(3) = [0 0 1 3 2 3 0]\ud835\udc47. 3.2 Attention-Aided AE Module VAE-based CF [20] shows great potential in capturing underlying patterns by encoding user\u2013item interactions into a latent space. Similarly, in the CAM-AE model, we would like to design lightweight encoders to project the user\u2013item interactions into a latent space, aiming to capture high-level patterns while keeping the computations manageable by controlling the latent dimension. This design principle enables us to solve the challenge C1. In CAM-AE, the attention-added AE module involves hop-specific encoders. As illustrated in Figure 2, an encoder E1 (\u00b7) is adopted to project user \ud835\udc62\u2019s noisy interactions u\ud835\udc61into a latent space, represented by the latent embedding z\ud835\udc61\u2208R \ud835\udc58\u00d71 with its dimensionality \ud835\udc58. Likewise, another hop-specific encoder E\u210e(\u00b7) generates embeddings for the encoded information of \u210e-hop neighbors of user \ud835\udc62, u(\u210e), yielding z(\u210e) \u2208R \ud835\udc58\u00d71. Similarly as in [42], these two encoders E1 (\u00b7) and E\u210e(\u00b7) are implemented as linear transformations, which are formally expressed as z\ud835\udc61= E1 (u\ud835\udc61) = E1u\ud835\udc61, (7) z(\u210e) = E\u210e \u0010 u(\u210e)\u0011 = E\u210eu(\u210e), (8) where E1 \u2208R \ud835\udc58\u00d7|I| and E\u210e\u2208R \ud835\udc58\u00d7|u(\u210e) | represents the transformation matrices. Figure 2 illustrates the case where the embeddings z(2) Collaborative Filtering Based on Diffusion Models: Unveiling the Potential of High-Order Connectivity SIGIR \u201924, July 14\u201318, 2024, Washington, DC, USA and z(3) of both 2-hop and 3-hop neighbors of a target user are generated.4 We can preserve the CAM-AE model\u2019s complexity at manageable levels through these linear transformations that reduce the dimension of latent representations. We turn to addressing a decoder D (\u00b7), which is adopted to recover the mean value of \ud835\udc5d\ud835\udf03(u\ud835\udc61\u22121 |u\ud835\udc61, u\u2032 ) using the embedding, denoted as \u00af z\ud835\udc61, as input that is returned by the multi-hop crossattention module (to be specified in Section 3.3), as depicted in Figure 2. 
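Before turning to the decoder, the following is a small sketch of the per-user hop encoding in the spirit of Eqs. (5)-(6) and of the linear encoders of Eqs. (7)-(8). The function names and the matrix-power walk counting (which does not prune walks revisiting earlier hops, unlike the paper's exact counting) are our simplifications, not the authors' implementation.

```python
import numpy as np

def encode_multi_hop(R: np.ndarray, user: int, num_hops: int):
    """Per-user hop encoding in the spirit of Eqs. (5)-(6); a simplified sketch.

    R: |U| x |I| binary interaction matrix.
    Returns {h: normalised vector u^(h)} for h = 2..num_hops, padded to
    max(|U|, |I|) as in the paper's footnote.
    """
    num_users, num_items = R.shape
    dim = max(num_users, num_items)
    reach = R[user].astype(float)        # 1-hop: items the user has interacted with
    item_side = True
    encoded = {}
    for h in range(2, num_hops + 1):
        # Move one hop across the bipartite graph (items -> users or users -> items).
        reach = reach @ R.T if item_side else reach @ R
        item_side = not item_side
        vec = np.zeros(dim)
        vec[: reach.shape[0]] = reach / max(reach.sum(), 1.0)  # normalise by interaction count
        encoded[h] = vec
    return encoded

# Each u^(h), like the noisy interaction vector u_t, is then projected to a
# k-dimensional latent with a learned linear map, as in Eqs. (7)-(8),
# e.g. z_h = E_h @ encoded[h] with E_h of shape (k, dim).
```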
The decoder is formulated as follows: \u02c6 \ud835\udf41\ud835\udf03= D (\u00af z\ud835\udc61) = D\u00af z\ud835\udc61, (9) where D \u2208R |I|\u00d7\ud835\udc58is the transformation matrix in the decoder. Then, u\ud835\udc61\u22121 can be sampled from \ud835\udc5d\ud835\udf03(u\ud835\udc61\u22121 |u\ud835\udc61, u\u2032 ) = N (u\ud835\udc61\u22121; \u02c6 \ud835\udf41\ud835\udf03, \ud835\udefd\ud835\udc61I). 3.3 Multi-Hop Cross-Attention Module The CAM-AE model is enlightened by conditional diffusion models [31], which achieved impressive success in various fields by using the cross-attention mechanism [39] to integrate additional conditions. In CAM-AE, high-order connectivity information in Eq. (5) can be regarded as a condition for denoising the original user\u2013item interactions u0, following the principle of conditional diffusion models [31]. In this study, to effectively infuse high-order connectivities into our learning model, we propose the multi-hop cross-attention module. This module judiciously harnesses the conditional nature of these connectivities while connecting with the direct user\u2013item interactions in the reverse-denoising process. This design principle is established to fundamentally solve the challenge C2. In the multi-hop cross-attention module, we start by expanding the dimension of z\ud835\udc61\u2208R \ud835\udc58\u00d71 and z(\u210e) \u2208R \ud835\udc58\u00d71 (i.e., the output embeddings of encoders E1 and E\u210e) to obtain v\ud835\udc61\u2208R \ud835\udc58\u00d7\ud835\udc51and q(\u210e) \u2208R \ud835\udc58\u00d7\ud835\udc51 for improving the expressiveness. This expansion can be implemented as v\ud835\udc61= z\ud835\udc61E\ud835\udc63and q(\u210e) = z(\u210e)E\ud835\udc5e, where E\ud835\udc63\u2208R1\u00d7\ud835\udc51and E\ud835\udc5e\u2208R1\u00d7\ud835\udc51are the transformation matrices with \ud835\udc51being the expended dimensionality. Then, the resulting embedding of \u210e-hop neighbors of user \ud835\udc62, q(\u210e), is integrated into v\ud835\udc61using the multi-hop cross-attention module: \ud835\udc34\ud835\udc61\ud835\udc61\ud835\udc52\ud835\udc5b\ud835\udc61\ud835\udc56\ud835\udc5c\ud835\udc5b\u210e \u0010 Q(\u210e), K\ud835\udc61, V\ud835\udc61 \u0011 := \ud835\udc60\ud835\udc5c\ud835\udc53\ud835\udc61\ud835\udc5a\ud835\udc4e\ud835\udc65 Q(\u210e)K\ud835\udc47 \ud835\udc61 \u221a \ud835\udc51 ! V\ud835\udc61, (10) where Q(\u210e) =q(\u210e)W\ud835\udc44 \ud835\udf03, K\ud835\udc61=v\ud835\udc61W\ud835\udc3e \ud835\udf03, and V\ud835\udc61=v\ud835\udc61W\ud835\udc49 \ud835\udf03; and\ud835\udc60\ud835\udc5c\ud835\udc53\ud835\udc61\ud835\udc5a\ud835\udc4e\ud835\udc65(\u00b7) is the softmax function. Here, n W\ud835\udc44 \ud835\udf03, W\ud835\udc3e \ud835\udf03, W\ud835\udc49 \ud835\udf03 o \u2208R \ud835\udc51\u00d7\ud835\udc51are trainable parameters. Figure 2 includes the multi-hop cross-attention module (see the light red blocks in the reverse-denoising process) when 2-hop and 3-hop neighbors of the target user are taken into account. Due to the fact that the aforementioned process is basically built upon linear transformations that lack the ability to capture the intrinsic data complexity, a per-hop forward operation \ud835\udc53\u210e(\u00b7) using non-linear transformations is applied to \ud835\udc34\ud835\udc61\ud835\udc61\ud835\udc52\ud835\udc5b\ud835\udc61\ud835\udc56\ud835\udc5c\ud835\udc5b\u210e[39]. 
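A minimal sketch of the per-hop cross-attention in Eq. (10) is given below; it assumes the expanded embeddings q^(h) and v_t of shape (k, d) described above, and is illustrative rather than the released code.

```python
import torch
import torch.nn as nn

class MultiHopCrossAttention(nn.Module):
    """Single cross-attention block of Eq. (10); a hedged sketch."""

    def __init__(self, d: int):
        super().__init__()
        self.W_q = nn.Linear(d, d, bias=False)
        self.W_k = nn.Linear(d, d, bias=False)
        self.W_v = nn.Linear(d, d, bias=False)
        self.d = d

    def forward(self, q_h: torch.Tensor, v_t: torch.Tensor) -> torch.Tensor:
        # q_h: (k, d) expanded embedding of the h-hop neighbours of the user.
        # v_t: (k, d) expanded embedding of the noisy interaction vector u_t.
        Q, K, V = self.W_q(q_h), self.W_k(v_t), self.W_v(v_t)
        attn = torch.softmax(Q @ K.transpose(-2, -1) / self.d ** 0.5, dim=-1)
        return attn @ V
```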
We stack \ud835\udc41identical layers, each consisting of cross-attention and non-linear transformation, with the output from the last layer aggregated to 4Although the example in Figure 2 deals with up to 3-hop neighbors, it is straightforward to extend our module to the case of leveraging general \u210e-hop neighbors. form \u00af z\ud835\udc61\u2208R\ud835\udc58\u00d7\ud835\udc51, calculated as \u00af z\ud835\udc61= \ud835\udc3b \u2211\ufe01 \u210e=2 \ud835\udefc\u210e\ud835\udc53\u210e(\ud835\udc34\ud835\udc61\ud835\udc61\ud835\udc52\ud835\udc5b\ud835\udc61\ud835\udc56\ud835\udc5c\ud835\udc5b\u210e), (11) where \ud835\udc3b \u00cd \u210e=2 \ud835\udefc\u210e= 1; \ud835\udefc\u210eis the weight balancing among different \ud835\udc53\u210e(\u00b7)\u2019s specific to hop \u210e; and \ud835\udc3bis the number of hops. Finally, \u00af z\ud835\udc61is the input of the decoder D (\u00b7) in Eq. (9). It is worthwhile to note that both v\ud835\udc61and q(\u210e) originate from the same user, offering two different perspectives of the same data. This dual perspective is beneficial for precisely capturing the collaborative signal. In other words, through the cross-attention mechanism in CAM-AE, high-order connectivities can significantly improve the reverse-denoising process, thereby ultimately enhancing recommendation accuracies. 3.4 Optimization In our learning model, the denoising transition \ud835\udc5d\ud835\udf03(u\ud835\udc61\u22121 |u\ud835\udc61, u\u2032 ) = N (u\ud835\udc61\u22121; \ud835\udf41\ud835\udf03(u\ud835\udc61, u\u2032,\ud835\udc61) , \ud835\udefd\ud835\udc61I) is forced to approximate the tractable distribution \ud835\udc5e(u\ud835\udc61\u22121 |u\ud835\udc61, u0 ) = N (u\ud835\udc61\u22121; \u02dc \ud835\udf41(u\ud835\udc61, u0) , \ud835\udefd\ud835\udc61I) (note that the mean \u02dc \ud835\udf41(u\ud835\udc61, u0) can be computed via Bayes\u2019 rule as shown in [15]: \ud835\udc5e(u\ud835\udc61\u22121 |u\ud835\udc61, u0 ) = \ud835\udc5e(u\ud835\udc61|u\ud835\udc61\u22121, u0 ) \ud835\udc5e(u\ud835\udc61\u22121|u0 ) \ud835\udc5e(u\ud835\udc61|u0 ) ). Following this approximation, we can generate u\ud835\udc61\u22121 from u\ud835\udc61progressively until u0 is reconstructed. Figure 2 visualizes a single denoising step from u\ud835\udc47to u\ud835\udc47\u22121, which is repeated \ud835\udc47times to obtain u0. To optimize the parameter \ud835\udf03, our model aims at minimizing the variational lower bound (VLB) [15, 18] for the observed user\u2013item interactions u0 alongside the following loss: LVLB = L0 + \u2211\ufe01\ud835\udc47 \ud835\udc61=2 L\ud835\udc61\u22121, (12) where L0 = E\ud835\udc5e[\u2212log\ud835\udc5d\ud835\udf03(u0 |u1, u\u2032 )] is the reconstruction term to recover the original interactions u0; and L\ud835\udc61\u22121 is the denoising matching term, regulating \ud835\udc5d\ud835\udf03(u\ud835\udc61\u22121 |u\ud835\udc61, u\u2032 ) to align with the tractable distribution \ud835\udc5e(u\ud835\udc61\u22121 |u\ud835\udc61, u0 ), served as the ground truth, and is given by L\ud835\udc61\u22121 = E\ud835\udc5e[\ud835\udc37KL (\ud835\udc5e(u\ud835\udc61\u22121 |u\ud835\udc61, u0 ) \u2225\ud835\udc5d\ud835\udf03(u\ud835\udc61\u22121 |u\ud835\udc61, u\u2032 ) )] = E\ud835\udc5e h 1 2\ud835\udefd\ud835\udc61 \u0002 \u2225\ud835\udf41\ud835\udf03(u\ud835\udc61, u\u2032,\ud835\udc61) \u2212\u02dc \ud835\udf41(u\ud835\udc61, u0)\u22252\u0003i , (13) where \ud835\udc37KL(\u00b7\u2225\u00b7) denotes the Kullback\u2013Leibler (KL) divergence between two distributions. 
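To make the optimisation objective concrete, here is a hedged sketch of the two VLB terms in Eqs. (12)-(13); the argument names are ours, and the closed forms follow from the shared beta_t * I covariance assumed above rather than from the authors' code.

```python
import torch

def denoising_matching_term(mu_theta: torch.Tensor,
                            mu_tilde: torch.Tensor,
                            beta_t: float) -> torch.Tensor:
    """L_{t-1} of Eq. (13): the KL between two Gaussians sharing the
    covariance beta_t * I reduces to a scaled squared error between means."""
    return ((mu_theta - mu_tilde) ** 2).sum(dim=-1) / (2.0 * beta_t)

def reconstruction_term(x0_hat: torch.Tensor, x0: torch.Tensor) -> torch.Tensor:
    """L_0 of Eq. (12), up to additive constants."""
    return ((x0_hat - x0) ** 2).sum(dim=-1)
```

In practice, a diffusion step t is typically sampled uniformly per batch, with the reconstruction term used at t = 1 and the denoising matching term otherwise, as in [15].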
3.5 Theoretical Analyses In this subsection, we are interested in theoretically showing the efficiency of the CAM-AE model. In CAM-AE, we use an AE to generate embeddings, reducing computations to an acceptable level by controlling the embedding dimension \ud835\udc58. We first establish the following theorem, which analyzes that the potential difference incurred by using our low-complexity modules in CAM-AE is negligibly small compared to the (computationally more expensive) original cross-attention [39], defined as \ud835\udc34\ud835\udc61\ud835\udc61\ud835\udc52\ud835\udc5b\ud835\udc61\ud835\udc56\ud835\udc5c\ud835\udc5b(Q, K, V) := \ud835\udc60\ud835\udc5c\ud835\udc53\ud835\udc61\ud835\udc5a\ud835\udc4e\ud835\udc65 \u0012 QK\ud835\udc47 \u221a \ud835\udc51 \u0013 V, (14) SIGIR \u201924, July 14\u201318, 2024, Washington, DC, USA Yu Hou, Jin-Duk Park, & Won-Yong Shin where Q = q \u02dc W\ud835\udc44 \ud835\udf03, K = k \u02dc W\ud835\udc3e \ud835\udf03, and V = v \u02dc W\ud835\udc49 \ud835\udf03. Here, q\u2208Rmax{|U|,|I|}\u00d7 \ud835\udc51 and {k, v} \u2208R|I|\u00d7\ud835\udc51are the embedding matrices and n \u02dc W\ud835\udc44 \ud835\udf03, \u02dc W\ud835\udc3e \ud835\udf03, \u02dc W\ud835\udc49 \ud835\udf03 o \u2208 R \ud835\udc51\u00d7\ud835\udc51are trainable parameters of \ud835\udc34\ud835\udc61\ud835\udc61\ud835\udc52\ud835\udc5b\ud835\udc61\ud835\udc56\ud835\udc5c\ud835\udc5bin Eq. (14). Theorem 1. Suppose that max {|U| , |I|} is sufficiently large. If \ud835\udc58\u22655ln (max {|U| , |I|})\u000e\u0000\ud835\udf002 \u2212\ud835\udf003\u0001, then there exist matrices E\ud835\udc44\u2208 R\ud835\udc58\u00d7max{|U|,|I|}, E\ud835\udc3e, E\ud835\udc49\u2208R\ud835\udc58\u00d7|I| and D \u2208R|I|\u00d7\ud835\udc58such that Pr \u00a9 \u00ad \u00ad \u00ad \u00ab \f \f \f \f \f \f \f \f v u u u u u t \r \r \r \r \r \r \r D \u00b7 \ud835\udc60\ud835\udc5c\ud835\udc53\ud835\udc61\ud835\udc5a\ud835\udc4e\ud835\udc65 \u0010 E\ud835\udc44AE\ud835\udc47 \ud835\udc3e \u0011 E\ud835\udc49V \ud835\udc60\ud835\udc5c\ud835\udc53\ud835\udc61\ud835\udc5a\ud835\udc4e\ud835\udc65(A) V \r \r \r \r \r \r \r \u22121 \f \f \f \f \f \f \f \f \u2264\ud835\udf00 \u00aa \u00ae \u00ae \u00ae \u00ac > 1 \u2212\ud835\udc5c(1) , (15) where Q \u2208Rmax{|U|,|I|}\u00d7\ud835\udc51and {K, V} \u2208R|I|\u00d7\ud835\udc51are the embedding matrices in the original cross-attention; A = QK\ud835\udc47 \u221a \ud835\udc51; and \ud835\udf00> 0 is an arbitrarily small constant. The proof of Theorem 1 is omitted due to page limitations. Theorem 1 implies that the probability that the two terms \ud835\udc60\ud835\udc5c\ud835\udc53\ud835\udc61\ud835\udc5a\ud835\udc4e\ud835\udc65(A) V and D\u00b7\ud835\udc60\ud835\udc5c\ud835\udc53\ud835\udc61\ud835\udc5a\ud835\udc4e\ud835\udc65 \u0010 E\ud835\udc44AE\ud835\udc47 \ud835\udc3e \u0011 E\ud835\udc49V are approximately equal approaches one asymptotically when the maximum value of |U| and |I| is sufficiently large. We are capable of bridging this theorem and our CAM-AE model by setting E\ud835\udc49V = V\ud835\udc61, E\ud835\udc3eK = K\ud835\udc61, E\ud835\udc44Q = Q(\u210e), and D as in Eq. (9), where E\ud835\udc44= E\u210e, E\ud835\udc3e= E1, and E\ud835\udc49= E1, which leads to the conclusion that the term D \u00b7 \ud835\udc60\ud835\udc5c\ud835\udc53\ud835\udc61\ud835\udc5a\ud835\udc4e\ud835\udc65 \u0010 E\ud835\udc44AE\ud835\udc47 \ud835\udc3e \u0011 E\ud835\udc49V in Eq. 
(15) is equivalent to $\mathbf{D} \cdot \mathrm{Attention}_h(\mathbf{Q}^{(h)}, \mathbf{K}_t, \mathbf{V}_t)$. From Theorem 1, one can see that the original cross-attention can be effectively approximated by our low-complexity modules in CAM-AE, which combine the cross-attention mechanism with linear transformations, thus significantly reducing the computational complexity (which is to be empirically validated later). In other words, we can control $k$ to keep the amount of computation manageable while ensuring that the embeddings generated by our model closely approximate those from the original cross-attention, especially for large $\max\{|\mathcal{U}|,|\mathcal{I}|\}$.

Additionally, to validate the scalability of the CAM-AE model, we analytically show its computational complexity during training by establishing the following theorem.

Theorem 2. The computational complexity of CF-Diff training, including both the computation time of the forward-diffusion process and the training time of the reverse-denoising process, is given by $O(\max\{|\mathcal{U}|,|\mathcal{I}|\})$.

The proof of Theorem 2 is omitted due to page limitations. From Theorem 2, one can see that the computational complexity required to train CF-Diff scales linearly with the maximum of the number of users and the number of items. This is because we are capable of considerably reducing the computation of Eq. (10) (corresponding to the cross-attention part in Figure 2) by controlling the embedding dimension $k$.

4 EXPERIMENTAL EVALUATION
In this section, we systematically conduct extensive experiments to answer the following five key research questions (RQs):
• RQ1: How much does CF-Diff improve the top-$K$ recommendation over benchmark recommendation methods?
• RQ2: How does each component in CAM-AE contribute to the recommendation accuracy?
• RQ3: How many hops in CF-Diff are beneficial to the recommendation accuracy?
• RQ4: How do key parameters of CAM-AE affect the performance of CF-Diff?
• RQ5: How scalable is CF-Diff when the size of datasets increases?

Table 1: The statistics of three datasets.
Dataset | # of users | # of items | # of interactions
MovieLens-1M | 5,949 | 2,810 | 571,531
Yelp | 54,574 | 34,395 | 1,402,736
Anime | 73,515 | 11,200 | 7,813,737

4.1 Experimental Settings
Datasets. We conduct our experiments on three real-world datasets widely adopted for evaluating the performance of recommender systems: MovieLens-1M (ML-1M)5, and two larger datasets, Yelp6 and Anime7. Table 1 provides a summary of the statistics for each dataset.
Competitors. To comprehensively demonstrate the superiority of CF-Diff, we consider nine recommendation methods, including five general benchmark CF methods (NGCF [44], LightGCN [13], SGL [45], NCL [23], and BSPM [8]) and four generative model-based recommendation methods (CFGAN [6], MultiDAE [47], RecVAE [35], and DiffRec [43]).
Performance metrics. We follow the full-ranking protocol [13] by ranking all the non-interacted items for each user. In our study, we adopt two widely used ranking metrics, Recall@$K$ (R@$K$) and NDCG@$K$ (N@$K$), where $K \in \{10, 20\}$.
Implementation details. We use the best hyperparameters of competitors and CF-Diff obtained by extensive hyperparameter tuning on the validation set. We use the Adam optimizer [17], where the batch size is selected in the range of {32, 64, 128, 256}.
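For reference, the two ranking metrics described above (Recall@K and NDCG@K under the full-ranking protocol) can be computed roughly as follows. This is a simple NumPy sketch for illustration, not the evaluation code used in the paper.

```python
import numpy as np

def recall_ndcg_at_k(scores, held_out, train_mask, k=10):
    """scores: [num_users, num_items] predicted preference scores;
    held_out: list of sets of ground-truth test item ids per user;
    train_mask: boolean [num_users, num_items], True for items seen in training
    (these are excluded from the ranking, i.e. full ranking over non-interacted items)."""
    scores = scores.astype(float).copy()
    scores[train_mask] = -np.inf
    topk = np.argsort(-scores, axis=1)[:, :k]
    recalls, ndcgs = [], []
    for u, items in enumerate(held_out):
        if not items:
            continue
        hits = np.isin(topk[u], list(items))
        recalls.append(hits.sum() / len(items))
        dcg = (hits / np.log2(np.arange(2, k + 2))).sum()
        idcg = (1.0 / np.log2(np.arange(2, min(len(items), k) + 2))).sum()
        ndcgs.append(dcg / idcg)
    return float(np.mean(recalls)), float(np.mean(ndcgs))
```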
In CF-Diff, the hyperparameters used in the diffusion model (e.g., the noise scale \ud835\udefd\ud835\udc61 and the diffusion step \ud835\udc47) essentially follow the settings in [43]. We choose the best hyperparameters in the following ranges: {1, 2, 3, 4} for the number of hops, \ud835\udc3b; {512, 1024, 2048} for the latent dimension\ud835\udc58in the attention-aided AE module; and {16, 32, 64, 128} for the expanded dimension \ud835\udc51, {1, 2, 3, 4} for the number of layers, \ud835\udc41, and {0.3, 0.5, 0.7} for \ud835\udefc\u210e\u2019s in the multi-hop cross-attention module. All experiments are carried out with Intel (R) 12-Core (TM) E5-1650 v4 CPUs @ 3.60 GHz and GPU of NVIDIA GeForce RTX 3080. The code of CF-Diff is available at https://github.com/jackfrost168/CF_Diff. 4.2 Results and Analyses In RQ1\u2013RQ3, we provide experimental results on all datasets. For RQ4, we show here only the results on ML-1M in terms of N@\ud835\udc3edue to space limitations, since the results on other datasets and metrics showed similar tendencies to those on ML-1M. Additionally, we highlight the best and second-best performers in each case of the following tables in bold and underline, respectively. 5https://grouplens.org/datasets/movielens/1m/. 6https://www.yelp.com/dataset/. 7https://www.kaggle.com/datasets/CooperUnion/anime-recommendations-database. Collaborative Filtering Based on Diffusion Models: Unveiling the Potential of High-Order Connectivity SIGIR \u201924, July 14\u201318, 2024, Washington, DC, USA Table 2: Performance comparison among CF-Diff and nine recommendation competitors for the three benchmark datasets. Here, the best and second-best performers are highlighted by bold and underline, respectively. ML-1M Yelp Anime Method R@10 R@20 N@10 N@20 R@10 R@20 N@10 N@20 R@10 R@20 N@10 N@20 NGCF 0.0864 0.1484 0.0805 0.1008 0.0428 0.0726 0.0255 0.0345 0.1924 0.2888 0.3515 0.3485 LightGCN 0.0824 0.1419 0.0793 0.0982 0.0505 0.0858 0.0312 0.0417 0.2071 0.3043 0.3937 0.3824 SGL 0.0806 0.1355 0.0799 0.0968 0.0564 0.0944 0.0346 0.0462 0.1994 0.2918 0.3748 0.3652 NCL 0.0878 0.1471 0.0819 0.1011 0.0535 0.0906 0.0326 0.0438 0.2063 0.3047 0.3915 0.3819 BSPM 0.0884 0.1494 0.0750 0.0957 0.0565 0.0932 0.0331 0.0439 0.2054 0.3103 0.4355 0.4231 CFGAN 0.0684 0.1181 0.0663 0.0828 0.0206 0.0347 0.0129 0.0172 0.1946 0.2889 0.4601 0.4289 MultiDAE 0.0769 0.1335 0.0737 0.0919 0.0531 0.0876 0.0316 0.0421 0.2142 0.3085 0.4177 0.4125 RecVAE 0.0835 0.1422 0.0769 0.0963 0.0493 0.0824 0.0303 0.0403 0.2137 0.3068 0.4105 0.4068 DiffRec 0.1021 0.1763 0.0877 0.1131 0.0554 0.0914 0.0343 0.0452 0.2104 0.3012 0.5047 0.4649 CF-Diff 0.1077 0.1843 0.0912 0.1176 0.0585 0.0962 0.0368 0.0480 0.2191 0.3155 0.5152 0.4748 4.2.1 Comparison with nine recommendation competitors (RQ1). We validate the superiority of CF-Diff over nine recommendation competitors through extensive experiments on the three datasets. Table 2 summarizes the results, and we make the following insightful observations. (1) Our CF-Diff consistently and significantly outperforms all recommendation competitors regardless of the datasets and the performance metrics. (2) The second-best performer tends to be DiffRec. Its superior performance among other generative model-based methods can be attributed to the use of diffusion models, known for their state-of-the-art performance in various fields. This enables DiffRec to more intricately recover user\u2013item interactions for recommendations compared to VAE-based CF methods. 
However, DiffRec is consistently inferior to CF-Diff, primarily because it overlooks the high-order connectivity information, which is essential for capturing crucial collaborative signals. (3) The performance gap between CF-Diff ($X$) and DiffRec ($Y$) is the largest when the Yelp dataset is used; the maximum improvement rate of 7.29% is achieved in terms of N@10, where the improvement rate (%) is given by $\frac{X-Y}{Y} \times 100$. (4) Compared with GNN-based methods (NGCF, LightGCN, SGL, and NCL) that exploit high-order connectivity information through the message passing mechanism, our CF-Diff method exhibits remarkable gains. This superiority basically stems from the capability of inherently powerful diffusion models and from avoiding the over-smoothing issue when integrating high-order connectivities. (5) CFGAN shows relatively lower accuracies compared to other generative model-based methods. This performance degradation is caused by mode collapse during GAN training, resulting in inferior recommendation outcomes.

4.2.2 Impact of components in CAM-AE (RQ2). To discover what role each component plays in the success of our learning model, CAM-AE, we conduct an ablation study by removing or replacing each component in CAM-AE.
• CAM-AE: corresponds to the original CAM-AE model.
• CAM-AE-att: removes the multi-hop cross-attention module in CAM-AE.
• CAM-AE-ae: removes the attention-aided AE in CAM-AE.
• CAM-AE-self: replaces the multi-hop cross-attention module in CAM-AE with a multi-hop self-attention module, which ignores the high-order connectivity information (by replacing $\mathbf{q}^{(h)}$ with $\mathbf{v}_t$).

Table 3: Performance comparison among CAM-AE and its three variants. Here, the best and second-best performers are highlighted by bold and underline, respectively.
Dataset | Method | R@10 | R@20 | N@10 | N@20
ML-1M | CAM-AE-att | 0.1016 | 0.1751 | 0.0873 | 0.1123
ML-1M | CAM-AE-ae | 0.1024 | 0.1732 | 0.0871 | 0.1117
ML-1M | CAM-AE-self | 0.1057 | 0.1794 | 0.0891 | 0.1144
ML-1M | CAM-AE | 0.1077 | 0.1843 | 0.0912 | 0.1176
Yelp | CAM-AE-att | 0.0553 | 0.0905 | 0.0342 | 0.0448
Yelp | CAM-AE-ae | OOM | OOM | OOM | OOM
Yelp | CAM-AE-self | 0.0574 | 0.0952 | 0.0355 | 0.0469
Yelp | CAM-AE | 0.0585 | 0.0962 | 0.0368 | 0.0480
Anime | CAM-AE-att | 0.2091 | 0.3024 | 0.5023 | 0.4623
Anime | CAM-AE-ae | OOM | OOM | OOM | OOM
Anime | CAM-AE-self | 0.2112 | 0.3094 | 0.5079 | 0.4678
Anime | CAM-AE | 0.2191 | 0.3155 | 0.5152 | 0.4748

The performance comparison among the original CAM-AE and its three variants is presented in Table 3 with respect to R@$K$ and N@$K$ on the three datasets. Our findings are as follows: (1) The original CAM-AE always exhibits substantial gains over the other variants, which demonstrates that each component in CAM-AE plays a crucial role in enhancing the recommendation accuracy. (2) CAM-AE outperforms CAM-AE-att, which can be attributed to the fact that the multi-hop cross-attention module is capable of infusing high-order connectivities into the proposed method to improve the performance of recommendations via CF. (3) The performance gain of CAM-AE over CAM-AE-ae is relatively higher than that of the other variants for the ML-1M dataset. Additionally, removing the attention-aided AE leads to out-of-memory (OOM) issues on the Yelp and Anime datasets, signifying its crucial role not only in extracting representations that precisely capture the underlying patterns of user–item interactions but also in maintaining the computational complexity at acceptable levels. (4) CAM-AE is superior to CAM-AE-self. This confirms that infusing high-order connectivity information enriches the collaborative signal and thus results in performance enhancement even under a diffusion-model framework.

4.2.3 The impact of multi-hop neighbors (RQ3). To investigate how many hops of neighbors in the CF-Diff method are informative, we present a variant of CF-Diff, CF-Diff-$H$, which always considers up to $H$-hop neighbors instead of optimally searching for the value of $H$ for a given dataset. The results are shown in Table 4 and our observations are as follows: (1) CF-Diff-3 outperforms CF-Diff-2 on ML-1M and Yelp, indicating that incorporating a wider range of neighboring nodes into the CAM-AE model can positively influence the recommendation results through CF. (2) CF-Diff-2 shows the highest recommendation accuracy on Anime. This means that 2-hop neighbors sufficiently capture the collaborative signal, and there is no need to exploit higher-order connectivity information in this dataset. (3) Notably, there is a decline in the performance of CF-Diff-4, because infusing 4-hop neighbors introduces an excess of global connectivity information. This surplus information potentially acts as noise, thereby interfering with personalized recommendations.

Table 4: Performance comparison according to different values of $H$. Here, the best and second-best performers are highlighted by bold and underline, respectively.
Dataset | Method | R@10 | R@20 | N@10 | N@20
ML-1M | CF-Diff-2 | 0.1062 | 0.1786 | 0.0907 | 0.1164
ML-1M | CF-Diff-3 | 0.1077 | 0.1843 | 0.0912 | 0.1176
ML-1M | CF-Diff-4 | 0.1055 | 0.1764 | 0.0883 | 0.1134
Yelp | CF-Diff-2 | 0.0572 | 0.0935 | 0.0351 | 0.0462
Yelp | CF-Diff-3 | 0.0585 | 0.0962 | 0.0368 | 0.0480
Yelp | CF-Diff-4 | 0.0561 | 0.0917 | 0.0347 | 0.0455
Anime | CF-Diff-2 | 0.2191 | 0.3155 | 0.5152 | 0.4748
Anime | CF-Diff-3 | 0.2082 | 0.3021 | 0.4998 | 0.4586
Anime | CF-Diff-4 | 0.1938 | 0.2824 | 0.4605 | 0.4236

4.2.4 The effect of hyperparameters (RQ4). We analyze the impact of key parameters of CAM-AE, including $k$, $d$, $N$, and $\alpha_h$, on the recommendation accuracy for the ML-1M dataset. In this experiment, we consider 3-hop neighbors (i.e., $H = 3$). For notational convenience, we denote $\alpha_2 = \alpha$ and $\alpha_3 = 1 - \alpha$, which signify the importance of 2-hop and 3-hop neighbors, respectively. When a hyperparameter varies so that its effect is clearly revealed, the other parameters are set to the following pivot values: $k = 500$, $d = 16$, $N = 2$, $\alpha = 0.7$. Our findings are as follows:
(Effect of $k$) From Figure 4a, the maximum N@10 and N@20 are achieved at $k = 500$ on ML-1M. This reveals that high values of $k$ degrade the performance since the resulting embeddings contain more noise, while low values of $k$ result in insufficient information during training. Hence, it is crucial to suitably determine the value of $k$ to guarantee satisfactory performance.
(Effect of $d$) From Figure 4b, the maximum N@10 and N@20 are achieved at $d = 16$ on ML-1M. Values of $d$ that are either too high or too low have a negative impact on the model's expressiveness. Thus, it is important to appropriately determine the value of $d$ depending on the dataset.
(Effect of $N$) From Figure 5a, the maximum N@10 and N@20 are achieved at $N = 2$ on ML-1M. A higher $N$ rather degrades the performance, possibly due to over-fitting. Thus, the value of $N$ should be carefully chosen based on the given dataset.
(Effect of $\alpha$) Figure 5b shows that the maximum N@10 and N@20 are achieved at $\alpha = 0.7$ on ML-1M. Tuning $\alpha$ is crucial since it directly balances between neighbors that are different numbers of hops away from the target user, which in turn affects the recommendation performance.

[Figure 4 plots: N@10 and N@20 versus $k$ (a) and versus $d$ (b) on ML-1M.]
Figure 4: The effect of hyperparameters $k$ and $d$ on N@$K$ for the ML-1M dataset.
[Figure 5 plots: N@10 and N@20 versus $N$ (a) and versus $\alpha$ (b) on ML-1M.]
Figure 5: The effect of hyperparameters $N$ and $\alpha$ on N@$K$ for the ML-1M dataset.
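The hyperparameter study above can be organized as a simple grid search over the reported ranges. The sketch below is illustrative only; `train_cf_diff` and `evaluate_ndcg` are hypothetical helpers standing in for the actual training and validation routines.

```python
from itertools import product

# Candidate values follow the ranges reported in Sec. 4.1 and the pivots above.
grid = {
    "k":     [350, 500, 650, 800, 950],
    "d":     [8, 16, 24, 32],
    "N":     [1, 2, 3, 4],
    "alpha": [0.3, 0.5, 0.7, 0.9],   # alpha_2 = alpha, alpha_3 = 1 - alpha
}

best = {"ndcg10": -1.0, "config": None}
for k, d, N, alpha in product(grid["k"], grid["d"], grid["N"], grid["alpha"]):
    model = train_cf_diff(k=k, d=d, num_layers=N, hop_weights=(alpha, 1 - alpha))  # hypothetical helper
    ndcg10 = evaluate_ndcg(model, split="valid", k=10)                              # hypothetical helper
    if ndcg10 > best["ndcg10"]:
        best = {"ndcg10": ndcg10, "config": {"k": k, "d": d, "N": N, "alpha": alpha}}

print("Best validation config:", best)
```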
4.2.5 Computational complexity (RQ5). To empirically validate the scalability of our CF-Diff method, we measure the execution time during training on synthetic datasets having user–item interactions. These interactions are generated purely at random, simulating a sparsity level of 0.99, analogous to that observed on Yelp and Anime. By setting different $|\mathcal{U}|$'s and $|\mathcal{I}|$'s, we can create user–item interactions of various sizes. More specifically, we generate two sets of user–item interactions: in the first set, we generate interactions with $|\mathcal{I}| = 1\mathrm{e}4$ and $|\mathcal{U}| \in \{1\mathrm{e}4, 3\mathrm{e}4, 4\mathrm{e}4, 6\mathrm{e}4, 7\mathrm{e}4, 8\mathrm{e}4, 9\mathrm{e}4\}$; and in the second set, we generate interactions with $|\mathcal{U}| = 1\mathrm{e}4$ and $|\mathcal{I}| \in \{1\mathrm{e}4, 4\mathrm{e}4, 6\mathrm{e}4, 8\mathrm{e}4, 12\mathrm{e}4, 16\mathrm{e}4, 20\mathrm{e}4\}$. Figure 6a (resp. Figure 6b) illustrates the execution time (in seconds) per iteration of CF-Diff, including the forward-diffusion process and the reverse-denoising process, as the number of users (resp. the number of items) increases. The dashed line indicates a linear scaling in $|\mathcal{U}|$ and $|\mathcal{I}|$, derived from Theorem 2. It can be seen that our empirical evaluation concurs with the theoretical analysis.

[Figure 6 plots: execution time (s) versus $|\mathcal{U}|$ (a) and versus $|\mathcal{I}|$ (b), with CF-Diff compared against the $O(|\mathcal{U}|)$ and $O(|\mathcal{I}|)$ reference lines.]
Figure 6: The computational complexity of CF-Diff, where the plots of the execution time versus $|\mathcal{U}|$ in Figure 6a and the execution time versus $|\mathcal{I}|$ in Figure 6b are shown.

5 RELATED WORK
In this section, we review representative methods in two broad fields of research: 1) benchmark CF methods and 2) generative model-based recommendation methods.

5.1 General Benchmark CF
The most common paradigm of CF is to factorize the user–item interaction matrix into lower-dimensional matrices [19, 28, 30]. The dot product in MF can be replaced with a multi-layer perceptron (MLP) to capture the non-linearities in the complex behavior of such interactions [7, 14].
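As a complement to the scalability study in Sec. 4.2.5 above, the random user–item interaction matrices can be constructed roughly as follows. This is an illustrative NumPy/SciPy sketch under the stated sparsity assumption; the paper's exact construction may differ.

```python
import numpy as np
from scipy.sparse import random as sparse_random

def synthetic_interactions(num_users, num_items, sparsity=0.99, seed=0):
    """Purely random binary user-item interactions with the target sparsity
    (density = 1 - sparsity), mimicking the Yelp/Anime sparsity level."""
    rng = np.random.default_rng(seed)
    mat = sparse_random(num_users, num_items, density=1.0 - sparsity,
                        format="csr", random_state=rng,
                        data_rvs=lambda n: np.ones(n))
    return mat  # scipy.sparse CSR matrix of implicit feedback

# First set: fixed |I| = 1e4 with growing |U| (the second set fixes |U| and grows |I|).
for num_users in [10_000, 30_000, 40_000, 60_000, 70_000, 80_000, 90_000]:
    R = synthetic_interactions(num_users, 10_000)
    # ...train CF-Diff on R and record the per-iteration execution time.
```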
To analyze beyond direct user connections to items, high-order connectivities are essential for understanding the user preferences, leading to the rise of GNNs in CF for modeling these complex relationships [4]. GC-MC [4] first proposed a graph AE framework for recommendations using message passing on the user\u2013item bipartite graph. NGCF [44] employed GNNs to propagate user and item embeddings on the bipartite graph capturing the collaborative signal in complex high-order connectivities. NIAGCN [38] was developed by taking into account both the relational information between neighboring nodes and the heterogeneous nature of the user\u2013item bipartite graph. LightGCN [13] improved the performance by lightweight message passing, omitting feature transformation and nonlinear activation. UltraGCN [26] advanced efficiency by skipping infinite layers of explicit message passing and directly approximating graph convolution limits with a constraint loss. BSPM [8] made a connection between the concept of blurring-sharpening process models and graph filtering [34], utilizing ordinary differential equations to model the perturbation and recovery of user\u2013item interactions. Additionally, contrastive learning was used to further improve the recommendation accuracy by taking node self-discrimination into account [23, 45]. 5.2 Generative Model-Based Recommendation GAN-based methods. Generative adversarial network (GAN)based models in CF employ a generator to estimate user\u2013item interaction probabilities, optimized through adversarial training [11, 12, 41, 46]. RecGAN [5] combined recurrent neural network (RNN) with GAN for capturing complex user\u2013item interaction patterns, while CFGAN [6] enhanced the recommendation accuracy with real-valued vector-wise adversarial learning. Nevertheless, adversarial training is often associated with training instability and mode collapse, potentially leading to suboptimal performance [1, 27]. VAE-based methods. The denoising AE (DAE) was firstly used for top-\ud835\udc3erecommendations, learning latent representations from corrupted user preferences [47]. CVAE [20] extended this by using a VAE to learn latent representations of items from ratings and multimedia content for multimedia recommendations. A series of VAE-based methods [22, 25, 35] were further developed for CF with implicit feedback, enhancing the accuracy, interpretability, and robustness by incorporating a multinomial likelihood and a Bayesian approach for user preference modeling. However, VAE-based models struggle to balance between simplicity and representations of complex data, with simpler models possibly failing to capture diverse user preferences and more complex models potentially being computationally intractable [36]. Diffusion model-based methods. Recently, diffusion models have achieved state-of-the-art performance in image generation by decomposing the image generation process into a series of DAEs. CODIGEM [40] extended this with the denoising diffusion probabilistic model (DDPM) in [15] to recommender systems, leveraging the intricate and non-linear patterns in the user\u2013item interaction matrix. Additionally, diffusion models have been successfully applied to sequential recommendations [21, 24, 48, 50]. 
Inspired by score-based generative models [37], DiffRec [43] accommodated diffusion models to predict unknown user\u2013item interactions in a denoising manner by gradually corrupting interaction histories with scheduled Gaussian noise and then recovering the original interactions iteratively through a neural network. Discussion. Despite the impressive performance of current diffusion model-based recommender systems, existing models overlook high-order user\u2013item connectivities that reveal co-preference patterns between users and items. These high-order connectivities among users and items are crucial in CF performed with limited direct user\u2013item interactions, aiding in delivering more precise and personalized recommendations. However, effectively incorporating such high-order connectivity information remains a significant challenge in diffusion model-based CF. 6 CONCLUSIONS In this paper, we explored an open yet fundamental problem of how to empower CF-based recommender systems when diffusion models are employed as a core framework for training. To tackle this challenge, we proposed CF-Diff, a diffusion model-based approach for generative recommender systems, designed to infuse high-order connectivity information into our own learning model, CAM-AE, while preserving the model\u2019s complexity at manageable levels. Through extensive experiments on three real-world benchmark datasets, we demonstrated (a) the superiority of CF-Diff over nine state-of-the-art recommendation methods while showing dramatic gains up to 7.29% in terms of NDCG@10 compared to the best competitor, (b) the theoretical findings that analytically confirm the computational tractability and scalability of CF-Diff, (c) the effectiveness of core components in CAM-AE, and (d) the impact of tuning key hyperparameters in CAM-AE. ACKNOWLEDGMENTS This work was supported by the National Research Foundation of Korea (NRF), Republic of Korea Grant by the Korean Government through MSIT under Grants 2021R1A2C3004345 and RS-202300220762 and by the Institute of Information and Communications Technology Planning and Evaluation (IITP), Republic of Korea Grant by the Korean Government through MSIT (6G Post-MAC\u2013 POsitioning and Spectrum-Aware intelligenT MAC for Computing and Communication Convergence) under Grant 2021-0-00347. SIGIR \u201924, July 14\u201318, 2024, Washington, DC, USA Yu Hou, Jin-Duk Park, & Won-Yong Shin" + }, + { + "url": "http://arxiv.org/abs/2404.12141v2", + "title": "MolCRAFT: Structure-Based Drug Design in Continuous Parameter Space", + "abstract": "Generative models for structure-based drug design (SBDD) have shown promising\nresults in recent years. Existing works mainly focus on how to generate\nmolecules with higher binding affinity, ignoring the feasibility prerequisites\nfor generated 3D poses and resulting in false positives. We conduct thorough\nstudies on key factors of ill-conformational problems when applying\nautoregressive methods and diffusion to SBDD, including mode collapse and\nhybrid continuous-discrete space. In this paper, we introduce MolCRAFT, the\nfirst SBDD model that operates in the continuous parameter space, together with\na novel noise reduced sampling strategy. Empirical results show that our model\nconsistently achieves superior performance in binding affinity with more stable\n3D structure, demonstrating our ability to accurately model interatomic\ninteractions. 
To our best knowledge, MolCRAFT is the first to achieve\nreference-level Vina Scores (-6.59 kcal/mol) with comparable molecular size,\noutperforming other strong baselines by a wide margin (-0.84 kcal/mol).", + "authors": "Yanru Qu, Keyue Qiu, Yuxuan Song, Jingjing Gong, Jiawei Han, Mingyue Zheng, Hao Zhou, Wei-Ying Ma", + "published": "2024-04-18", + "updated": "2024-04-23", + "primary_cat": "q-bio.BM", + "cats": [ + "q-bio.BM", + "cs.LG" + ], + "label": "Original Paper", + "paper_cat": "Diffusion AND Model", + "gt": "Structure-based drug design (SBDD) advances drug dis- covery by leveraging 3D structures of biological targets, thereby facilitating efficient and rational design of molecules within a certain chemical space of interests (Wang et al., 2022; Isert et al., 2023). In recent years, the generative model for molecules has emerged as a promising direction, which could streamline SBDD by directly proposing desired molecules, eliminating the need for exhaustive blind search in the vast space (Walters, 2019; Luo et al., 2021). Re- *Equal contribution 1University of Illinois Urbana-Champaign, USA 2Department of Computer Science and Technology, Ts- inghua University 3Institute for AI Industry Research (AIR), Tsinghua University 4Shanghai Institute of Materia Med- ica, Chinese Academy of Sciences. Correspondence to: Jingjing Gong , Hao Zhou . Preprint. Copyright 2024 by the author(s). cent progress in SBDD can be divided into two categories, i.e. auto-regressive models (Luo et al., 2021; Peng et al., 2022; Zhang et al., 2023) as next-token prediction for text generation, and diffusion models (Guan et al., 2022; 2023) as for image generation. The essential criteria for drug-like candidate molecules are outlined as follows: (i) high affinity towards specific binding sites (a.k.a, protein pockets), where a higher affinity indi- cates better performance, (ii) satisfactory drug-like prop- erties, such as synthesizability and drug-likeness scores, which often serve as thresholds for filtering out unfavor- able compounds (Ursu et al., 2011; Tian et al., 2015), and (iii) well-conformational 3D structure, which needs special attention for SBDD models, because they risk generating unrealistic molecular 3D conformations yet with deceptively high affinities. However, current generative models focus primarily on (i) and (ii), whereas we observe that the generated molecules often fail to meet all criteria simultaneously, especially for (iii) conformational stability. This challenge manifests as the False Positives phenomenon (FP) in generative modeling of SBDD, where models yield molecules that reside outside the true molecular manifold yet appear to exhibit good binding affinity after redocking. Specifically, these molecules suffer from distorted structure, displaying problematically unusual topology, and inferior binding mode, whereby the generated poses fail to capture true interactions and may even violate biophysical constraints, and thus go through post-fixes and significant rearrangements from docking software. Such problems threaten to jeopardize reliable model assessment, ultimately hindering their application in SBDD (Sec. 2.1). Both autoregressive and diffusion-based models exhibit chal- lenges with generating accurate molecular conformations, yet these issues stem from distinct causes. In Sec. 2.2, we delve into the mode collapse issue faced by autoregressive methods. 
Empirically, they tend to repeatedly generate a limited number of specific (sub-)structures due to an unnatural atom ordering imposed during generation. On the other hand, the problem with diffusion-based models is attributed to denoising in a hybrid yet highly twisted space, which is a blend of discrete atomic types and continuous atomic coordinates. Different modalities need to be carefully handled in the hybrid space, and lack of consideration might result in severely strained and infeasible outputs (Sec. 2.3).

[Figure 1 panels: (a) Distorted Geometry — AR, DecompDiff, TargetDiff, with per-panel strain energies of 653, 700, and 1052 kcal/mol; (b) Sub-optimal Binding — FLAG (RMSD: 5.11 Å), DecompDiff (RMSD: 7.73 Å), Pocket2Mol (RMSD: 6.66 Å); (c) Generation Failure — FLAG, TargetDiff.]
Figure 1: Typical resulting implausible molecules from generative models. (a) Unusual 3-membered rings generated by AR, large fused rings with more than 7 atoms generated by diffusion models. (b) Examples of steric clashes by FLAG, and other ligands undergoing significant conformational rearrangements upon redocking (Before: blue. After: green). (c) Failures in the generation process. Left: atoms mis-connected in autoregressive sampling. Right: incomplete molecules with multiple components.

Notably, DecompDiff (Guan et al., 2023) proposes to inject molecular inductive bias by manually decomposing ligands into arm and scaffold priors before training, and utilizes validity guidance in sampling. However, it cannot fully address the ill-conformational problem, since the inductive bias is simply impossible to enumerate. As shown in Fig. 2, for the common C-N and C-O bonds, whose typical length distributions have two modes, nearly all SBDD models struggle to fit this substructural pattern. More visualization results can be found in Fig. 8, 9, 10, Appendix D. In order to capture the complicated data manifold for molecules, we shift to a unified continuous parameter space instead of a hybrid space, inspired by Graves et al. (2023). We propose MolCRAFT (Continuous paRAmeter space Facilitated molecular generaTion), which not only alleviates the mode collapse issue by non-autoregressive generation, as in its diffusion counterparts, but also addresses the continuous-discrete gap by applying continuous noise and a smooth transformation, leading to high-affinity as well as well-conformational drug candidates. Our contributions can be summarized as follows:
• We investigate the false positive phenomena of current SBDD models, and identify several key problems including the mode collapse of autoregressive methods, and the gap of continuous-discrete space when applying diffusion models.
• We propose MolCRAFT to address these two issues, which is a unified SE-(3) equivariant generative model, equipped with sampling in the parameter space that avoids further noise.
• We conduct comprehensive evaluation under controlled molecular sizes. Experiments show that our model generates high-affinity binders with feasible 3D poses. To our best knowledge, we are the first to achieve reference-level Vina Scores (-6.59 kcal/mol, compared to reference -6.36 kcal/mol) with comparable molecule size, outperforming other strong baselines by a wide margin (-0.84 kcal/mol).", "main_content": "We provide an overview of current obstacles in pocket-based generation.
We summarize common failures in Sec. 2.1, and then investigate the underlying problems, i.e., the mode collapse issue of autoregressive models in Sec. 2.2, and the hybrid denoising issue of diffusion-based models in Sec. 2.3. Based on the aforementioned challenges, we propose to generate molecules in the continuous parameter space.

2.1. Failure Modes of Generated Molecules
As shown in Fig. 1, we divide undesired molecules in SBDD into three categories:
(a) Distorted geometry. We visualize the generated molecules at median strain energy (see Table 2); models tend to produce either too many uncommon 3- or 4-membered rings, or extra-large rings with unstable structures, leading to much higher strain energy.
(b) Inferior binding mode. We observe that a notable number of generated ligand conformations rearrange drastically after redocking, with some even violating biophysical constraints and producing steric clashes with the protein surface. This suggests that 3D SBDD models do not capture true interatomic interactions and rely on post-fixing via redocking, as noted by Harris et al. (2023), which severely harms the credibility of generating molecules directly in 3D space.
(c) Generation failure. Autoregressive models tend to misplace an element and terminate prematurely, while diffusion models might generate incomplete molecules with disconnected parts, limiting sample efficiency.
The above problems hinder the applicability of SBDD models. In the following sections, we provide a deeper understanding of the problematic methods underlying these failures.

[Figure 2 panels: bond length (Å) vs. density for Reference, AR, Pocket2Mol, FLAG, TargetDiff, DecompDiff-O, DecompDiff-R, and Ours, each showing the C-C, C:C, C-O, C-N, and C:N bond types.]
Figure 2: Bond length distribution of reference and generated molecules by autoregressive models (upper row) and non-autoregressive models (lower row) for the top-5 most frequent bond types.

Table 1: Percentage (%) of molecular modes in terms of distribution and substructures. Note: Fused refers to 80 specific rings, 3-Ring denotes three-membered rings, and so on. Highly deviated values are highlighted in bold italic.
Method | Unique | Fused | 3-Ring | 4-Ring | 5-Ring | 6-Ring
Reference | - | 30.0 | 4.0 | 0.0 | 49.0 | 84.0
Train | - | 21.6 | 3.8 | 0.6 | 56.1 | 90.9
AR | 36.2 | 39.7 | 50.8 | 0.8 | 35.8 | 71.9
Pocket2Mol | 73.7 | 52.0 | 0.3 | 0.1 | 38.0 | 88.6
FLAG | 99.7 | 42.4 | 3.1 | 0.0 | 39.9 | 84.7
TargetDiff | 99.6 | 37.8 | 0.0 | 7.3 | 57.0 | 76.1
Decomp-O | 61.6 | 13.1 | 9.0 | 11.4 | 64.0 | 83.3
Decomp-R | 50.3 | 28.1 | 5.4 | 8.3 | 51.5 | 65.6
Ours | 97.7 | 30.9 | 0.0 | 0.6 | 47.0 | 85.1

2.2.
Molecular Mode Collapse The mode collapse issue focuses on the empirical performance of SBDD methods that tend to generate a limited number of specific (sub-)structures, where atom-based autoregressive models have displayed a particular preference for certain modes. We provide quantitative results from both the chemical and geometrical perspectives. Chemical assessment is shown in Table 1. In order to measure molecular distribution, we report the percentage of unique samples (Unique) averaged on different pockets.1 It can be seen that the ratio of unique molecules of AR (Luo et al., 2021) and Pocket2Mol (Peng et al., 2022) is considerably lower than other counterparts. Moreover, DecompDiff (Guan et al., 2023) is also found to generate repeated molecules, possibly due to its use of prior clusters At the substructural level, we report the percentage of molecules with certain types of rings defined by Jiang et al. (2024), 1Here we remove all post-filters from autoregressive models that avoid generating duplicate or invalid molecules, in order to faithfully demonstrate their performances. In all other experiments, we stick to the original implementation. with respect to all ring-structured molecules. Pocket2Mol displays a preference for more fused rings as also noted by Harris et al. (2023), while AR exhibits an obvious pattern in generating repeated three-membered rings. Geometrically measured, as shown in Fig. 2, atom-based autoregressive methods model the bond lengths for different bond types similarly, where reference distribution is multimodal and varies across different types, while Pocket2Mol only captures a single mode, and for AR different bond lengths are distributed in a very similar fashion. FLAG (Zhang et al., 2023) generates fragment-by-fragment, which avoids collapsing by explicitly incorporating optimal and diverse substructures. But it suffers from more severe error accumulation, resulting in significant steric clashes and undesirable Vina Score (see Sec. 5.2). Generally speaking, autoregressive models are still trapped in sub-optimal performance. Intuitively, such limitations could be attributed to an unnatural atom ordering imposed during generation. 0.0 0.2 0.4 0.6 0.8 1.0 Generative Process 0.0 0.2 0.4 0.6 0.8 1.0 Validity ( ) 0.0 0.2 0.4 0.6 0.8 1.0 Generative Process 0.0 0.2 0.4 0.6 0.8 1.0 Completeness ( ) T argetDiff Decomp-O Decomp-R Ours Figure 3: Percentage of valid, complete molecules in the trajectories during generative process. 2.3. Hybrid Continuous-Discrete Space Diffusion-based models, on the other hand, successfully alleviate mode collapse problem via non-autoregressive generation in terms of substructural distribution (see Fig. 2). However, the inconsistency between different modalities has 3 MolCRAFT: Structure-Based Drug Design in Continuous Parameter Space SE-(3) NN \u00a0 KL-div Update \u00a0 Parameter Space ...... Add Noise Reduce Noise Add Noise Sample Space Figure 4: Overall Architecture. long troubled molecular generation models, as suggested by MolDiff (Peng et al., 2023) and EquiFM (Song et al., 2024b), where a careful design of either different noise levels or different probability paths is required. A key insight is that the hybrid continuous-discrete space poses challenges to accurately capture the complicated data manifold for molecules, where the sample space in diffusion models is exposed to high variance, and the intermediate noisy latent is very likely to go outside the manifold. 
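For reference, the per-bond-type length statistics behind Fig. 2 can be collected from generated 3D structures with RDKit roughly as follows. This is a sketch only; the file format, hydrogen handling, and type keys are assumptions and may differ from the paper's evaluation pipeline.

```python
from collections import defaultdict
from rdkit import Chem
from rdkit.Chem import rdMolTransforms

def bond_length_profile(sdf_path):
    """Return {bond type key: [lengths in Angstrom]} for all molecules in an SDF file."""
    lengths = defaultdict(list)
    for mol in Chem.SDMolSupplier(sdf_path, removeHs=True):
        if mol is None or mol.GetNumConformers() == 0:
            continue
        conf = mol.GetConformer()
        for bond in mol.GetBonds():
            i, j = bond.GetBeginAtomIdx(), bond.GetEndAtomIdx()
            a, b = sorted([mol.GetAtomWithIdx(i).GetSymbol(), mol.GetAtomWithIdx(j).GetSymbol()])
            # e.g. "C-C (AROMATIC)" vs. "C-C (SINGLE)" distinguishes C:C from C-C
            key = f"{a}-{b} ({bond.GetBondType()})"
            lengths[key].append(rdMolTransforms.GetBondLength(conf, i, j))
    return lengths
```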
Inspired by GeoBFN (Song et al., 2024a), we propose to operate within the fully continuous pamarater space, which enables considerably lower input variance and a smooth transformation towards the target distribution. To further illustrate the difference between continuousdiscrete diffusion and our fully continuous MolCRAFT, we sample 10 molecules for each of the 100 test proteins, and plot the curves of the ratio of valid molecules, complete molecules against different timesteps during sampling. As shown in Fig. 3, continuous-discrete diffusions heavily rely on the latter steps, passing a certain validity and completeness threshold in the final 60%-90% stage where noise scales are lower, while MolCRAFT approaches target distribution far earlier (in the first 20%-40% steps), thereby possessing greater capacity to progressively refine and adjust the generated feasible structures, resulting in better conformations. 3. Preliminary In this section, we briefly overview Bayesian Flow Networks (BFN) (Graves et al., 2023) in comparison with diffusion models for SBDD. For its detailed formulation and mathematical details, we refer readers to Appendix A. 3.1. Problem Definition Structure-based Drug Design (SBDD) can be formulated as a conditional generation task. Given input protein binding site P = {(x(i) P , v(i) P )}NP i=1, which contains NP atoms with each x(i) P \u2208R3 and v(i) P \u2208RDP correspond to atom coordinates and atom features, respectively (e.g., element types, backbone or side chain indicator). The output is a ligand molecule M = {(x(i) M , v(i) M )}NM i=1, where x(i) M \u2208R3 and v(i) M \u2208RDM , NM is the number of atoms in molecule. For convenience, we denote p = [xP , vP ], (xP \u2208RNP \u00d73, vP \u2208RNP \u00d7DP ) and m = [xM, vM], (xM \u2208RNM\u00d73, vM \u2208RNM\u00d7DM ) as the concatenation of all protein or ligand atoms. 3.2. Molecular Generation in Parameter Space The overall architecture of MolCRAFT are shown in Fig. 4. The generative process is viewed as message exchanges between a sender and a receiver, where the sender is only visible in sample space, and the receiver makes the guess from its understanding of samples and parameters. In every round of communication, the sender selects a molecule datapoint m, adds noise for timestep ti according to sender distribution pS(yi | m; \u03b1i), and sends the noisy latent y to receiver, resembling the forward diffusion process. Here \u03b1i is a noise factor from the schedule \u03b2(ti). The receiver, on the other hand, outputs the reconstructed molecule \u02c6 m based on its previous knowledge of parameters \u03b8, yielding output distribution pO. With the sender\u2019s noisy factor \u03b1 known, the receiver can also add noise to the estimated output and give the predicted noisy latent, arriving at receiver distribution pR. pR(yi | \u03b8i\u22121, p; ti) = E \u02c6 m\u223cpOpS(yi | \u02c6 m; \u03b1i), (1) where pO( \u02c6 m | \u03b8i\u22121, p; ti) = \u03a6(\u03b8i\u22121, p, ti). (2) \u03a6 is a neural network which is expected to reconstruct clean sample \u02c6 m given parameters \u03b8i\u22121, pocket p and time ti. The key difference between BFN and diffusion lies in its introduction of parameters. Thanks to structured Bayesian updates defined via Bayesian inference, the receiver is able to maintain fully continuous parameters and perform closedform update on its belief of parameters. 
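To make the sender/receiver view of Eqs. (1)–(2) concrete for the continuous coordinates, one communication round can be sketched as follows. This is illustrative PyTorch-style pseudocode; `Phi` denotes the SE(3)-equivariant reconstruction network and is an assumed interface, not the released implementation.

```python
import torch

def communication_round(Phi, theta_x_mu, pocket, t, x_true, alpha):
    """One sender/receiver exchange for atom coordinates.

    Sender:   y ~ N(y | x_true, alpha^{-1} I)          (noisy view of the data)
    Receiver: x_hat = Phi(theta, pocket, t), giving the
              predicted latent y ~ N(y | x_hat, alpha^{-1} I)   (Eqs. 1-2)
    Training matches the two distributions with a KL that reduces to a weighted MSE."""
    std = alpha ** -0.5
    y_sender = x_true + std * torch.randn_like(x_true)   # sample from p_S
    x_hat = Phi(theta_x_mu, pocket, t)                    # output distribution p_O (point estimate)
    # KL(p_S || p_R) between two Gaussians sharing covariance alpha^{-1} I:
    kl = 0.5 * alpha * ((x_true - x_hat) ** 2).sum(dim=-1)
    return y_sender, x_hat, kl.mean()
```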
Bayesian update distribution pU stems from the Bayesian update function h, pU(\u03b8i | \u03b8i\u22121, m, p; \u03b1i) = E y\u2032 i\u223cpS \u03b4 \u0010 \u03b8i \u2212h(\u03b8i\u22121, yi, \u03b1i) \u0011 , (3) where \u03b4(\u00b7) is Dirac delta distribution. The parameter space enables arbitrarily applying noise as long as the Bayesian update is tractable, and eliminates the need to invert a predefined forward process as in diffusion models. According to the nice additive property of accuracy (Graves et al., 2023), the Bayesian flow distribution pF could be obtained to achieve simulation-free training, once teacher 4 MolCRAFT: Structure-Based Drug Design in Continuous Parameter Space forcing with m is applied: pF (\u03b8i | m, p; ti) = E \u03b81...i\u22121\u223cpUpU(\u03b8i | \u03b8i\u22121, m, p; \u03b1i) = pU(\u03b8i | \u03b80, m, p; \u03b2(ti)) (4) Therefore, the training objective for n steps is to minimize: Ln(m, p) = E i\u223cU(1,n) E yi\u223cpS,\u03b8i\u22121\u223cpFDKL(pS \u2225pR). (5) 4. Methodology We introduce our proposed MolCRAFT in as follows: in Sec. 4.1, we demonstrate how to model continuous atom coordinates and discrete atom types within BFN framework, with the guarantee of SE-(3) equivariance for molecular data. Then in Sec. 4.2, we elaborate our novel sampling strategy tailored for the parameter space. Within the fully continuous and differentiable space, MolCRAFT is able to capture the global connection between different modalities, and sample efficiently with low variance. 4.1. Resolving Different Modalities in Parameter Space This section demonstrates how to resolve continuous atom coordinates and discrete atom types in parameter space. Unified parameter \u03b8 def := [\u03b8x, \u03b8v] Following Hoogeboom et al. (2022), continuous atom coordinates x are characterized by Gaussian distribution N(x | \u00b5, \u03c1\u22121I), and we set \u03b8x = {\u00b5, \u03c1}, where \u00b5 is learned and \u03c1 is predefined by noise factor \u03b1. The Bayesian update function {\u00b5i, \u03c1i} \u2190h( \b \u00b5i\u22121, \u03c1i\u22121 \t , yx, \u03b1i) is defined as: \u03c1i = \u03c1i\u22121 + \u03b1i (6) \u00b5i = \u00b5i\u22121\u03c1i\u22121 + yx\u03b1i \u03c1i (7) For discrete atom types v, we use a categorical distribution \u03b8v \u2208RNM\u00d7K, and update it given \u03b1\u2032 via h(\u03b8v i\u22121, yv, \u03b1\u2032 i) def := eyv\u03b8v i\u22121 PK k=1 eyv k(\u03b8v i\u22121)k (8) For prior \u03b80, we adopt standard Gaussian and uniform distribution respectively, following Graves et al. (2023). Applying noise for different modalities Thanks to the continuous nature of parameters, we are able to apply the following continuous noise even for discrete atom types, instantiating the sender distribution pS: pS(yx | xM; \u03b1) = N(yx | xM, \u03b1\u22121I) (9) pS(yv | vM; \u03b1\u2032) = N \u0010 yv | \u03b1\u2032(KevM \u22121), \u03b1\u2032KI \u0011 (10) where evM = h ev(1) M , . . . , ev(K) M i \u2208RNM\u00d7K, ej \u2208RK is the projection from the class index j to the length-K one-hot vector, and K the number of atom types. Note that we could set different noise schedules for different modalities (\u03b1 for coordinates and \u03b1\u2032 for types) for more efficient training of the joint noise prediction network. Thereby for receiver distribution in Eq. 
1, pR(yx | \u03b8x, p; t) = N(yx | \u03a6(\u03b8x, p, t), \u03b1\u22121I) (11) pR(yv | \u03b8v, p; t) = h pR \u0000(yv)(d)| \u00b7 \u0001i d=1...N, (12) where pR \u0010 (yv)(d)| \u00b7 \u0011 = P k pv O(k|\u00b7)pv S \u0010 (yv)(d)|k; \u03b1 \u0011 . SE-(3) equivariance We introduce a fundamental inductive bias for SBDD to BFN, i.e. the density should be invariant to translation and rotation of protein-molecule complex (Satorras et al., 2021; Xu et al., 2021; Hoogeboom et al., 2022), in the following proposition (proof in Appendix B). Proposition 4.1. Denote the SE-(3) transformation as Tg, the likelihood is invariant w.r.t. Tg on the protein-molecule complex: p\u03d5(Tg(m|p)) = p\u03d5(m|p) if we shift the Center of Mass (CoM) of protein atoms to zero and parameterize the output network \u03a6(\u03b8, p, t) with an SE-(3) equivariant network. 4.2. Noise Reduced Sampling in Parameter Space MolCRAFT addresses the high-variance discrete variable problem by maintaining a continuous probability mass function as beliefs of distributional parameters, which allows a smooth transformation towards the target distribution. This natural coherence with continuous coordinates gives us an advantage over continuous-discrete diffusion process. During sampling, original BFN shifts the denoising process from sample space (recall diffusion yi\u22121 \u2192yi) to parameter space (\u03b8i\u22121, yi) \u2192\u03b8i via Bayesian update function h, where the information flows in this direction: \u03b8i\u22121 \u03a6 \u2212 \u2192\u02c6 m pS \u2212 \u2192yi pU \u2212 \u2212 \u2192\u03b8i, (13) where pU(\u03b8i | \u03b8i\u22121, m, p; \u03b1i) is defined in Eq. 3, and m is set to estimated \u02c6 m drawn from pO in Eq. 2. It should be noted that the existing generative process of BFN, as well as that of diffusion models, performs continuous atom coordinates and discrete atom type sampling at each timestep. This risks introducing too much noise, and might end up generating incomplete molecules. To alleviate such a problem, we design an empirically effective sampling strategy, which operates within the parameter space, and thus avoids introducing further noise from sampling discrete variables. The graphical description becomes: 5 MolCRAFT: Structure-Based Drug Design in Continuous Parameter Space \u03b8i\u22121 \u03a6 \u2212 \u2192\u02c6 m pF \u2212 \u2212 \u2192\u03b8i (14) Specifically, denoting \u03b3(t) def := \u03b2(t) 1\u2212\u03b2(t), we update the parameter via Eq. 4, which simplifies to: pF (\u00b5 | \u02c6 x, p; t) = N \u0010 \u00b5 | \u03b3(t)\u02c6 x, \u03b3(t)(1 \u2212\u03b3(t))I \u0011 (15) pF (\u03b8v | \u02c6 v, p; t) = E N \u0000yv|\u03b2(t)(Ke\u02c6 v\u22121),\u03b2(t)KI \u0001\u03b4(\u03b8v \u2212softmax(yv)) (16) We use the estimated \u02c6 m = [\u02c6 x, \u02c6 v] (note that \u02c6 v directly takes the continuous output categorical values without sampling) to directly update parameter for the next step, bypassing the sampling of noisy data needed for Bayesian update \u03b8i = h(\u03b8i\u22121, y, \u03b1). The whole generative process happens in the parameter space except for the final step, which enjoys the advantage of lower variance and accelerates the overall generation path towards the complicated structure of molecules, with greatly improved sample quality at significantly fewer sampling steps, as shown in Fig. 7. Details of sampling are described in Algorithm 2. 5. Experiments 5.1. 
Experimental Setup Dataset We use the CrossDocked dataset (Francoeur et al., 2020a) for training and testing, which originally contains 22.5 million protein-ligand pairs, and after the RMSD-based filtering and 30% sequence identity split by Luo et al. (2021), results in 100,000 training pairs and 100 test proteins. For each test protein, we sample 100 molecules for evaluation. Baselines For autoregressive sampling-based models, we choose atom-based models AR (Luo et al., 2021), Pocket2Mol (Peng et al., 2022) and fragment-based model FLAG (Zhang et al., 2023). For diffusion-based models, we consider TargetDiff (Guan et al., 2022) and two variants of DecompDiff (Guan et al., 2023). Decomp-R uses the prior estimated from reference molecules in the test set, while Decomp-O selects the optimal prior from the reference prior and pocket prior, where the pocket prior center is predicted by AlphaSpace2 (Katigbak et al., 2020) and ligand atom number by a neural classifier. Evaluation We conduct a comprehensive evaluation of SBDD models on all 100 proteins in test set, including: \u2022 Binding Affinity. We employ AutoDock Vina (Trott & Olson, 2010) to measure binding affinity as it is a common practice (Luo et al., 2021; Peng et al., 2022; Guan et al., 2022; 2023), and report Vina Score, a direct score of generated pose, Vina Min, which scores the optimized pose after a local minimization of energy, and Vina Dock, the best possible score after re-docking, a global grid-based search optimization process. Therefore, it is highly favorable if Vina Score is close to Vina Min and Vina Dock, suggesting that the generated poses capture the 3D interaction well. \u2022 Conformation Stability. We measure the stability for ligand-only and binding complex conformation. For ligand-only, we use the Jensen-Shannon divergence (JSD) between reference and generated distributions of bond length, bond angle and torsion angle at substructure level, and for a more global view, we employ Strain Energy to evaluate the rationality of generated ligand conformation. For binding complex, we adopt Steric Clashes (Clash) to detect possible clashes in protein-ligand complex, following Harris et al. (2023). We further propose to evaluate symmetry-corrected RMSD between the generated ligand atoms and Vina redocked poses as the metric of binding mode consistency, where poses with an RMSD below 2 \u02da A is generally regarded as chemically meaningful (Alhossary et al., 2015; Hassan et al., 2017; McNutt et al., 2021). \u2022 Drug-like Properties. Drug-likeliness (QED), synthetic accessibility (SA), and diversity (Div) are adopted as molecular property metrics. \u2022 Overall. To evaluate the overall quality of generated molecules, we calculate the Binding Feasibility as the ratio of molecules with reasonable affinity (Vina Score < -2.49 kcal/mol) and stable conformation (strain energy < 836 kcal/mol, RMSD < 2 \u02da A) simultaneously, where the threshold values are set to the 95 percentile of the reference molecules. We also report Success Rate (Vina Dock < -8.18, QED > 0.25, SA > 0.59) following Long et al. (2022) and Guan et al. (2022). \u2022 Sample Efficiency. In order to make a practical comparison among non-autoregressive methods, we report the average Time and Generation Success, with the latter defined as the ratio of valid and complete molecules versus the intended number of samples. 5.2. 
Main Results Our main findings are listed as below: \u2022 MolCRAFT resembles and even surpasses the reference set in terms of binding affinity and overall feasibility, showing that we effectively learn the binding dynamics from protein-ligand complex distribution. \u2022 Non-autoregressive molecule generation could benefit from modeling in continuous parameter space, demonstrated by our performance in capturing diverse substructural modes and greatly improved conformation. \u2022 Reliable evaluation of SBDD ought to take molecule sizes into account. To achieve fair comparison, controlled experiment regarding molecule size is needed. 6 MolCRAFT: Structure-Based Drug Design in Continuous Parameter Space Table 2: Summary of different properties of reference and generated molecules under different sizes. (\u2191) / (\u2193) indicates larger / smaller is better. Top 2 results are highlighted with bold text and underlined text. Note: SE is short for Strain Energy, Div for Diversity, BF for Binding Feasibility, and SR for Success Rate. Methods Binding Affinity Conformation Stability Drug-like Properties Overall Ligand Complex Vina Score (\u2193) Vina Min (\u2193) Vina Dock (\u2193) SE (\u2193) Clash (\u2193) RMSD (\u2191) SA (\u2191) QED (\u2191) Div (\u2191) BF (\u2191) SR (\u2191) Size Avg. Med. Avg. Med. Avg. Med. 25% 75% Avg. % < 2 \u02da A Avg. Avg. Avg. (%) (%) Avg. Reference -6.36 -6.46 -6.71 -6.49 -7.45 -7.26 38 198 5.57 34.0 0.73 0.48 26.0 25.0 22.8 AR -5.75 -5.64 -6.18 -5.88 -6.75 -6.62 260 2287 4.36 36.5 0.63 0.51 0.70 16.1 6.9 17.7 Pocket2Mol -5.14 -4.70 -6.42 -5.82 -7.15 -6.79 102 373 6.10 32.0 0.76 0.57 0.69 23.8 24.4 17.7 FLAG 45.85 36.52 9.71 -2.43 -4.84 -5.56 25 4384 68.55 0.3 0.63 0.61 0.70 0.0 1.8 16.7 Ours-small -5.96 -5.89 -6.34 -6.04 -6.98 -6.63 44 275 4.77 39.5 0.74 0.52 0.74 33.3 17.4 17.8 TargetDiff -5.47 -6.30 -6.64 -6.83 -7.80 -7.91 368 13527 11.13 37.1 0.58 0.48 0.72 13.5 10.5 24.2 Decomp-R -5.19 -5.27 -6.03 -6.00 -7.03 -7.16 111 1217 7.92 24.2 0.66 0.51 0.73 14.6 14.9 21.2 Ours -6.59 -7.05 -7.24 -7.26 -7.80 -7.92 84 517 7.02 46.1 0.69 0.50 0.72 35.9 26.0 22.7 Decomp-O -5.67 -6.04 -7.04 -7.09 -8.39 -8.43 368 3876 13.76 27.2 0.61 0.45 0.68 11.1 24.5 29.4 Ours-large -6.61 -8.16 -8.14 -8.45 -9.21 -9.22 174 1079 10.87 45.0 0.62 0.46 0.61 31.1 36.6 29.4 Table 3: Summary of molecular conformation results. (\u2193) indicates smaller is better. Top 2 results are highlighted with bold text and underlined text. Note: JSD is calculated between distributions estimated from generated and reference molecules, we report the mean of all JSD values here. Methods Length (\u2193) Angle (\u2193) Torsion (\u2193) Avg. JSD Avg. JSD Avg. JSD AR 0.554 0.507 0.552 Pocket2Mol 0.485 0.482 0.459 FLAG 0.511 0.406 0.270 TargetDiff 0.382 0.435 0.400 Decomp-O 0.359 0.414 0.358 Decomp-R 0.348 0.412 0.317 Ours 0.319 0.379 0.300 Binding Affinity We report Vina metrics in Table 2. I. Our model consistently outperforms other strong baselines in affinities, achieving a reference-level Vina Score of -6.59 kcal/mol. As Vina Score directly scores the pose and Vina Min only optimizes locally, they directly measure the generated pose quality. To the best of our knowledge, MolCRAFT is the first to achieve reference-level affinity scores without significant rearrangements via redocking, which demonstrates our superiority in learning binding interactions for SBDD. II. Vina Dock can potentially be hacked by generating larger molecules. 
Intuitively, larger molecules have more chances of forming interactions with protein surfaces. With the largest molecule sizes, Decomp-O achieves the second-best Vina Dock (-8.39 kcal/mol), far better than reference molecules. Further investigation reveals that Decomp-O gains an advantage by producing considerably larger out-of-distribution (OOD) molecules and thereby brings up the highest possible affinity post-docking. For a fair comparison, we report variants of DecompDiff and MolCRAFT stratified by size, and with the same number of atoms as Decomp-O, our model consistently achieves SOTA affinities, underscoring its robustness across different molecular sizes.

Conformation Stability. We report the average Jensen-Shannon divergence (JSD) at the substructural level between reference and generated bond length, bond angle, and torsion angle distributions in Table 3 (detailed results for different bond/angle/torsion types are in Appendix D). At the global structure level, we report strain energy for ligand-only conformational stability, and measure clashes in the binding complex, together with RMSD between generated and redocked poses, in Table 2. I. Our model excels in modeling diverse local modes, and ranks first in bond length and angle distributions. Moreover, Fig. 2 shows MolCRAFT is the only model that captures two distinct modes for the multi-modal C-C, C-N and C-O bonds, justifying our choice of modeling in the joint continuous parameter space. More results are in Fig. 8, 9 and 10. II. Injecting substructural inductive bias helps to capture more modes. The fragment-based model FLAG displays the best torsion angle distribution, and prior-enhanced DecompDiff also exhibits relatively competitive performance in modeling molecular geometries, whereas other autoregressive models collapse into certain modes as in Fig. 2. III. For ligand-only stability, we greatly improve upon the strained conformations, even surpassing autoregressive methods. According to Table 2, our model is at least an order of magnitude better than its diffusion-based counterparts, and is close to the reference. While autoregressive methods generally display better strain energy, MolCRAFT still achieves superior performance under comparable molecule sizes. IV. Our binding complexes contain fewer clashes and remain consistent after redocking. We achieve few steric clashes and the best RMSD performance, which means 46% of our molecules already resemble accurate docking poses even without force-field optimization or redocking, rendering our method reliable for generating molecules in 3D space. The reason why we achieve even better RMSD than the reference could be explained by a distribution shift from the training set to the reference set.

[Figure 5 plot: Generation Success (x-axis) vs. No. Generated Samples Per Second (y-axis) for Ours, AR, Pocket2Mol, FLAG, TargetDiff, Decomp-O, and Decomp-R.]
Figure 5: Sample efficiency, where Generation Success means the generated molecules are both valid and complete.

In the construction of the dataset2, the training set contains 52.4% docked molecules, while the test set only contains 37.0% docked ones, which aligns with the observation that only 34.0% of reference molecules have RMSD < 2 Å upon redocking. This accounts for why MolCRAFT has more consistent and high-affinity binders: it effectively captures the training set distribution and learns the binding dynamics.
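The JSD values reported in Table 3 above can be computed from empirical histograms, e.g. with SciPy. The sketch below is illustrative; the binning and value range are assumptions, not the paper's exact settings.

```python
import numpy as np
from scipy.spatial.distance import jensenshannon

def jsd(reference_values, generated_values, bins=100, value_range=(1.0, 2.0)):
    """JSD between two empirical distributions, e.g. bond lengths in Angstrom."""
    p, _ = np.histogram(reference_values, bins=bins, range=value_range, density=True)
    q, _ = np.histogram(generated_values, bins=bins, range=value_range, density=True)
    p = p / p.sum() if p.sum() > 0 else p
    q = q / q.sum() if q.sum() > 0 else q
    # scipy returns the JS *distance* (square root of the divergence), so square it
    return jensenshannon(p, q, base=2) ** 2
```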
Overall We report the overall feasible rate and success rate in Table 2. MolCRAFT achieves the best among all, demonstrating our competency in generating molecules with high affinity and stable conformation. Our method captures the interatomic interactions in 3D space, and proposes desirable molecules without relying on post-fixed docking poses. This further validates our choice of learning in the continuous parameter space. Sampling performance We compare the generation speed (average time for generating 100 samples) and generation success in Figure 5. We achieve SOTA sampling performance in both dimensions, generating more complete (96.7%) molecules at 30\u00d7 speedup. While it takes on average 3428s and 6189s for TargetDiff and DecompDiff to generate 100 samples respectively, our model only uses 141s, thanks to our improved sampling strategy (see Sec. 5.3). 5.3. Ablation Study of Sampling Strategy Considering that we propose the first-of-its-kind SBDD model that operates in the fully continuous parameter space, and present a noise-reduced sampling approach adapted to the space, we conduct ablation study that validates our design, showing a performance boost from Vina Score/Min of -5.42/-6.30 kcal/mol to -6.51/-7.13 kcal/mol. 2There are two kinds of 3D ligand poses in the dataset, i.e. Vina minimized poses in the given receptor, and Vina docked poses. https://github.com/gnina/models/ tree/master/data/CrossDocked2020 We test different sampling strategies with different steps for the same checkpoint, and sample 10 molecules each for 100 test proteins. We plot the curves of QED, SA, Completeness (\u2191) and Vina Score (\u2193) in Figure 7, Appendix D.2. As the sampling step increases to training steps, we found the original sampling strategy exhibits first enhanced then slightly decreased sample quality, possibly because the update of parameters is smoothed or oversmoothed by finer partitioned noise factor \u03b1, whereas the noise reduced strategy displays this tendency far earlier and generates the best quality of molecules with fewer sampling steps, indicating its high efficiency. Considering the overall sample quality, we decide to use 100 sampling steps for our model, which is 10\u00d7 faster than sampling at original 1000 training steps. 6. Related Work Target-Aware Molecule Generation Trained on proteinligand complex data, target-aware methods directly model the interaction between protein pockets and ligands. Early attempts are based on 1D SMILES or 2D molecular graph generation (Bjerrum & Threlfall, 2017; G\u00b4 omez-Bombarelli et al., 2018; Segler et al., 2018) and fail to consider spatial information. Recent works focus on 3D molecule generation, and there are mainly two fashions: (1) Autoregressive methods. For atom-based methods, LiGAN (Masuda et al., 2020) and AR (Luo et al., 2021) adopt an atomic density grid view of molecules, the former predicting a voxelized density grid and performing optimization to reconstruct atom types and coordinates, the latter assigning atomic probability to each voxel and utilizes MCMC to generate atom-by-atom. GraphBP (Liu et al., 2022) uses normalizing flow and encodes the context to preserve 3D geometric equivariance, and Pocket2Mol (Peng et al., 2022) further adds bond generation for more realistic molecular structure. For fragment-based methods (Powers et al., 2022; Zhang & Liu, 2023; Zhang et al., 2023), molecules are decomposed into chemically meaningful motifs rather than seperated atom point cloud, and generated via motif assembling. 
(2) Diffusion-based methods have recently been proposed, aiming to overcome the problem of sampling efficiency and unnatural ordering brought by autoregressive fashion (Schneuing et al., 2022; Guan et al., 2022; 2023). But these methods still suffer from false positive problems. 7. Conclusion In this paper, we first investigate the challenges of current generative models in SBDD, i.e., distorted structures and sub-optimal binding modes. Based on the observations concerning mode collapse and hybrid space, we propose MolCRAFT, an SE-(3) equivariant generative model operating in the continuous parameter space with a noise reduced sampling strategy, which yields higher quality molecules. 8 MolCRAFT: Structure-Based Drug Design in Continuous Parameter Space Broader Impact This paper is aimed to facilitate in-silico rational drug design. Potential society consequences include mal-intended usage of toxic compound discovery, which needs support from professional wet labs and thus expensive to reach. Therefore we do not possess a negative vision that this might lead to serious ethical consequences, though we are aware of such a possibility." + }, + { + "url": "http://arxiv.org/abs/2404.15677v2", + "title": "CharacterFactory: Sampling Consistent Characters with GANs for Diffusion Models", + "abstract": "Recent advances in text-to-image models have opened new frontiers in\nhuman-centric generation. However, these models cannot be directly employed to\ngenerate images with consistent newly coined identities. In this work, we\npropose CharacterFactory, a framework that allows sampling new characters with\nconsistent identities in the latent space of GANs for diffusion models. More\nspecifically, we consider the word embeddings of celeb names as ground truths\nfor the identity-consistent generation task and train a GAN model to learn the\nmapping from a latent space to the celeb embedding space. In addition, we\ndesign a context-consistent loss to ensure that the generated identity\nembeddings can produce identity-consistent images in various contexts.\nRemarkably, the whole model only takes 10 minutes for training, and can sample\ninfinite characters end-to-end during inference. Extensive experiments\ndemonstrate excellent performance of the proposed CharacterFactory on character\ncreation in terms of identity consistency and editability. Furthermore, the\ngenerated characters can be seamlessly combined with the off-the-shelf\nimage/video/3D diffusion models. We believe that the proposed CharacterFactory\nis an important step for identity-consistent character generation. Project page\nis available at: https://qinghew.github.io/CharacterFactory/.", + "authors": "Qinghe Wang, Baolu Li, Xiaomin Li, Bing Cao, Liqian Ma, Huchuan Lu, Xu Jia", + "published": "2024-04-24", + "updated": "2024-04-27", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "label": "Original Paper", + "paper_cat": "Diffusion AND Model", + "gt": "In the evolving realm of text-to-image generation, diffusion models have emerged as indispensable tools for content creation [5, 26, 44]. However, the inherent stochastic nature of the generation models leads to the inability to generate consistent subjects in different contexts directly, as shown in Figure 1. Such consistency can derive many applications: illustrating books and stories, creating brand ambassador, movie making, developing presentations, art design, identity-consistent data construction and more. 
Subject-driven methods work by either representing a user- specific image as a new word [6, 18, 35] or learning image fea- ture injection [34, 38, 42] for consistent image generation. Their training paradigms typically include per-subject optimization and encoder pretraining on large-scale datasets. The former usually requires lengthy optimization for each subject and tends to overfit the appearance in the input image [11, 27]. The latter consumes significant computational costs and struggles in stably capturing the identity and its details [18, 34]. However, these methods attempt to produce images with the same identity as the reference images, instead of creating a new character in various contexts. A feasible way is that a text-to-image model is used in advance to create a new character\u2019s image and then subject-driven methods are adopted to produce images with consistent identity. Such a two-stage work- flow could push the pretrained generation model away from its training distribution, leading to degraded generation quality and poor compatibility with other extension models. Therefore, there is a pressing need to propose a new end-to-end framework that enables consistent character generation. Here we are particularly interested in consistent image gener- ation for human. Since text-to-image models are pretrained on large-scale image-text data, which contains massive text prompts with celeb names, the models can generate identity-consistent im- ages using celeb names. These names are ideal examples for this task. Previous work [35] has revealed that the word embeddings of celeb names constitute a human-centric prior space with editability, so we decide to conduct new character sampling in this space. In this work, we propose CharacterFactory, a framework for new character creation which mainly consists of an Identity-Embedding GAN (IDE-GAN) and a context-consistent loss. Specifically, a GAN model composed of MLPs is used to map from a latent space to the celeb embedding space following the adversarial learning manner, with word embeddings of celeb names as real data and generated ones as fake. Furthermore, to enable the generated embeddings to work like the native word embeddings of CLIP [24], we constrain these embeddings to exhibit consistency when combined with di- verse contexts. Following this paradigm, the generated embeddings could be naturally inserted into CLIP text encoder, hence could be seamlessly integrated with the image/video/3D diffusion models. In addition, since IDE-GAN is composed of only MLPs as trainable parameters and accesses only the pretrained CLIP during training, it takes only 10 minutes to train and then infinite new identity em- beddings could be sampled to produce identity-consistent images for new characters during inference. The main contributions of this work are summarized as follows: 1) We for the first time propose an end-to-end identity-consistent generation framework named CharacterFactory, which is empow- ered by a vector-wise GAN model in CLIP embedding space. 2) We design a context-consistent loss to ensure that the generated pseudo identity embeddings can manifest contextual consistency. This plug-and-play regularization can contribute to other related tasks. 3) Extensive experiments demonstrate superior identity con- sistency and editability of our method. 
In addition, we show the satisfactory interpolation property and strong generalization ability with the off-the-shelf image/video/3D modules.", + "main_content": "Recent advances in diffusion models [13, 31] have shown unprecedented capabilities for text-to-image generation [21, 25, 26], and new possibilities are still emerging [4, 36]. The amazing generation performance is derived from the high-quality large-scale image-text pairs [29, 30], flourishing foundational models [5, 23], and stronger controllability design [45, 46]. Their fundamental principles are based on Denoising Diffusion Probabilistic Models (DDPMs) [13], which include a forward noising process and a reverse denoising process. The forward process adds Gaussian noise progressively to an input image, and the reverse process is modeled with a UNet trained for predicting noise. Supervised by the denoising loss, a random noise can be denoised to a realistic image by iterating the reverse diffusion process. However, due to the stochastic nature of this generation process, existing text-to-image diffusion models are not able to directly implement consistent character generation. 2.2 Consistent Character Generation Existing works on consistent character generation mainly focus on personalization for the target subject [11, 27]. Textual Inversion [11] represents the target subject as a new word embedding via optimization while freezing the diffusion model. DreamBooth [27] finetunes all weights of the diffusion model to fit only the target subject. IP-Adapter [42] designs a decoupled cross-attention mechanism for text features and image features. Celeb-Basis [43] and StableIdentity [35] use prior information from celeb names to make optimization easier and improve editability. PhotoMaker trains MLPs and the LoRA residuals of the attention layers to inject identity information [18]. But these methods attempt to produce identity-consistent images based on the reference images, instead of creating a new character. In addition, The Chosen One [1] clusters the generated images to obtain similar outputs for learning a customized model on a highly similar cluster by iterative optimization with personalized LoRA weights and word embeddings, MLPs(D) MLPs(G) Tom Cruise Will Smith Taylor Swift Angelina Jolie \u2026 Celeb Space Tokenizer & Embedding Layer Sample \ud835\udc631 \u2217 \ud835\udc632 \u2217 Real Embeddings \ud835\udc67\u2208\ud835\udc41(0, \ud835\udc3c) Fake Embeddings \ud835\udcdb\ud835\udc82\ud835\udc85\ud835\udc97 a photo of \ud835\udc601 \u2217 \ud835\udc602 \u2217 \ud835\udc601 \u2217 \ud835\udc602 \u2217 is playing the guitar In a room, \ud835\udc601 \u2217 \ud835\udc602 \u2217 opens a gift \u00b7\u00b7\u00b7 Prompt Template Tokenizer & Embedding Layer Text Transformer CLIP a photo of \u04a7 \ud835\udc631 \u2217 \u04a7 \ud835\udc632 \u2217 \u04a7 \ud835\udc631 \u2217 \u04a7 \ud835\udc632 \u2217 is playing the guitar In a room , \u04a7 \ud835\udc631 \u2217 \u04a7 \ud835\udc632 \u2217 opens \u00b7\u00b7\u00b7 \u00b7\u00b7\u00b7 \u00b7\u00b7\u00b7 \u00b7\u00b7\u00b7 \ud835\udcdb\ud835\udc84\ud835\udc90\ud835\udc8f MLPs(G) \ud835\udc67\u2208\ud835\udc41(0, \ud835\udc3c) a photo of \ud835\udc631 \u2217 \ud835\udc632 \u2217 \u00b7\u00b7\u00b7 Tokenizer & Embedding Layer Prompts Text Transformer UNet Insert Insert \u00b7\u00b7\u00b7 (a) Training (b) Inference \u00b7\u00b7\u00b7 Contextual Embeddings Noise Add AdaIN Figure 2: Overview of the proposed CharacterFactory. 
(a) We take the word embeddings of celeb names as ground truths for identity-consistent generation and train a GAN model constructed by MLPs to learn the mapping from \ud835\udc67to celeb embedding space. In addition, a context-consistent loss is designed to ensure that the generated pseudo identity can exhibit consistency in various contexts. \ud835\udc60\u2217 1, \ud835\udc60\u2217 2 are placeholders for \ud835\udc63\u2217 1, \ud835\udc63\u2217 2. (b) Without diffusion models involved in training, IDE-GAN can end-to-end generate embeddings that can be seamlessly inserted into diffusion models to achieve identity-consistent generation. which is a time-consuming process. ConsiStory [32] introduces a shared attention block mechanism and correspondence-based feature injection between a batch of images, but relying only on patch features lacks semantic understanding for the subject and makes the inference process complicated. Despite creating new characters, they still suffer from complicated pipelines and poor editability. 2.3 Integrating Diffusion Models and GANs Generative Adversarial Net (GAN) [12, 16] models the mapping between data distributions by adversarially training a generator and a discriminator. Although GAN-based methods have been outperformed by powerful diffusion models for image generation, they perform well on small-scale datasets [8] benefiting from the flexibility of GANs. Some methods focus on combining them to improve the optimization objective for diffusion models with GANs [37, 40, 41]. In this work, we for the first time construct a GAN model in CLIP embedding space to sample consistent identity for diffusion models. 3 METHOD To enable the text-to-image models to directly generate images with the same identity, we present a new end-to-end framework, named CharacterFactory, which produces pseudo identity embeddings that can be inserted into any contexts to achieve identity-consistent character generation, as shown in Figure 2. In this section, the background of Stable Diffusion is first briefly introduced in Section 3.1. Later, the technical details of the proposed CharacterFactory are elaborated in Section 3.2 and 3.3. Finally, our full objective is demonstrated in Section 3.4. 3.1 Preliminary In this work, we employ the pretrained Stable Diffusion [26] (denoted as SD) as the base text-to-image model. SD consists of three components: a CLIP text encoder \ud835\udc52\ud835\udc61\ud835\udc52\ud835\udc65\ud835\udc61[24], a Variational Autoencoder (VAE) (E, D) [9] and a denoising U-Net \ud835\udf16\ud835\udf03. With the text conditioning, \ud835\udf16\ud835\udf03can denoise sampled Gaussian noises to realistic images conforming to the given text prompts \ud835\udc5d. In particular, the tokenizer of \ud835\udc52\ud835\udc61\ud835\udc52\ud835\udc65\ud835\udc61sequentially divides and encodes \ud835\udc5dinto \ud835\udc59integer tokens. Subsequently, by looking up the tokenizer\u2019s dictionary, the embedding layer of \ud835\udc52\ud835\udc61\ud835\udc52\ud835\udc65\ud835\udc61retrieves a group of corresponding word embeddings \ud835\udc54= [\ud835\udc631, ..., \ud835\udc63\ud835\udc59], \ud835\udc63\ud835\udc56\u2208R\ud835\udc51. 
Then, the text transformer $\tau_{text}$ of $e_{text}$ further represents $g$ to contextual embeddings $\bar{g} = [\bar{v}_1, ..., \bar{v}_l]$, $\bar{v}_i \in \mathbb{R}^d$ with the cross-attention mechanism. And $\epsilon_\theta$ renders the content conveyed in text prompts by cross attention between $\bar{g}$ and diffusion features. 3.2 IDE-GAN Since Stable Diffusion is trained with numerous celeb photos and corresponding captions with celeb names, these names can be inserted into various contexts to generate identity-aligned images. We believe that the word embeddings of these celeb names can be considered as ground truths for identity-consistent generation. Therefore, we train an Identity-Embedding GAN (IDE-GAN) model to learn a mapping from a latent space to the celeb embedding space, $G: z \rightarrow v$, with the expectation that it can generate pseudo identity embeddings that master the identity-consistent editability, like celeb embeddings. Specifically, we employ 326 celeb names [35] which consist only of first name and last name, and encode them into the corresponding word embeddings $C \in \mathbb{R}^{326 \times 2 \times d}$ for training. (Figure 3: Effect of $\mathcal{L}_{adv}$ and $\mathcal{L}_{con}$. The images in each column are generated by a randomly sampled $z$ and two prompts according to the pipeline in Figure 2(b). The placeholders $s^*_1$, $s^*_2$ of prompts such as "$s^*_1$ $s^*_2$ is smiling" are omitted in this work for brevity. Zoom in for the best view.) In addition, we observe that adding a small noise to the celeb embeddings can still generate images with corresponding identity. Therefore, we empirically introduce random noise $\eta \sim \mathcal{N}(0, I)$ scaled by $5e{-3}$ as a data augmentation. As shown in Figure 2(a), given a latent code $z \in \mathcal{N}(0, I)$, the generator $G$ is trained to produce embeddings $[v^*_1, v^*_2]$ that cannot be distinguished from "real" (i.e., celeb embeddings) by an adversarially trained discriminator $D$. To alleviate the training difficulty of $G$, we use AdaIN to help the MLPs' output embeddings $[v'_1, v'_2]$ land more naturally into the celeb embedding space [35]: $v^*_i = \sigma(C_i)\,\frac{v'_i - \mu(v'_i)}{\sigma(v'_i)} + \mu(C_i), \quad \text{for } i = 1, 2 \quad (1)$ where $\mu(v'_i)$, $\sigma(v'_i)$ are scalars, and $\mu(C_i) \in \mathbb{R}^d$, $\sigma(C_i) \in \mathbb{R}^d$ are vectors, because each dimension of $C_i$ has a different distribution.
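As a concrete reading of Eq. (1), here is a minimal PyTorch sketch of the AdaIN step, assuming the celeb embedding tensor has shape (326, 2, d) and the raw generator output has shape (batch, 2, d). The embedding width d = 1024 and the per-slot statistics layout are our assumptions for illustration, not released code.

```python
import torch

def adain_to_celeb_space(v_prime: torch.Tensor, celeb_emb: torch.Tensor) -> torch.Tensor:
    """
    v_prime:   (B, 2, d) raw generator outputs for the two pseudo name embeddings.
    celeb_emb: (326, 2, d) word embeddings of celeb first/last names (the "real" data).
    Returns:   (B, 2, d) embeddings re-normalized into the celeb embedding statistics.
    """
    # Per-sample, per-slot scalar statistics of the generator output (Eq. 1).
    mu_v = v_prime.mean(dim=-1, keepdim=True)            # (B, 2, 1) scalars
    sigma_v = v_prime.std(dim=-1, keepdim=True) + 1e-8   # (B, 2, 1) scalars

    # Per-dimension statistics of the celeb embeddings, one set for the first-name
    # slot and one for the last-name slot.
    mu_c = celeb_emb.mean(dim=0, keepdim=True)           # (1, 2, d) vectors
    sigma_c = celeb_emb.std(dim=0, keepdim=True)         # (1, 2, d) vectors

    return sigma_c * (v_prime - mu_v) / sigma_v + mu_c

# Toy shapes only (d = 1024 is an assumption for the SD 2.1 text-encoder width):
celeb = torch.randn(326, 2, 1024)
raw = torch.randn(4, 2, 1024)
print(adain_to_celeb_space(raw, celeb).shape)  # torch.Size([4, 2, 1024])
```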
And $D$ is trained to detect the generated embeddings as "fake". This adversarial training is supervised by: $\mathcal{L}_{adv} = \mathbb{E}_{[v_1, v_2] \sim C}\big[\log D([v_1, v_2] + \eta)\big] + \mathbb{E}\big[\log\big(1 - D(G(z))\big)\big], \quad (2)$ where $G$ tries to minimize this objective and $D$ tries to maximize it. As shown in column 1 of Figure 3, the $[v^*_1, v^*_2]$ generated from $z$ can be inserted into different contextual prompts to produce human images while conforming to the given text descriptions. It indicates that $[v^*_1, v^*_2]$ have obtained editability and enough information for human character generation, and flexibility to work with other words for editing, but the setting of "Only $\mathcal{L}_{adv}$" cannot guarantee identity consistency in various contexts. 3.3 Context-Consistent Loss To enable the generated embeddings $[v^*_1, v^*_2]$ to be naturally inserted into the pretrained Stable Diffusion, they are encouraged to work as similarly as possible to normal word embeddings. CLIP, which is trained to align images and texts, could map the word corresponding to a certain subject in various contexts to similar representations. Hence, we design the context-consistent loss to encourage the generated word embeddings to own the same property. Specifically, we sample 1,000 text prompts with ChatGPT [22] for various contexts (covering expressions, decorations, actions, attributes, and backgrounds), like "Under the tree, $s^*_1$ $s^*_2$ has a picnic", and demand that the position of "$s^*_1$ $s^*_2$" in the context should be as diverse as possible. During training, we sample $N$ prompts from the collected prompt set, and use the tokenizer and embedding layer to encode them into $N$ groups of word embeddings. The generated embeddings $[v^*_1, v^*_2]$ are inserted at the position of "$s^*_1$ $s^*_2$". Then, the text transformer $\tau_{text}$ further represents them as $N$ groups of contextual embeddings, where we expect to minimize the average pairwise distance among the $\{[\bar{v}^*_1, \bar{v}^*_2]_i\}_{i=1}^{N}$: $\mathcal{L}_{con} = \frac{1}{\binom{N}{2}} \sum_{j=1}^{N-1} \sum_{k=j+1}^{N} \big\| [\bar{v}^*_1, \bar{v}^*_2]_j - [\bar{v}^*_1, \bar{v}^*_2]_k \big\|_2^2, \quad (3)$ where $N$ is 8 by default. In this way, the pseudo word embeddings $[v^*_1, v^*_2]$ generated by IDE-GAN can exhibit consistency in various contexts. A naive idea is to train the MLPs with only $\mathcal{L}_{con}$, which shows promising consistency as shown in columns 2 and 3 of Figure 3. However, because $\mathcal{L}_{con}$ only encourages consistency rather than diversity, mode collapse occurs in spite of different $z$.
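Eq. (3) is simply an average over all unordered pairs of contextual embeddings. The sketch below shows that computation, assuming the N contextual embeddings of the two placeholder tokens have already been extracted from the text transformer and stacked into an (N, 2, d) tensor; shapes are illustrative assumptions.

```python
import torch

def context_consistent_loss(ctx_emb: torch.Tensor) -> torch.Tensor:
    """
    ctx_emb: (N, 2, d) contextual embeddings of the two pseudo name tokens,
             one row per sampled prompt/context (N = 8 in the paper).
    Returns the mean squared L2 distance over all N*(N-1)/2 pairs (Eq. 3).
    """
    n = ctx_emb.shape[0]
    flat = ctx_emb.reshape(n, -1)                    # treat [v1*, v2*] jointly
    diff = flat.unsqueeze(0) - flat.unsqueeze(1)     # (N, N, 2d) pairwise differences
    sq_dist = diff.pow(2).sum(dim=-1)                # (N, N) squared L2 distances
    iu = torch.triu_indices(n, n, offset=1)          # unique pairs j < k
    return sq_dist[iu[0], iu[1]].mean()

# Sanity check: identical contextual embeddings give zero loss.
emb = torch.randn(1, 2, 1024).repeat(8, 1, 1)
print(context_consistent_loss(emb).item())  # 0.0
```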
When L\ud835\udc50\ud835\udc5c\ud835\udc5band L\ud835\udc4e\ud835\udc51\ud835\udc63work together, the proposed CharacterFactory can sample diverse context-consistent identities as shown in the column 4, 5 of Figure 3. Notably, this regularization loss is plug-and-play and can contribute to other subject-driven generation methods to learn context-consistent subject word embeddings. 3.4 Full Objective Our full objective can be expressed as: \ud835\udc3a\u2217= arg min \ud835\udc3amax \ud835\udc37\ud835\udf061L\ud835\udc4e\ud835\udc51\ud835\udc63(\ud835\udc3a, \ud835\udc37) + \ud835\udf062L\ud835\udc50\ud835\udc5c\ud835\udc5b(\ud835\udc3a,\ud835\udf0f\ud835\udc61\ud835\udc52\ud835\udc65\ud835\udc61), (4) where \ud835\udf061 and \ud835\udf062 are trade-off parameters. The discriminator \ud835\udc37\u2019s job remains unchanged, and the generator \ud835\udc3ais tasked not only to learn the properties of celeb embeddings to deceive the \ud835\udc37, but also to manifest contextual consistency in the output space of the text transformer \ud835\udf0f\ud835\udc61\ud835\udc52\ud835\udc65\ud835\udc61. Here, we emphasize two noteworthy points: \u2022 GAN for word embedding. We introduce GAN in the CLIP embedding space for the first time and leverage the subsequent network to design the context-consistent loss which can perceive the generated pseudo identity embeddings in diverse contexts. This design is similar to the thought of previous works for image generation [2, 15, 47], which have demonstrated that mixing the GAN objective and a more traditional loss such as L2 distance is beneficial. \u2022 No need diffusion-based training. Obviously, the denoising UNet and the diffusion loss which are commonly used to train diffusion-based methods, are not involved in our training process. Remarkably, the proposed IDE-GAN can seamlessly integrate with diffusion models to achieve identityconsistent generation for inference as shown in Figure 2(b). 4 EXPERIMENTS 4.1 Experimental Setting Implementation Details. We employ Stable Diffusion v2.1-base as our base model. The number of layers in the MLPs for the generator \ud835\udc3aand the discriminator \ud835\udc37are 2 and 3 respectively. The dimension of \ud835\udc67is set to 64 empirically. The batch size and learning rate are set to 1 and 5\ud835\udc52\u22125. We employ an Adam optimizer [17] with the momentum parameters \ud835\udefd1 = 0.5 and \ud835\udefd2 = 0.999 to optimize our Textual Inversion\u2020 DreamBooth\u2020 IP-Adapter\u2020 Celeb-Basis\u2020 CharacterFactory PhotoMaker\u2020 a photo of wearing headphones a photo of wearing a Christmas hat wearing a spacesuit Figure 4: Qualitative comparisons with two-stage workflows using five baselines (denoted with \u2020) for creating consistent characters. The upper left corner of the two-stage baselines is the generated image by Stable Diffusion as the input of the second stage. Two-stage workflows struggle to maintain the identity of the generated image and degrade the image quality. In comparison, the proposed CharacterFactory can generate high-quality identity-consistent character images with diverse layouts while conforming to the given text prompts (Zoom in for the best view). IDE-GAN. The trade-off parameters \ud835\udf061 and \ud835\udf062 are both 1 as default. CharacterFactory is trained with only 10 minutes for 10,000 steps on a single NVIDIA A100. The classifier-free guidance [14] scale is 8.5 for inference as default. More implementation details can be found in the supplementary material. 
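To tie Eqs. (2)-(4) together, the following is a schematic training step with the MLP-based generator and discriminator. The hidden widths, the random stand-in for the celeb embeddings, and the placeholder that mimics the frozen CLIP text transformer are assumptions for illustration; the non-saturating BCE form of the generator's adversarial term is used instead of the literal log(1 - D(G(z))) term, and the AdaIN step of Eq. (1) is omitted for brevity.

```python
import torch
import torch.nn as nn

D_EMB, Z_DIM, N_PROMPTS = 1024, 64, 8   # embedding width, latent size, prompts per step

class Generator(nn.Module):
    """2-layer MLP mapping a latent code to two pseudo word embeddings."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(Z_DIM, D_EMB), nn.LeakyReLU(0.2),
                                 nn.Linear(D_EMB, 2 * D_EMB))
    def forward(self, z):
        return self.net(z).view(-1, 2, D_EMB)

class Discriminator(nn.Module):
    """3-layer MLP scoring a concatenated pair of word embeddings as real/fake."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(2 * D_EMB, 512), nn.LeakyReLU(0.2),
                                 nn.Linear(512, 256), nn.LeakyReLU(0.2),
                                 nn.Linear(256, 1))
    def forward(self, v):
        return self.net(v.flatten(1))

G, D = Generator(), Discriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=5e-5, betas=(0.5, 0.999))
opt_d = torch.optim.Adam(D.parameters(), lr=5e-5, betas=(0.5, 0.999))
bce = nn.BCEWithLogitsLoss()
celeb_emb = torch.randn(326, 2, D_EMB)   # stand-in for the real celeb name embeddings

def contextual_embeddings(pseudo_emb):
    # Placeholder: in the real pipeline the pseudo embeddings are inserted into N
    # sampled prompts and passed through the frozen CLIP text transformer.
    noise = 0.01 * torch.randn(N_PROMPTS, 2, D_EMB)
    return pseudo_emb.unsqueeze(0).expand(N_PROMPTS, -1, -1) + noise

for step in range(2):  # two illustrative optimization steps
    real = celeb_emb[torch.randint(0, 326, (1,))] + 5e-3 * torch.randn(1, 2, D_EMB)
    fake = G(torch.randn(1, Z_DIM))  # AdaIN re-normalization (Eq. 1) omitted here

    # Discriminator update (Eq. 2): real embeddings -> 1, generated embeddings -> 0.
    loss_d = bce(D(real), torch.ones(1, 1)) + bce(D(fake.detach()), torch.zeros(1, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator update: fool D, plus the context-consistent term (same as Eq. 3 above).
    ctx = contextual_embeddings(fake.squeeze(0)).reshape(N_PROMPTS, -1)
    pair = (ctx.unsqueeze(0) - ctx.unsqueeze(1)).pow(2).sum(-1)
    iu = torch.triu_indices(N_PROMPTS, N_PROMPTS, offset=1)
    loss_g = bce(D(fake), torch.ones(1, 1)) + pair[iu[0], iu[1]].mean()
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```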
Baselines. Since the most related methods, The Chosen One [1] and ConsiStory [32] which are also designed for consistent text-toimage generation, have not released their codes yet, we compare these methods with the content provided in their papers. In addition, as we introduced in Section 1, the two-stage workflows with subject-driven methods can also create new characters. Therefore, we first use a prompt \u201ca photo of a person, facing to the camera\u201d to drive Stable Diffusion to generate images of new characters as the input of the second stage, and then use these subject-driven methods to produce character images with diverse prompts for comparison. These input images are used for subject information injection and not involved in the calculation of quantitative comparisons. These methods include the optimization-based methods: Textual Inversion [11], DreamBooth [27], Celeb-Basis [43], and the encoderbased methods: IP-Adapter [42], PhotoMaker [18]. We prioritize to use the official models released by these methods. We use the Stable Diffusion 2.1 versions of Textual Inversion and DreamBooth for fair comparison. Evaluation. The input of our method comes from random noise, so this work does not compare subject preservation for quantitative comparison. To conduct a comprehensive evaluation, we use 40 text prompts that cover decorations, actions, expressions, attributes and Table 1: Quantitative comparisons with two-stage workflows using five baselines (denoted with \u2020). \u2191indicates higher is better, and \u2193indicates that lower is better. The best results are shown in bold. We define the speed as the time it takes to create a new consistent character on a single NVIDIA A100 GPU. Obviously, CharacterFactory obtains superior performance on identity consistency, editability, trusted face diversity, image quality and speed, which are consistent with the qualitative comparisons. Methods Subject Cons.\u2191 Identity Cons.\u2191 Editability\u2191 Face Div.\u2191 Trusted Div.\u2191 Image Quality\u2193 Speed (s)\u2193 Textual Inversion\u2020 [11] 0.647 0.295 0.274 0.392 0.078 47.94 3200 DreamBooth\u2020 [27] 0.681 0.443 0.287 0.339 0.073 62.66 1500 IP-Adapter\u2020 [42] 0.853 0.447 0.227 0.192 0.096 95.25 7 Celeb-Basis\u2020 [43] 0.667 0.369 0.273 0.378 0.101 56.43 480 PhotoMaker\u2020 [18] 0.694 0.451 0.301 0.331 0.138 53.37 10 CharacterFactory 0.764 0.498 0.332 0.333 0.140 22.58 3 drinking a beer giving a talk in a conference a watercolor painting of The Chosen One ConsiStory in a studio in a meadow eating piece of cake CharacterFactory CharacterFactory Figure 5: Qualitative comparisons with the generation results in the papers of two most related methods The Chosen One [1] and ConsiStory [32]. CharacterFactory achieves comparable performance with the same prompts (Zoom in for the best view). backgrounds [18]. Overall, we use 70 identities and 40 text prompts to generate 2,800 images for each competing method. Metrics: We calculate the CLIP visual similarity (CLIP-I) between the generated results of \u201ca photo of \ud835\udc60\u2217 1 \ud835\udc60\u2217 2\u201d and other text prompts to evaluate Subject Consistency. And we calculate face similarity [7] and perceptual similarity (i.e., LPIPS) [48] between the detected face regions with the same settings to measure the Identity Consistency and Face Diversity [18, 39]. But inconsistent faces might obtain high face diversity, leading to unreliable results. 
Therefore, we also introduce the Trusted Face Diversity [35] which is calculated by the product of cosine distances from face similarity and face diversity between each pair of images, to evaluate whether the generated faces from the same identity are both consistent and diverse. We calculate the text-image similarity (CLIP-T) to measure the Editablity. In addition, we randomly sample 70 celeb names to generate images with the introduced 40 text prompts as pseudo ground truths, and calculate Fr\u00e9chet Inception Distance (FID) [20] between the generated images by competing methods and pseudo ground truths to measure the Image Quality. 4.2 Comparison with Two-Stage Workflows. Qualitative Comparison. As mentioned in Section 4.1, we randomly generate 70 character images in front view to inject identity information for two-stage workflows using subject-driven baselines (denoted with \u2020), as shown in Figure 4. PhotoMaker\u2020 [18] and Celeb-Basis\u2020 [43] are human-centric methods. The former pretrains a face encoder and LoRA residuals on large-scale datasets. The latter optimizes word embeddings to represent the target identity. But they all suffer from degraded image quality under this setting. IP-Adapter\u2020 [42] learns text-image decoupled cross attention, but fails to present \u201cChristmas hat\u201d and \u201cspacesuit\u201d. DreamBooth\u2020 [27] finetunes the whole model to adapt to the input image and tends to generate images similar to the input image. It lacks generation diversity and fails to produce the \u201cChristmas hat\u201d. Due to the stochasticity of Textual Inversion\u2020 [11]\u2019s optimization process, its identity consistency and image quality are relatively weak. Overall, two-stage workflows show decent performance for identity consistency, editability, and image quality, and they all rely on the input images and struggle to preserve the input identity. In contrast, the proposed CharacterFactory can sample pseudo identities end-toend and generate identity-consistent prompt-aligned results with high quality. Quantitative Comparison. In addition, we also provide the quantitative comparison with five baselines in Table 1. Since IP-Adapter\u2020 \ud835\udc671 \ud835\udc672 0.5\ud835\udc671 + 0.5\ud835\udc672 \u22ef \u22ef a photo of \ud835\udc601 \u2217 \ud835\udc602 \u2217 \ud835\udc601 \u2217 \ud835\udc602 \u2217 wearing headphones on a bus a photo of \ud835\udc601 \u2217 \ud835\udc602 \u2217 \ud835\udc601 \u2217 \ud835\udc602 \u2217holding a bottle of wine \ud835\udc671 \ud835\udc672 0.5\ud835\udc671 + 0.5\ud835\udc672 \u22ef \u22ef Figure 6: Interpolation property of IDE-GAN. We conduct linear interpolation between randomly sampled \ud835\udc671 and \ud835\udc672, and generate pseudo identity embeddings with IDE-GAN. To visualize the smooth variations in image space, we insert the generated embeddings into Stable Diffusion via the pipeline of Figure 2(b). The experiments in row 1, 3 are conducted with the same seeds, and row 2, 4 use random seeds (Zoom in for the best view). Table 2: Comparisons with two most related methods on the speed (i.e., time to produce consistent identity) and the forms of identity representation. In contrast, CharacterFactory is faster, and uses a more lightweight and natural form for identity representation, which ensures seamless collaboration with other modules and convenient identity reuse. 
Speed\u2193(s) Identity Representation The Chosen One [1] 1,200 LoRAs + two word embeddings Consistory [32] 49 Self-attention keys and values of reference images CharacterFactory 3 Two word embeddings tends to generate frontal faces, it obtains better subject consistency (CLIP-I) but weak editability (CLIP-T). CLIP-I mainly measures high-level semantic alignment and lacks the assessment for identity, so we further introduce the identity consistency for evaluation. Our method achieves the best identity consistency, editability and second-place subject consistency. In particular, the proposed context-consistent loss incentivizes pseudo identities to exhibit consistency in various contexts. On the other hand, our effective adversarial learning enables pseudo identity embeddings to work in Stable Diffusion as naturally as celeb embeddings, and thus outperforms PhotoMaker\u2020 (the second place) by 0.031 on editability. Textual Inversion\u2020 and Celeb-Basis\u2020 obtain good face diversity but weak trusted diversity. This is because face diversity measures whether the generated faces from the same identity are diverse in different contexts, but inconsistent identities can also be incorrectly recognized as \u201cdiverse\u201d. Therefore, trusted face diversity is introduced to evaluate whether the results are both consistent and diverse. So Textual Inversion\u2020 obtains the best face diversity, but is inferior to CharacterFactory 0.062 on trusted face diversity. For image quality (FID), the two-stage workflows directly lead to an unacceptable degradation of competing methods on image quality quantitatively. On the other hand, two-stage workflows consume more time for creating identity-consistent characters. In comparison, our end-to-end framework implements more natural generation results, the best image quality and faster inference workflow. 4.3 Comparison with Consistent-T2I Methods In addition, we compare the most related methods The Chosen One [1] and ConsiStory [32] with the content provided in their papers. These two methods are also designed for consistent character generation, but have not released the codes yet. Qualitative Comparison. As shown in Figure 5, The Chosen One uses Textual Inversion+DreamBooth-LoRA to fit the target identity, Table 3: Ablation study with Identity Consistency, Editability, Trusted Face Diversity and a proposed Identity Diversity. In addition, we also provide more parameter analysis in the supplementary material. Identity Cons. Editability Trusted Div. Identity Div. Only L\ud835\udc4e\ud835\udc51\ud835\udc63 0.078 0.299 0.013 0.965 Only L\ud835\udc50\ud835\udc5c\ud835\udc5b 0.198 0.276 0.057 0.741 Ours 0.498 0.332 0.140 0.940 but only achieves consistent face attributes, which fails to obtain better identity consistency. Besides, excessive additional parameters degrade the image quality. ConsiStory elicits consistency by using shared attention blocks to learn the subject patch features within a batch. Despite its consistent results, it lacks controllability and semantic understanding of the input subject due to its dependence on patch features, i.e., it cannot edit with abstract attributes such as age and fat/thin. In comparison, our method achieves comparable performance on identity consistency, and image quality, and even can prompt with abstract attributes as shown in Figure 1, 7. Practicality. 
As introduced in Section 2.2, The Chosen One searches a consistent character by a lengthy iterative procedure which takes about 1,200 seconds on a single NVIDIA A100 GPU, and needs to save LoRA weights+two word embeddings for each character. ConsiStory is training-free, but its inference pipeline is timeconsuming (takes about 49 seconds to produce an identity-consistent character) and requires saving self-attention keys and values of reference images for each character. In comparison, CharacterFactory is faster and more lightweight, taking only 10 minutes to train IDEGAN for sampling pseudo identity embeddings infinitely, and only takes 3 seconds to create a new character with Stable Diffusion. Besides, using two word embeddings to represent consistent identity is convenient for identity reuse and integration with other modules such as video/3D generation models. 4.4 Ablation Study In addition to the ablation results presented in Figure 3, we also conduct a more comprehensive quantitative analysis in Table 3. To evaluate the diversity of generated identities, we calculate the average pairwise face similarity between 70 generated images with \u201ca photo of \ud835\udc60\u2217 1 \ud835\udc60\u2217 2\u201d, and define (1\u2212the average similarity) as identity diversity (The lower similarity between generated identities represents higher diversity). Note that identity diversity only makes sense when there is satisfactory identity consistency. As mentioned in Section 3.2, Only L\ud835\udc4e\ud835\udc51\ud835\udc63can generate promptaligned human images (0.299 on Editability), but the generated faces from the same latent code \ud835\udc67are different (0.078 on identity consistency). This is because learning the mapping \ud835\udc67\u2192\ud835\udc63with only L\ud835\udc4e\ud835\udc51\ud835\udc63 deceives the discriminator \ud835\udc37, but still struggles to perceive contextual consistency. Only L\ud835\udc50\ud835\udc5c\ud835\udc5bis prone to mode collapse, producing similar identities for different \ud835\udc67, which manifests as weaker identity diversity (0.741). Notably, identity consistency is not significant under this setting. We attribute to the fact that direct L2 loss cannot reach the abstract objective (i.e., identity consistency). When using \u201cThis is the story about Jenny. Jenny lived in a poor family when she was a child. So, she studied hard after going to school. At the age of 25, she found a job as a programmer. Now, she is successful in her career, enjoys coffee, and feels satisfied with her life in New York.\u201d Scene 1 Scene 2 Scene 3 Scene 4 Figure 7: Story Illustration. The proposed CharacterFactory can illustrate a story with the same character. L\ud835\udc4e\ud835\udc51\ud835\udc63and L\ud835\udc50\ud835\udc5c\ud835\udc5btogether, IDE-GAN can generate diverse contextconsistent pseudo identity embeddings, thereby achieving the best quantitative scores overall. 4.5 Interpolation Property of IDE-GAN The interpolation property of GANs is that interpolations between different randomly sampled latent codes in latent space can produce semantically smooth variations in image space [28]. To evaluate whether our IDE-GAN carries this property, we randomly sample \ud835\udc671 and \ud835\udc672, and perform linear interpolation as shown in Figure 6. IDEGAN uses the interpolated latent codes to generate corresponding pseudo identity embeddings, respectively. 
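A small sketch of the interpolation step just described: two latent codes are blended and pushed through a stand-in generator to obtain pseudo identity embeddings for each blend. Decoding these embeddings into images with Stable Diffusion, as in Figure 2(b), is omitted here, and the single linear layer is only a placeholder for a trained IDE-GAN generator.

```python
import torch
import torch.nn as nn

Z_DIM, D_EMB = 64, 1024
# Stand-in for a trained IDE-GAN generator (z -> two pseudo word embeddings).
generator = nn.Sequential(nn.Linear(Z_DIM, 2 * D_EMB))

z1, z2 = torch.randn(Z_DIM), torch.randn(Z_DIM)
for alpha in torch.linspace(0.0, 1.0, steps=5):
    z = (1 - alpha) * z1 + alpha * z2            # linear interpolation in latent space
    pseudo_emb = generator(z).view(2, D_EMB)     # [v1*, v2*] for this interpolated code
    # In the full pipeline these two embeddings would replace the "s1* s2*" placeholder
    # tokens in a prompt and be decoded by Stable Diffusion (Figure 2(b)).
    print(f"alpha={alpha.item():.2f}, embedding norms={pseudo_emb.norm(dim=-1).tolist()}")
```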
Since the output space of IDE-GAN is embeddings instead of images, it cannot directly visualize the variations like traditional GANs [16, 28] in image space. So we insert these pseudo identity embeddings into Stable Diffusion to generate the corresponding images via the pipeline in Figure 2(b). As shown in Figure 6, CharacterFactory can produce continuous identity variations with the interpolations between different latent codes. And the interpolated latent codes (e.g., 0.5\ud835\udc671 + 0.5\ud835\udc672) can be chosen for further identity-consistent generation. It demonstrates that our IDE-GAN has satisfactory interpolation property and can be seamlessly integrated with Stable Diffusion. 4.6 Applications As shown in Figure 1, 7, the proposed CharacterFactory can be used directly for various downstream tasks and is capable of broader extensions such as video/3D scenarios. Story Illustration. In Figure 7, a full story can be divided into a set of text prompts for different scenes. CharacterFactory can create a new character to produce identity-consistent story illustrations. Stratified Sampling. The proposed CharacterFactory can create diverse characters, such as different genders and races. Taking the gender as an example, we can categorize celeb names into \u201cMan\u201d and \u201cWoman\u201d to train Man-IDE-GAN and Woman-IDE-GAN separately, each of which can generate only the specified gender. Our generator \ud835\udc3ais constructed with only two-layer MLPs, so that stratified sampling will not introduce excessive storage costs. More details can be found in the supplementary material. Virtual Humans in Image/Video/3D Generation. Currently, virtual human generation mainly includes 2D/3D facial reconstruction, talking-head generation and body/human movements [50], which typically rely on pre-existing images and lack scenario diversity and editability. And CharacterFactory can create new characters end-to-end and conduct identity-consistent virtual human image generation. In addition, since the pretrained Stable Diffusion 2.1 is fixed and the generated pseudo identity embeddings can be inserted into CLIP text transformer naturally, our method can collaborate with the SD-based plug-and-play modules. As shown in Figure 1, we integrate CharacterFactory with ControlNet-OpenPose [3, 46], ModelScopeT2V [33] and LucidDreamer [19] to implement identityconsistent virtual human image/video/3D generation. Identity-Consistent Dateset Construction. Some human-centric subject-driven generation methods [6, 18] construct large-scale celeb datasets for training. PhotoMaker [18] crawls celeb photos from the Internet and DreamIdentity [6] uses text prompts containing celeb names to drive Stable Diffusion to generate celeb images. Their constructed data includes only celebs, leading to a limited number of identities. Notably, the proposed CharacterFactory can use diverse text prompts to generate identity-consistent images infinitely for dataset construction. Furthermore, collaboration with the mentioned SD-based plug-and-play modules can construct identity-consistent video/3D datasets. 5 CONCLUSION In this work, we propose CharacterFactory, to unlock the end-toend identity-consistent generation ability for diffusion models. It consists of an Identity-Embedding GAN (IDE-GAN) for learning the mapping from a latent space to the celeb embedding space and a context-consistent loss for identity consistency. It takes only 10 minutes for training and 3 seconds for end-to-end inference. 
Extensive quantitative and qualitative experiments demonstrate the superiority of CharacterFactory. Besides, we also present that our method can empower many interesting applications." + }, + { + "url": "http://arxiv.org/abs/2404.06139v1", + "title": "DiffHarmony: Latent Diffusion Model Meets Image Harmonization", + "abstract": "Image harmonization, which involves adjusting the foreground of a composite\nimage to attain a unified visual consistency with the background, can be\nconceptualized as an image-to-image translation task. Diffusion models have\nrecently promoted the rapid development of image-to-image translation tasks .\nHowever, training diffusion models from scratch is computationally intensive.\nFine-tuning pre-trained latent diffusion models entails dealing with the\nreconstruction error induced by the image compression autoencoder, making it\nunsuitable for image generation tasks that involve pixel-level evaluation\nmetrics. To deal with these issues, in this paper, we first adapt a pre-trained\nlatent diffusion model to the image harmonization task to generate the\nharmonious but potentially blurry initial images. Then we implement two\nstrategies: utilizing higher-resolution images during inference and\nincorporating an additional refinement stage, to further enhance the clarity of\nthe initially harmonized images. Extensive experiments on iHarmony4 datasets\ndemonstrate the superiority of our proposed method. The code and model will be\nmade publicly available at https://github.com/nicecv/DiffHarmony .", + "authors": "Pengfei Zhou, Fangxiang Feng, Xiaojie Wang", + "published": "2024-04-09", + "updated": "2024-04-09", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "label": "Original Paper", + "paper_cat": "Diffusion AND Model", + "gt": "Image composition faces a notable hurdle in achieving a realistic output, as the foreground and background elements may exhibit substantial differences in appearance due to various factors such as brightness and contrast. To address this challenge, image harmo- nization techniques can be employed to ensure visual consistency. In essence, image harmonization entails refining the appearance of the foreground region to align seamlessly with the background. The rapid advancements in deep learning approaches [1\u201312] have con- tributed significantly to the progress of the image harmonization task. The input for the image harmonization task consists of a com- posite image and a foreground mask used to distinguish between the foreground and background, with the output being a harmo- nized image. In other words, both the input and output of the image harmonization task are in image format. Therefore, it can \u2217Corresponding Author. be viewed as an image-to-image translation task. Recently, diffu- sion models [13\u201315] have significantly advanced the progress of image-to-image translation tasks. For instance, Chitwan et al. [16] proposed Palette, which is a conditional diffusion model that es- tablishes a new SoTA on four image-to-image translation tasks, namely colorization, inpainting, uncropping, and JPEG restoration. Hshmat et al. [17] proposed SR3+, which is a diffusion-based model that achieves SoTA results on blind super-resolution task. Directly applying the above diffusion models to the image har- monization task faces the significant challenge of enormous com- putational resource consumption due to training from scratch. 
For instance, Palette is trained with a batch size of 1024 for 1M steps and SR3+ is trained with a batch size of either 256 or 512 for 1.5M steps. To address this issue, a straightforward approach is to construct an image harmonization model based on an off-the-shelf latent diffusion model [18]. Since the images generated by latent diffusion trained on large-scale datasets are mostly harmonious, the image harmonization model built on top of it can converge quickly. However, applying a pre-trained latent diffusion model to image harmonization task also faces a significant challenge, which is the reconstruction error caused by the image compression autoencoder. The latent diffusion model takes as its input a feature map of an image that has undergone KL-reg VAE encoding (compressing) process, resulting in a reduced resolution of 1/8 relative to the original image. In other words, if a 256px resolution image and mask are inputted into the latent diffusion model, it will process a feature map and mask with resolution of only 32px. This makes it difficult for the model to reconstruct the content of the image, especially in the case of faces, even if it can generate harmonious images. Jiajie et al. [19] tried to build an image harmonization model on the pre-trained Stable Diffusion model, but did not consider this issue, they could only obtain results that were significantly worse than SOTA. To address this issue, in this paper, we construct an image har- monization model called DiffHarmony based on a pre-trained latent diffusion model. DiffHarmony tends to generate harmonious but potentially blurry initial images. Therefore, we propose two sim- ple but effective strategies to enhance the clarity of the initially harmonized images. One is to resize the input image to higher reso- lution to generate images with a higher resolution during inference. The second is to introduce an additional refinement stage that uti- lizes a simple UNet-structured model to further alleviate the image distortion. Overall, the main contribution of this work is twofold. First, a method is proposed to enable the pre-trained latent diffusion models to achieve SOTA results on the image harmonization task. Secondly, a wealth of experiments are designed to analyze the arXiv:2404.06139v1 [cs.CV] 9 Apr 2024 , , Pengfei Zhou, Fangxiang Feng, and Xiaojie Wang advantages and disadvantages of applying the pre-trained latent diffusion models to the image harmonization task, providing a basis for future improvements.", + "main_content": "In this section, we first present the process of modifying a pretrained latent diffusion model, i.e. Stable Diffusion, to do image harmonization task. Then, we elucidate the techniques to mitigate image distortion issue. The overall architecture of our method is displayed as Figure 1. Figure 1: Architecture of our method. In the harmonization stage involving DiffHarmony, composite image \ud835\udc3c\ud835\udc50and foreground mask \ud835\udc40are concatenated as image condition after encoded through VAE and downsample respectively. The diffusion model performs inference, and the output is mapped back to image space through VAE decoder, resulting \u02dc \ud835\udc3c\u210e. In the refinement stage, we scale down \u02dc \ud835\udc3c\u210e, \ud835\udc3c\ud835\udc50, \ud835\udc40and concatenate them together as input to refinement model. After adding refinement model output to downscaled \u02dc \ud835\udc3c\u210e, final refined image, \ud835\udc3c\u210eis obtained. 
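As a rough sketch of the two-stage path in Figure 1, the snippet below wires a pre-trained inpainting pipeline (standing in for the fine-tuned DiffHarmony weights, which are not reproduced here) to a toy refinement module. The file names, the single-convolution refiner, and the 1024px/256px resolutions are illustrative assumptions, and the diffusers calls follow the library's standard inpainting interface rather than the authors' exact code.

```python
import torch
import torch.nn as nn
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline, EulerAncestralDiscreteScheduler
from torchvision.transforms import functional as TF

# Stage 1: harmonization. The paper fine-tunes the SD inpainting model on iHarmony4;
# the public base checkpoint is loaded here only as a stand-in for those weights.
pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16).to("cuda")
pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)

composite = Image.open("composite.jpg").convert("RGB")  # hypothetical input files
fg_mask = Image.open("mask.png").convert("L")

harmonized = pipe(
    prompt="",                       # null text: only the image conditions matter
    image=composite.resize((1024, 1024)),
    mask_image=fg_mask.resize((1024, 1024)),
    height=1024, width=1024,
    num_inference_steps=5,           # few-step sampling as reported in the paper
).images[0]

# Stage 2: refinement at 256px. The paper trains a small U-Net on the concatenation of
# the harmonized image, the composite and the mask; a single conv layer stands in for
# it here purely to show the input wiring and the residual skip connection.
refiner = nn.Conv2d(7, 3, kernel_size=3, padding=1)

h = TF.to_tensor(harmonized.resize((256, 256))).unsqueeze(0)  # (1, 3, 256, 256)
c = TF.to_tensor(composite.resize((256, 256))).unsqueeze(0)   # (1, 3, 256, 256)
m = TF.to_tensor(fg_mask.resize((256, 256))).unsqueeze(0)     # (1, 1, 256, 256)
refined = (h + refiner(torch.cat([h, c, m], dim=1))).clamp(0, 1)
```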
2.1 DiffHarmony: Adapting Stable Diffusion In typical image harmonization task setup, one needs to input a composite image \ud835\udc3c\ud835\udc50along with its corresponding foreground mask \ud835\udc40. Model output is harmonized image \ud835\udc3c\u210e. Due to this workflow, image harmonization can be categorized as conditional image generation task, thus we can try to utilize pretrained image generation model. Stable Diffusion is the most suitable choice as it\u2019s open source, pretrained on a large amount of diverse data, and already capable of generating images with reasonable content and lighting. However we need to do two adaptations : 1) add additional input \ud835\udc3c\ud835\udc50and \ud835\udc40to Stable Diffusion model ; 2) use null text input (cause text information is not available in traditional harmonization task). 2.1.1 Inpainting Variation. Referring to previous image conditioned diffusion models[20\u201322], we can extend dimension of the input channel by concatenating image conditions and noisy image input. In image harmonization, the conditions are \ud835\udc3c\ud835\udc50and \ud835\udc40. Stable Diffusion inpainting suits our needs. It incorporates additional input channels for masks and masked images and is specifically fine-tuned to do image inpainting task and, same as image harmonization, it generates new foreground content while keeping background part unchanged. 2.1.2 Null Text Input. In the actual generation process, Stable Diffusion typically employs Classifier-Free Guidance (CFG)[23] technique. To perform CFG during inference one needs to train both an unconditional denoising diffusion model \ud835\udc5d\ud835\udf03(\ud835\udc67) (parameterized as \ud835\udf16\ud835\udf03(\ud835\udc67)) and a conditional denoising diffusion model \ud835\udc5d\ud835\udf03(\ud835\udc67|\ud835\udc50) (parameterized as \ud835\udf16\ud835\udf03(\ud835\udc67|\ud835\udc50)). In practice, we use a single neural network to incorporate both. For the unconditional part, we can simply input an empty token \u2205, i.e., \ud835\udf16\ud835\udf03(\ud835\udc67) = \ud835\udf16\ud835\udf03(\ud835\udc67,\ud835\udc50= \u2205). During inference, we use the formula \u02dc \ud835\udf16\ud835\udf03(\ud835\udc67,\ud835\udc50) = (1 +\ud835\udc64) \u00b7 \ud835\udf16\ud835\udf03(\ud835\udc67,\ud835\udc50) \u2212\ud835\udc64\u00b7 \ud835\udf16\ud835\udf03(\ud835\udc67) to obtain noise estimations for each step. In image harmonization task, we utilize the unconditional part of Stable Diffusion by inputting only the image conditions while leaving the text empty. 2.2 Alleviate Image Distortion Stable Diffusion uses its VAE encoder to compress image to a lowerresolution upon which the diffusion part does training and inference. The denoised output is mapped back to image space through VAE decoder. When the image resolution is too low, severe image distortion occurs. It can lead to visibly altered object shapes or fluctuations in surface textures. Since image harmonization tasks typically use pixel-level evaluation metrics (e.g., mean squared error), these artifacts can significantly impact the model\u2019s overall performance. 2.2.1 Harmonization At Higher Resolution. We propose using higher-resolution image inputs for DiffHarmony. In previous work models are typically trained and evaluated at resolution of 256px, but we notice that the image distortion problem becomes excessively severe, which limits the upper bound of image generation quality. 
Besides, performing inference with Stable Diffusion at 256px does not yield reasonable outputs since it\u2019s trained exclusively on 512px images. So we perform inference at 512px or higher resolution. To be consistent with other models in evaluation, we subsequently scale them down to 256px. 2.2.2 Add Refinement Stage. To further mitigate the image distortion issue, we introduce an additional refinement stage to enhance the output of DiffHarmony. After harmonization stage, we got \u02dc \ud835\udc3c\u210e. Then, the refinement stage makes \u02dc \ud835\udc3c\u210esmoother and repair its texture. We also input \ud835\udc3c\ud835\udc50and \ud835\udc40together because they provide information of texture and shape in uncorrupted image. All inputs are scale down to 256px and concatenated along channel dimension. We introduce skip connection between input \u02dc \ud835\udc3c\u210eand output \ud835\udc3c\u210e, allowing model to learn the residual instead of outputing refined image directly, which accelerates training convergence. 3 EXPERIMENT 3.1 Experiment Settings 3.1.1 Dataset. We use iHarmony4[4] for training and evaluation. iHarmony4 consists of 73,146 image pairs and comprises four subsets: HAdobe5k, HFlickr, HCOCO, and Hday2night. Each sample is composed of a natural image, a foreground mask, and a composite DiffHarmony: Latent Diffusion Model Meets Image Harmonization , , Dataset Metric Composite DIH[3] S2AM[24] DoveNet[4] BargainNet[25] Intrinsic[26] RainNet[27] iS2AM[7] D-HT[6] SCS-Co[28] HDNet[10] Li[19] \ud835\udc52\ud835\udc61\ud835\udc4e\ud835\udc59. Ours HCOCO PSNR\u2191 33.94 34.69 35.47 35.83 37.03 37.16 37.08 39.16 38.76 39.88 41.04 34.33 41.25 MSE\u2193 69.37 51.85 41.07 36.72 24.84 24.92 29.52 16.48 16.89 13.58 11.60 59.55 9.22 fMSE\u2193 996.59 798.99 542.06 551.01 397.85 416.38 501.17 266.19 299.30 245.54 153.60 HAdobe5k PSNR\u2191 28.16 32.28 33.77 34.34 35.34 35.20 36.22 38.08 36.88 38.29 41.17 33.18 40.29 MSE\u2193 345.54 92.65 63.40 52.32 39.94 43.02 43.35 21.88 38.53 21.01 13.58 161.36 17.78 fMSE\u2193 2051.61 593.03 404.62 380.39 279.66 284.21 317.55 173.96 265.11 165.48 107.04 HFlickr PSNR\u2191 28.32 29.55 30.03 30.21 31.34 31.34 31.64 33.56 33.13 34.22 35.81 29.21 36.99 MSE\u2193 264.35 163.38 143.45 133.14 97.32 105.13 110.59 69.97 74.51 55.83 47.39 224.05 29.68 fMSE\u2193 1574.37 1099.13 785.65 827.03 698.40 716.60 688.40 443.65 515.45 393.72 199.59 Hday2night PSNR\u2191 34.01 34.62 34.50 35.27 35.67 35.69 34.83 37.72 37.10 37.83 38.85 34.08 38.35 MSE\u2193 109.65 82.34 76.61 51.95 50.98 55.53 57.40 40.59 53.01 41.75 31.97 122.41 24.94 fMSE\u2193 1409.98 1129.40 989.07 1075.71 835.63 797.04 916.48 590.97 704.42 606.80 502.40 Average PSNR\u2191 31.63 33.41 34.35 34.76 35.88 35.90 36.12 38.19 37.55 38.75 40.46 32.70 40.44 MSE\u2193 172.47 76.77 59.67 52.33 37.82 38.71 40.29 24.44 30.30 21.33 16.55 141.84 14.29 fMSE\u2193 1376.42 773.18 594.67 532.62 405.23 400.29 469.60 264.96 320.78 248.86 151.42 Table 1: Quantitative comparison across four sub-datasets of iHarmony4 and in general. Top two performance are shown in red and blue. \u2191means the higher the better, and \u2193means the lower the better. image. Following [4] , we split the iHarmony4 dataset into training and test sets, containing 65,742 and 7,404 image pairs respectively. 3.1.2 Implementation Detail. We trained our DiffHarmony model based on the publicly available Stable Diffusion inpainting model checkpoint on HuggingFace 1. We use the Adam optimizer with \ud835\udefd1 = 0.9, \ud835\udefd2 = 0.999. 
We employ an exponential moving average (EMA) of the model weights with a decay rate of 0.9999 and use a global batch size of 32. We initially train the model for 150,000 steps with a learning rate of 1e-5, then reduce the learning rate to 1e-6 and continue training for an additional 50,000 steps. Data augmentations including random resized crop and random horizontal flip are applied, and all images are resized to 512px. During training we use the same noise schedule as the Stable Diffusion model, but at inference we use the Euler ancestral discrete scheduler [29] to generate samples in only 5 steps. Our refinement model is based on the U-Net architecture. The harmonized images $\tilde{I}_h$ are generated at 512px resolution and then downscaled to 256px. The harmonization stage can produce diverse results for the same input, which serves as a form of data augmentation when training the refinement model. 3.1.3 Evaluation. In accordance with [4, 25, 27], we use the Peak Signal-to-Noise Ratio (PSNR), Mean Squared Error (MSE), and Foreground MSE (fMSE) metrics on the RGB channels to evaluate the harmonization results. fMSE computes the MSE only within the foreground region, providing a measure of foreground harmonization quality. 3.2 Performance Comparison 3.2.1 Qualitative Results. We conduct a detailed analysis of model performance and compare qualitatively with previous competing methods. Our method achieves better visual consistency than other approaches, as shown in Figure 2. 3.2.2 Quantitative Results. Table 1 presents the quantitative results. From Table 1, it is evident that our method achieves superior results on most of the sub-datasets. While our method exhibits slightly lower PSNR than HDNet, this may be attributed to HDNet using the ground-truth background as input during both training and inference. (Footnote 1: https://huggingface.co/runwayml/stable-diffusion-inpainting) Our method demonstrates significant performance improvements on the more challenging subsets HFlickr and Hday2night, indicating gains from pre-trained models when learning in domains with limited data. Li et al. [19] also use Stable Diffusion for the image harmonization task, but they employ a ControlNet-based [30] approach. As can be seen from Table 1, our method is far more advantageous. 3.3 Ablation Study

| inf res | refine | PSNR↑ | MSE↓ | fMSE↓ |
|---------|--------|-------|------|-------|
| 512px   | ✗      | 37.65 | 26.14 | 290.66 |
| 512px   | ✓      | 39.47 | 19.59 | 205.07 |
| 1024px  | ✗      | 40.12 | 15.56 | 166.19 |
| 1024px  | ✓      | 40.44 | 14.29 | 151.42 |

Table 2: Ablation study on different input resolutions and with/without the refinement stage. 3.3.1 Higher Resolution At Inference. Table 2 shows how overall performance changes when images of different resolutions are fed to the harmonization stage. Increasing the input resolution from 512px to 1024px yields a significant improvement in all metrics, which is reasonable, as higher-resolution inputs suffer less information compression. 3.3.2 Refinement Stage. We run inference with and without the refinement stage. As shown in Table 2, adding the refinement stage improves overall performance. The benefit is more prominent when the harmonization stage uses lower input resolutions, since the refinement stage and the higher-resolution input both aim to address image distortion and thus complement each other. 3.3.3 Randomness.
DiffHarmony is essentially a generative model, but in the harmonization task we usually do not want pixel values to vary much across runs. Therefore, we analyze the randomness of the outputs. We obtain five groups of results using five different random seeds and compute their mean and standard deviation. As shown in Table 3, the model exhibits small variance, indicating that the harmonization results generated by our method are stable.

Figure 2: Qualitative comparison on samples from the test set of iHarmony4.

| PSNR↑ | MSE↓ | fMSE↓ |
|-------|------|-------|
| 37.66 ± 0.02 | 25.44 ± 0.31 | 291.03 ± 2.08 |

Table 3: Randomness analysis. Although essentially a generative model, our method produces stable harmonized results. 3.4 Advanced Analysis A noticeable fact is that DiffHarmony uses 512px images during training, while other harmonization models are trained at a resolution of 256px. To investigate the impact of this strategy on other models, we take the current state-of-the-art model, HDNet, and train it with 512px images, resulting in HDNet512. At test time, we use 1024px images as input and then scale the harmonization results down to 256px for metric calculation. Our preliminary results show that, compared to our method, HDNet512 achieves better PSNR and fMSE but slightly worse MSE, which seems counterintuitive. We speculate that our method performs better on samples with larger foreground regions, leading to an overall improvement in MSE. To verify this hypothesis, following HDNet [10], we divide the data into three ranges based on the ratio of the foreground area to the entire image: 0%∼5%, 5%∼15%, and 15%∼100%, and calculate metrics for each range separately. Our results, shown in Table 4, reveal that our method is worse than HDNet in the 0%∼5% range but outperforms it in the 15%∼100% range. Once again, we emphasize that this arises from the higher information compression loss. However, it is possible that our method can achieve even better results with higher image resolutions or better pre-trained diffusion models.

| Model | 0%∼5% | 5%∼15% | 15%∼100% |
|-------|-------|--------|----------|
| HDNet512 | PSNR: 45.64, MSE: 3.16, fMSE: 143.93 | PSNR: 39.97, MSE: 11.33, fMSE: 129.87 | PSNR: 34.59, MSE: 47.19, fMSE: 152.01 |
| Ours | PSNR: 43.28, MSE: 4.46, fMSE: 173.10 | PSNR: 39.55, MSE: 11.90, fMSE: 126.69 | PSNR: 34.80, MSE: 40.47, fMSE: 128.45 |

Table 4: Comparison between HDNet trained with high-resolution images and our method. HDNet512 is trained with 512px images, and the inputs are 1024px images during inference, exactly the same experimental setting as our method. 4 CONCLUSION In this paper, we propose a solution that achieves state-of-the-art results on the image harmonization task based on the Stable Diffusion model. To address the compression loss caused by the VAE in latent diffusion models, we design two effective strategies: using higher-resolution images during inference and incorporating an additional refinement stage. In addition, detailed experimental analysis shows that, compared with the previous state of the art, our method has a clear advantage when the foreground area is large enough. This is strong evidence that our model's superior harmonization ability compensates for its reconstruction loss, laying a solid foundation for research on image harmonization with diffusion models."
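For completeness, a minimal sketch of the foreground-MSE (fMSE) metric reported throughout the tables above, assuming images as float arrays and a binary foreground mask; the exact scaling and averaging used by the benchmark may differ.

```python
import numpy as np

def foreground_mse(pred, target, mask):
    """fMSE: mean squared error restricted to the foreground region.

    pred, target: float arrays of shape (H, W, 3), e.g. in [0, 255].
    mask: binary array of shape (H, W), 1 for foreground pixels.
    """
    fg = mask.astype(bool)
    if fg.sum() == 0:
        return 0.0
    diff = (pred[fg] - target[fg]) ** 2  # only foreground pixels contribute
    return float(diff.mean())
```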
+ }, + { + "url": "http://arxiv.org/abs/2404.12333v1", + "title": "Customizing Text-to-Image Diffusion with Camera Viewpoint Control", + "abstract": "Model customization introduces new concepts to existing text-to-image models,\nenabling the generation of the new concept in novel contexts. However, such\nmethods lack accurate camera view control w.r.t the object, and users must\nresort to prompt engineering (e.g., adding \"top-view\") to achieve coarse view\ncontrol. In this work, we introduce a new task -- enabling explicit control of\ncamera viewpoint for model customization. This allows us to modify object\nproperties amongst various background scenes via text prompts, all while\nincorporating the target camera pose as additional control. This new task\npresents significant challenges in merging a 3D representation from the\nmulti-view images of the new concept with a general, 2D text-to-image model. To\nbridge this gap, we propose to condition the 2D diffusion process on rendered,\nview-dependent features of the new object. During training, we jointly adapt\nthe 2D diffusion modules and 3D feature predictions to reconstruct the object's\nappearance and geometry while reducing overfitting to the input multi-view\nimages. Our method outperforms existing image editing and model personalization\nbaselines in preserving the custom object's identity while following the input\ntext prompt and the object's camera pose.", + "authors": "Nupur Kumari, Grace Su, Richard Zhang, Taesung Park, Eli Shechtman, Jun-Yan Zhu", + "published": "2024-04-18", + "updated": "2024-04-18", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "label": "Original Paper", + "paper_cat": "Diffusion AND Model", + "gt": "Recently, we have witnessed an explosion of works on cus- tomizing text-to-image models [16, 23, 45, 74]. They allow us to quickly acquire visual concepts, such as personal ob- jects and favorite places, and reimagine them with new en- vironments and attributes. For instance, we can customize a model on our Teddy bear and prompt it with \u201cTeddy bear on a bench in the park.\u201d Unfortunately, customization methods lack precise camera pose control, as existing diffusion mod- els are trained purely on 2D images without ground truth camera poses. As a result, users often rely on text prompts such as \u201cfront-facing\u201d or \u201cside-facing\u201d, a tedious and un- wieldy process to control views. What if we wish to synthesize a new object, e.g., the Teddy bear in Figure 1, in a different context while con- trolling its pose? In this work, we introduce a new task: given multi-view images of the object, we customize a text- to-image model while enabling precise control of the new object\u2019s camera pose. During inference, our method offers the flexibility of conditioning the generation process on both a target pose and a text prompt. Neural rendering methods have allowed us to accurately control the 3D viewpoint of an existing scene, given multi- view images [4, 5, 39, 57]. Similarly, we seek to imagine the object from novel viewpoints but in a new scene. However, as pre-trained diffusion models, such as Latent Diffusion models [72], are built upon a purely 2D representation, con- necting the 3D neural representation of the object to the 2D internal features of the diffusion model remains challenging. In this work, we propose a new method, CustomDiffu- sion360, to bridge the gap between 3D neural capture and 2D text-to-image diffusion models by providing additional camera pose control w.r.t. 
the new custom object in 2D text- to-image models. More concretely, given multi-view images of an object, we learn to predict neural feature fields in the intermediate feature spaces of the diffusion model U-Net. To condition the generation process on a target pose, we render the features of the predicted feature fields and then fuse them with the target pose\u2019s noisy features. This new conditioning module is added to a subset of transformer layers in the pre-trained diffusion model. We only train the parameters of the new feature prediction module to preserve object identity and increase generalization. All parameters of the pre-trained model remain frozen, thus keeping our method computationally and storage efficient. We build our method on Stable Diffusion-XL [64] and show results on various object categories, such as cars, chairs, motorcycles, teddy bears, and toys. We compare our method with image editing, model customization, and NeRF editing methods. Our method maintains high alignment with the target object and poses while adhering to the user-provided text prompt. We show that directly integrating the 3D ob- ject information into the text-to-image model pipeline pro- vides performance gains relative to existing 2D methods. Our method can also be combined with other algorithms [3, 52] for applications like generating objects in different target camera poses while preserving the background, panorama synthesis, or composing multiple concepts. Our code, results, and data are available on our webpage.", + "main_content": "Text-based image synthesis. Large-scale text-to-image models [22, 34, 69, 77, 99] have become ubiquitous with their capabilities of generating photorealistic images from text prompts. This progress has been driven by the availability of large-scale datasets [80] as well as advancements in model architecture and training objectives [19, 35, 36, 63, 79]. Among them, diffusion models [30, 83, 85] have emerged as a powerful family of models that generate images by gradually denoising Gaussian noise. Their learned priors have been found useful in various applications, such as image editing [29, 52, 102] and 3D creation [47, 65]. Image editing. One crucial application enabled by the above models is image editing based on text instructions [61]. For example, SDEdit [52] exploits the denoising nature of diffusion models, guiding generation in later denoising timesteps using edit instructions while preserving the input image layout. Various works aim to improve upon this by embedding the input image into the model\u2019s latent space [38, 55, 60, 85], while some use cross-attention and self-attention mechanism for realistic and targeted edits [10, 13, 25, 29, 62]. Recently, several methods train conditional diffusion models to follow user edit instructions or spatial controls [9, 102]. However, existing methods primarily focus on changing style and appearance, while our work enables both viewpoint and appearance control. Model customization. While pre-trained models are trained to generate common objects, users often wish to synthesize images with concepts from their own lives. This has given rise to the emerging technique of model personalization or customization [23, 45, 74]. These methods aim at embedding a new concept, e.g., pet dog, toy, personal car, person, etc., into the output space of text-to-image models. This enables generating new images of the concept in unseen scenarios using the text prompt, e.g., my car in a field of sunflowers. 
To achieve this, various works fine-tune a small subset of model parameters [26, 32, 45, 89] and/or optimize text token embeddings [1, 23, 92, 104] on the few images of the new concept with different regularizations [45, 74]. More recently, several encoder-based methods have been proposed that train a model on a vast dataset of concept library [2, 24, 46, 75, 81, 90, 94], enabling faster customization during inference. However, none of the existing works allow controlling the camera pose during inference time. In contrast, given the ease of capturing multi-view images of a new concept, in this work, we ask whether we can augment the capabilities of model customization with additional control of the camera pose. View synthesis. Novel view synthesis aims to render a scene from unseen camera poses, given multi-view images. Recently, the success of volumetric rendering-based approaches like NeRF [54] have led to numerous follow-up works with better quality [4, 5], faster speed [14, 57], and fewer training views [18, 58, 86, 98]. Recent works learn generative models with large-scale multi-view data to learn generalizable representations for novel view synthesis [11, 49, 50, 78, 95, 106]. While our work draws motivation from this line of research, our goal differs we aim to enable 3D control in text-toimage personalization, rather than capturing real scenes. Recently, Cheng et al. [17] and H\u00a8 ollein et al. [31] propose adding camera pose in text-to-image diffusion models, while we focus on model customization. 3D editing. Loosely related to our work, many works have been proposed for inserting and manipulating 3D objects within 2D real photographs, using classic geometry-based approaches [15, 37, 41] or recent generative modeling techniques [53, 96, 103]. Instead of editing a single image, our work aims to \u201cedit\u201d the model weights of a pre-trained 2D diffusion model. Many recent works edit [20, 27] or generate [68, 82, 88] a 3D scene given a text prompt or an image. These methods focus on ensuring the multi-view consistency of the scene. Unlike these, we do not aim to edit a 3D multi-view consistent scene, but instead provide additional camera pose control for the new object when customizing text-to-image models. 3. Method Given multi-view images of an object, we aim to embed the object in the text-to-image diffusion model. We construct our method in order to allow the generation of new variations of the object through text prompts while providing control of the camera pose of the object. Our approach involves finetuning a pre-trained text-to-image diffusion model while conditioning it on a 3D representation of the object learned in the diffusion model\u2019s feature space. In this section, we briefly overview the diffusion model and then explain our method in detail. 3.1. Diffusion Models Diffusion models [30, 83] are a class of generative models that sample images by iterative denoising of a random Gaussian distribution. The training of the diffusion model consists of a forward Markov process, where real data x0 is gradually transformed to random noise xT \u223cN(0, I) by sequentially adding Gaussian perturbations in T timesteps, i.e., xt = \u221a\u03b1tx0 + \u221a1 \u2212\u03b1t\u03f5. 
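To make the forward process concrete, here is a minimal sketch of sampling $x_t = \sqrt{\alpha_t}\,x_0 + \sqrt{1-\alpha_t}\,\epsilon$; the beta schedule below is an illustrative placeholder, not necessarily the schedule used by the pre-trained model.

```python
import torch

def q_sample(x0, t, alphas_cumprod):
    """Sample x_t ~ q(x_t | x_0) = sqrt(alpha_t) * x_0 + sqrt(1 - alpha_t) * eps.

    x0: clean latents/images, shape (B, C, H, W).
    t: integer timesteps, shape (B,).
    alphas_cumprod: 1-D tensor of cumulative alpha products, shape (T,).
    """
    eps = torch.randn_like(x0)
    a = alphas_cumprod[t].view(-1, 1, 1, 1)            # alpha_t per sample
    xt = a.sqrt() * x0 + (1.0 - a).sqrt() * eps
    return xt, eps

# Illustrative linear-beta schedule over T = 1000 steps (placeholder values).
betas = torch.linspace(1e-4, 0.02, 1000)
alphas_cumprod = torch.cumprod(1.0 - betas, dim=0)
```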
The model is trained to learn the backward process, i.e.,

$p_\theta(x_0 \mid c) = \int \Big[ p_\theta(x_T) \prod_t p^t_\theta(x_{t-1} \mid x_t, c) \Big] \, dx_{1:T}$,   (1)

The training objective maximizes the variational lower bound, which can be simplified to a reconstruction loss:

$\mathbb{E}_{x_t, t, c, \epsilon \sim \mathcal{N}(0, I)} \big[ w_t \, \| \epsilon - \epsilon_\theta(x_t, t, c) \| \big]$,   (2)

where c can be any modality used to condition the generation process. The model is trained to predict the noise added to create the noisy input image $x_t$. During inference, we gradually denoise random Gaussian noise over a fixed set of timesteps. Various sampling strategies [35, 51, 85] have been proposed to reduce the number of sampling steps compared to the usual T = 1000 timesteps used in training.
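A minimal sketch of the simplified objective in Eq. 2: sample a timestep, noise the input, and penalize the noise-prediction error. The `eps_model` call signature and the optional timestep weighting are illustrative assumptions, not the paper's exact implementation.

```python
import torch

def diffusion_loss(eps_model, x0, c, alphas_cumprod, w=None):
    """Simplified training objective of Eq. 2: E[ w_t * || eps - eps_theta(x_t, t, c) || ]."""
    B = x0.shape[0]
    t = torch.randint(0, alphas_cumprod.shape[0], (B,), device=x0.device)
    eps = torch.randn_like(x0)
    a = alphas_cumprod[t].view(-1, 1, 1, 1)
    xt = a.sqrt() * x0 + (1.0 - a).sqrt() * eps            # forward (noising) process
    eps_pred = eps_model(xt, t, c)                         # conditional noise prediction
    err = (eps - eps_pred).pow(2).flatten(1).mean(dim=1)   # per-sample reconstruction error
    w_t = torch.ones_like(err) if w is None else w[t]      # optional timestep weighting w_t
    return (w_t * err).mean()
```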
3.2. Customization with Camera Pose Control. Model customization aims to condition the model on a new concept, given N representative images of the concept $Y = \{y_i\}_{i=1}^N$, i.e., to model $p(x \mid Y, c)$ with text prompt c. In contrast, we aim to additionally condition the model on camera pose, allowing more control in the generation process. Thus, given a set of multi-view images $\{y_i\}_{i=1}^N$ and the corresponding camera poses $\{\pi_i\}_{i=1}^N$, we seek a customized text-to-image model for the object, i.e., our goal is to learn the conditional distribution $p(x \mid \{(y_i, \pi_i)\}_{i=1}^N, c, \phi)$, where c is the text prompt and $\phi$ is the target camera pose. To achieve this, we fine-tune a pre-trained text-to-image diffusion model, which models $p(x \mid c)$, with the additional conditioning of the target pose and reference views.

Model architecture. In Figure 2, we show our architecture, with an emphasis on the added pose-conditioned transformer block. To begin, we use Stable Diffusion-XL (SDXL) [64] as the pre-trained text-to-image diffusion model in our work. It is based on the Latent Diffusion Model (LDM) [72], which is trained in an autoencoder [43] latent space. The diffusion model is a U-Net [73] consisting of encoder, middle, and decoder blocks. Each block consists of a ResNet [28], denoted as h, followed by several transformer layers [91]. Each transformer layer consists of a self-attention layer (denoted as s), followed by a cross-attention layer (denoted as g) with the text condition, and a feed-forward MLP (denoted as f). Given a feature map z, the output of an intermediate ResNet layer h in the U-Net, a standard transformer block performs $F_{standard}(z, c) = f(g(s(z), c))$. We modify the transformer layer to incorporate pose conditioning.

Figure 2. Overview. We propose a model customization method that utilizes N reference images defining the 3D structure of an object Y (we illustrate with 2 views for simplicity). We modify the diffusion model U-Net with pose-conditioned transformer blocks. Our pose-conditioned transformer block features a FeatureNeRF module, which aggregates features from the individual viewpoints to the target viewpoint $\phi$, as shown in detail in Figure 3. The rendered feature $W_y$ is concatenated with the target noisy feature $W_x$ and projected back to the original channel dimension. We use the diffusion U-Net itself to extract features of the reference images. We only fine-tune the new parameters in the linear projection layer l and the FeatureNeRF modules in the $F_{pose}$ blocks.

Pose-conditioned transformer block. To condition the model on the 3D structure of the object, we modify the transformer block to be a pose-conditioned transformer block $F_{pose}(z_0, \{z_i, \pi_i\}, c, \phi)$, where $z_0$ is the feature from the main branch, $\{z_i\}$ are the intermediate feature maps corresponding to the multi-view images, and $\{\pi_i\}$ and $\phi$ are the reference and target camera poses. To condition on the multi-view images, we learn a radiance field conditioned on reference-view features in a feed-forward manner [98]. We extract features $\{W_i \in \mathbb{R}^{H \times W \times C}\}$ from $\{z_i\}$. We use components of the pre-trained U-Net itself, $F_{standard}$, to extract these features and render them into a target pose $\phi$ using a FeatureNeRF function to obtain the 2D feature map $W_y$:

$W_i = F_{standard}(z_i, c), \quad W_y = \mathrm{FeatureNeRF}(\{W_i, \pi_i\}, c, \phi)$   (3)

FeatureNeRF. Here, we describe the aggregation of the individual 2D features $W_i$ with 3D poses $\pi_i$ into a feature map $W_y$ from pose $\phi$. Rather than learning a NeRF in a generic feature space [40, 97], our focus is on learning 3D features that the 2D diffusion model can use.

Figure 3. FeatureNeRF block. We predict volumetric features $\bar{V}$ for each 3D point in the grid using the reference features $\{W_i\}$ (Eqn. 4). Given this feature, we predict the density $\sigma$ and color rgb using a 2-layer MLP and use the predicted density to render $\hat{V}$, which has been updated with text cross-attention g. The predicted rgb is only used to compute a reconstruction loss during training.

From the target viewpoint $\phi$, for each point p on a target ray with direction d, we project the point onto the image plane of the given views $\pi_i$ and denote the projected locations as $\pi^p_i$. We then sample these coordinates on the feature map $W_i$, predict a feature for the 3D point, and aggregate the per-view features with a function $\psi$:

$V_i = \mathrm{MLP}(\mathrm{Sample}(W_i; \pi^p_i), \gamma(d), \gamma(p)), \; i = 1, \dots, N; \qquad \bar{V} = \psi(V_1, \dots, V_N)$,   (4)

where $\gamma$ is a frequency encoding. We use the weighted average [71] as the aggregation function $\psi$, where a linear layer predicts the weights based on $V_i$, $\pi_i$, and the target pose $\phi$. For each reference view, d and p are first transformed into that view's coordinate space. We then predict the density and color of the point using a linear layer:

$(\sigma, C) = \mathrm{MLP}(\bar{V})$.   (5)
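For illustration, a sketch of the per-point feature prediction and weighted-average aggregation of Eqs. 4-5. The `project`, `encode`, `point_mlp`, `weight_head`, and `sigma_rgb_mlp` modules are hypothetical placeholders rather than the authors' exact components, and here the per-view weights are predicted from the features alone (the paper also conditions them on the reference and target poses).

```python
import torch
import torch.nn.functional as F

def feature_nerf_point(p, d, ref_feats, ref_poses, point_mlp, weight_head, sigma_rgb_mlp, project, encode):
    """Predict an aggregated volumetric feature and (density, color) for a batch of 3D points.

    p: 3D points (B, 3); d: ray directions (B, 3).
    ref_feats: list of N reference feature maps W_i, each (C, H, W).
    ref_poses: list of N camera poses; `project` maps 3D points to normalized image coords in [-1, 1].
    """
    per_view = []
    for W_i, pose_i in zip(ref_feats, ref_poses):
        uv = project(p, pose_i)                                    # (B, 2) projected locations pi^p_i
        sampled = F.grid_sample(W_i[None], uv[None, :, None, :],   # bilinear sampling of W_i
                                align_corners=True)[0, :, :, 0].T  # (B, C)
        V_i = point_mlp(torch.cat([sampled, encode(d), encode(p)], dim=-1))  # Eq. 4, per view
        per_view.append(V_i)
    V = torch.stack(per_view, dim=0)             # (N, B, C')
    w = torch.softmax(weight_head(V), dim=0)     # predicted per-view weights (weighted average)
    V_bar = (w * V).sum(dim=0)                   # aggregated feature V_bar (Eq. 4)
    sigma, rgb = sigma_rgb_mlp(V_bar).split([1, 3], dim=-1)  # density and color (Eq. 5)
    return V_bar, sigma, rgb
```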
We also update the aggregated feature with the text condition c using cross-attention:

$\hat{V} = \mathrm{CrossAttn}(\bar{V}, c)$,   (6)

and render this updated intermediate feature using the predicted density $\sigma$:

$W_y(r) = \sum_{j=1}^{N_f} T_j \, (1 - \exp(-\sigma_j \delta_j)) \, \hat{V}_j$,   (7)

where r is the target ray, $\hat{V}_j$ is the aggregated feature of the j-th point on the ray, $\sigma_j$ is the predicted density of that point, $N_f$ is the number of sampled points along the ray between the near and far planes of the camera, and $T_j = \exp(-\sum_{k=1}^{j-1} \sigma_k \delta_k)$ handles occlusion up to that point.

Conditioning. To process the main target branch, we extract the intermediate 2D feature map after the self-attention layer s and cross-attention layer g, i.e., $W_x = g(s(z_0), c)$. We concatenate $W_x$ with the rendered features $W_y$ and then project back into the original feature dimension using a linear layer. Thus, the output of the modified transformer layer is

$F_{pose} = f(l(W_y \oplus W_x))$,   (8)

where l is a learnable weight matrix that projects the feature into the space processed by the feed-forward layer f. We initialize l such that the contribution from $W_y$ is zero at the start of training.

Training loss. Our training objective includes learning 3D-consistent FeatureNeRF modules, which contribute to the final goal of reconstructing the target concept in the diffusion model's output space. Thus, we fine-tune the model using the sum of the training losses corresponding to FeatureNeRF and the default diffusion model reconstruction loss:

$\mathcal{L}_{diffusion} = \sum_r M \, w_t \, \| \epsilon - \epsilon_\theta(x_t, t, c) \|$,   (9)

where M is the object mask, so the reconstruction loss is calculated only in the object mask region. The losses corresponding to FeatureNeRF consist of an RGB reconstruction loss:

$\mathcal{L}_{rgb} = \sum_r \Big\| M(r) \Big( C_{gt}(r) - \sum_{j=1}^{N_f} T_j (1 - \exp(-\sigma_j \delta_j)) \, C \Big) \Big\|$,   (10)

and two mask-based losses: (1) a silhouette loss [70] $\mathcal{L}_s$, which forces the rendered opacity to be similar to the object mask, and (2) a background suppression loss [6, 7] $\mathcal{L}_{bg}$, which enforces the density of all background rays to be zero, as we only wish to model the object:

$\mathcal{L}_s = \sum_r \Big\| M(r) - \sum_{j=1}^{N_f} T_j (1 - \exp(-\sigma_j \delta_j)) \Big\|, \qquad \mathcal{L}_{bg} = \sum_r (1 - M(r)) \sum_{j=1}^{N_f} \| 1 - \exp(-\sigma_j \delta_j) \|$.   (11)

Thus, the final training loss is

$\mathcal{L} = \mathcal{L}_{diffusion} + \lambda_{rgb} \mathcal{L}_{rgb} + \lambda_{bg} \mathcal{L}_{bg} + \lambda_s \mathcal{L}_s$,   (12)

where M is the object mask and the $\lambda_i$ are hyperparameters that balance the rendering quality of the intermediate images against the final denoised image. We keep the $\lambda_i$ fixed across all experiments. We assume access to the object's mask in each image, which is used to compute the above losses. The three FeatureNeRF losses are averaged across all pose-conditioned transformer layers.

Inference. During inference, to balance the text and reference-view conditions in the final generated image, we combine text and image guidance [9] as follows:

$\hat{\epsilon}_\theta(x_t, I = \{y_i, \pi_i\}_{i=1}^N, c) = \epsilon_\theta(x_t, \emptyset, \emptyset) + \lambda_I \big( \epsilon_\theta(x_t, I, \emptyset) - \epsilon_\theta(x_t, \emptyset, \emptyset) \big) + \lambda_c \big( \epsilon_\theta(x_t, I, c) - \epsilon_\theta(x_t, I, \emptyset) \big)$,   (13)

where $\lambda_I$ is the image guidance scale and $\lambda_c$ is the text guidance scale. Increasing the image guidance scale increases the generated image's similarity to the reference images; increasing the text guidance scale increases its consistency with the text prompt.

Training details.
During training, we sample the N views equidistant from each other and use the first as the target viewpoint and the others as references. We modify 12 transformer layers with pose conditioning out of 70 transformer layers in Stable Diffusion-XL. For rendering, we sample 24 points along the ray. The new concept is described as \u201cV\u2217 category\u201d, with V\u2217as a trainable token embedding [23, 45]. Furthermore, to reduce overfitting [74], we use generated images of the same category, such as random car images with ChatGPT-generated captions [12]. These images are randomly sampled 25% of the time during training. We also drop the text prompt with 10% probability to be able to use classifier-free guidance. We provide more implementation details in Appendix C. 4. Experiments Dataset. For our experiments, we select concepts from the Common Objects in 3D (CO3Dv2) dataset [71], commonly used for novel view synthesis, and NAVI [33]. Specifically, we select four categories with three instances from the CO3Dv2 datasetcar, chair, teddy bear, and motorcycle\u2014as Input Text prompt + Pose Ours Custom-Diffusion360 3D Editing ViCA-NeRF Customization Lora + Camera pose A V* motorcycle parked on a city street at night. A red V* chair in a white room. A V* teddybear next to a birthday cake with candles. 2D Image Editing LEDITS++ A V* toy in a grassy field surrounded with wildflowers 2D Image Editing SDEdit-1.5 2D Image Editing InstructPix2Pix A V* car next to a picnic table in a park. Figure 4. Qualitative comparison. Given a particular target pose, we show the qualitative comparison of our method with (1) Image editing methods SDEdit, InstructPix2Pix, and LEDITS++ which edit a NeRF rendered image from the input pose, (2) Vica-NeRF, a 3D editing method that trains a NeRF model for each input prompt, and (3) LoRA + Camera pose, our proposed baseline where we concatenate camera pose information to text embeddings during LoRA fine-tuning. Our method performs on par or better in keeping the target identity and poses while incorporating the new text prompt\u2014e.g., putting a picnic table next to the SUV car (1st column)\u2014and following multiple text conditions\u2014e.g., turning the chair red and placing it in a white room (3rd column). V\u2217token is used only in ours and the LoRA + Camera pose method. Ground truth rendering from the given pose is shown as an inset in the first three rows. We show more sample comparisons in Figure 15 of Appendix. Scene change: A V* rubber duck sitting in a grassy field, surrounded by wildflowers. Scene change: A V* teddybear on a park bench under trees. Color change: A green V* car in a driveway, next to a house. Shape change: A rocking V* chair on a porch. Object insertion: A V* teddybear next to a birthday cake with candles. Color change: A blue V* motorcycle. Sample Target Images Figure 5. Qualitative samples with varying pose. Our method\u2019s results with different text prompts and target poses as conditions. Our method learns the identity of custom objects while allowing the user to control the camera pose and text prompt for generating the object in new contexts, e.g., changing the background scene or object color and shape. In each row, the images were generated with the same seed while changing the camera pose around the object in a turntable manner. Figure 16 in the Appendix shows more such samples. Note that each image in a row is independently generated. We do not aim to generate multi-view consistent scenes. 
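Referring back to the training details above (the text prompt is dropped 10% of the time to enable classifier-free guidance), the two guidance scales of Eq. 13 can then be combined at sampling time. A minimal sketch, with `eps_model` as a placeholder for the conditional denoiser and illustrative default scales:

```python
import torch

def guided_noise(eps_model, xt, t, ref_views, text, lambda_img=1.5, lambda_text=7.5):
    """Combined image/text classifier-free guidance (Eq. 13).

    ref_views: reference-view conditioning I = {(y_i, pi_i)}; text: prompt conditioning c.
    `None` stands for the dropped (null) condition learned during training.
    """
    eps_uncond = eps_model(xt, t, None, None)        # eps_theta(x_t, empty, empty)
    eps_img = eps_model(xt, t, ref_views, None)      # eps_theta(x_t, I, empty)
    eps_full = eps_model(xt, t, ref_views, text)     # eps_theta(x_t, I, c)
    return (eps_uncond
            + lambda_img * (eps_img - eps_uncond)    # image guidance toward the reference views
            + lambda_text * (eps_full - eps_img))    # text guidance toward the prompt
```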
| Method | Text Alignment | Image Alignment | Photorealism |
|--------|----------------|-----------------|--------------|
| SDEdit | 40.06 ± 2.68% | 36.08 ± 2.80% | 33.11 ± 2.82% |
| vs. Ours | 59.40 ± 2.68% | 63.92 ± 2.80% | 66.89 ± 3.18% |
| InstructPix2Pix | 44.79 ± 2.58% | 29.34 ± 2.24% | 27.61 ± 2.63% |
| vs. Ours | 55.21 ± 2.58% | 70.66 ± 2.24% | 72.39 ± 2.63% |
| LEDITS++ | 32.47 ± 2.39% | 35.86 ± 2.50% | 26.18 ± 2.82% |
| vs. Ours | 67.53 ± 2.39% | 64.14 ± 2.50% | 73.82 ± 2.82% |
| Vica-NeRF | 27.13 ± 2.83% | 24.36 ± 3.35% | 12.90 ± 2.67% |
| vs. Ours | 72.87 ± 2.83% | 75.64 ± 3.35% | 87.10 ± 2.67% |
| LoRA + Camera pose | 32.26 ± 2.67% | 66.97 ± 2.50% | 52.51 ± 2.75% |
| vs. Ours | 67.64 ± 2.67% | 33.03 ± 2.50% | 47.49 ± 2.75% |

Table 1. Human preference evaluation. Our method is preferred over all baselines for text alignment, image alignment to the target concept, and photorealism, except against LoRA + Camera pose, which overfits the training images, as also shown in Figure 4.

| Method | Angular error | Camera center error |
|--------|---------------|---------------------|
| Ours | 14.19 | 0.080 |
| LoRA + Camera pose | 41.14 | 0.305 |

Table 2. Camera pose accuracy of the images generated by ours and the LoRA + Camera pose baseline. We observe that the baseline usually overfits to the training images and does not respect the target pose with new text prompts.

each instance is uniquely identifiable for these categories. From the NAVI dataset, we select two unique toy-like concepts. We use the camera poses provided in the dataset for the multi-view images. A representative image of each concept is shown in Figure 14, Appendix A. For each instance, we sample ∼100 images and use half for training and half for evaluation. The camera poses are normalized such that the mean camera location is the origin and the first camera is at unit norm [100]. Baselines. While no prior method targets our exact task, we use three types of related baselines: (1) 2D image editing methods, which aim to preserve details of the input image and thus keep the object in the same pose as the input image. This includes three recent and publicly available methods: LEDITS++ [8], InstructPix2Pix [9], and SDEdit [52] with Stable Diffusion 1.5 (and SDXL in Appendix A). As image editing methods by themselves do not support camera viewpoint manipulation, we first render a NeRF model [87] in the target pose and then edit the rendered image. (2) A customization-based method, LoRA + Camera pose, where we modify LoRA [32, 76] by concatenating the camera pose information to the text embeddings, following the recent work Zero-1-to-3 [49]. (3) VICA-NeRF [20], a 3D editing method that trains a NeRF for each new text prompt. In Appendix C, we provide more details on implementation and hyperparameters for each baseline. Evaluation metrics. To create an evaluation set, we generate 16 prompts per object category using ChatGPT [12]. We instruct ChatGPT to propose four types of prompts: scene change, color change, object composition, and shape change. We then manually inspect them to remove implausible or

Figure 6. Quantitative comparison. We show CLIP scores (higher is better) vs. DINO-v2 scores (higher is better) for each method (Ours, InstructPix2Pix SD1.5, LEDITS++, SDEdit SD1.5, LoRA + Camera Pose, ViCA-NeRF) on each category (car, chair, teddybear, motorcycle, toy), along with the overall mean and standard error (highlighted).
Our method results in higher CLIP text alignment while maintaining visual similarity to target concepts, as indicated by DINO-v2 scores. The text alignment of our method compared to SDEdit and InstructPix2Pix is only marginally better as these methods incorporate the text prompt but at the cost of photorealism, as we show in Table 1. overly complicated text prompts [93]. Table 5 in Appendix B lists all the evaluation prompts. For a quantitative comparison, we primarily use a pairwise human preference study. We compare our method against each baseline to measure image alignment to target concept, alignment to input text prompt, and photorealism of generated images. In total, we collect \u223c1000 responses per pairwise study using Amazon Mechanical Turk. We also show the performance of our method and baselines on other standard metrics like CLIP Score [67] and DINOv2 [59] image similarity [74] to measure the textand image-alignment. To measure whether the object in generated images corresponds to the input camera pose for our method and the LoRA + Camera pose baseline, we use a pretrained model, RayDiffusion [101], to predict the poses from generated images and calculate its error relative to the ground truth camera poses. More details about evaluation are provided in Appendix B. 4.1. Results Generation quality and adherence. First, we measure the quality of the generation \u2013 adherence to the text prompt, the identity preservation to the customized objects, and photorealism \u2013 irrespective of the camera pose. For the comparison, we generate 18 images per prompt on 6 target camera poses, totaling 288 images per concept. Table 1 shows the pairwise human preference for our method vs. baselines. Our method is preferred over all baselines except LoRA + Camera pose, A V* car beside a field of blooming sunflowers. Focal length A V* car beside a field of blooming sunflowers. Scale A V* teddybear dressed as a construction worker, with orange vest, and buildings in background A V* teddybear on a cozy armchair by a fireplace. A V* car car parked by a snowy mountain range. Horizontal translation A V* car car parked by a snowy mountain range. Vertical translation A V* teddybear sitting on the sand at the beach A V* teddybear sitting on the sand at the beach Figure 7. Extrapolating camera pose from training views. Our method can generalize to different camera poses, including viewpoints not within the training distribution. Top left: We vary the focal length from \u00d70.8 to \u00d71.4 of the original focal length. Top right: We vary the camera position towards the image plane along the z axis. Bottom row: We vary the camera position along the horizontal and vertical axis. which we observe to overfit on training images. Figure 6 shows the CLIP vs. DINO scores for all methods and object categories. Ideally, a method should have both a high CLIP score and a DINO score, but often, there is a trade-off between textand image alignment. Our method has on-par or better text alignment relative to the baselines while having better image alignment. We observe that image-editing baselines often require careful hyperparameter tuning for each image. We select the best-performing hyperparameters and keep them fixed across all experiments. We use the \u223c50 validation camera poses not used during training for evaluation and randomly perturb the camera position or focal length. Figure 11 in Appendix B shows sample training and perturbed validation camera poses for the car object. Camera pose accuracy. 
Previously, we have measured our method purely on image customization benchmarks. Next, we evaluate the camera pose accuracy as well. Table 2 shows the camera pose accuracy of the generated images in terms of mean angular error and camera center error. We observe that LoRA + Camera pose baseline overfits on training images and can fail at generating images in the correct pose with new text prompts during inference. We evaluate this on validation camera poses of concepts from the CO3Dv2 dataset with the camera\u2019s principal axis pointing towards the object at the scene\u2019s center. This is because RayDiffusion has been trained on this setup of the CO3Dv2 dataset and fails on unique objects. Qualitative comparison. We show the qualitative comparison of our method with the baselines in Figure 4. As we can see, image-editing-based methods often fail at generating photorealistic results. In the case of LoRA + Camera Pose, we observe that it fails to generalize and overfits to the training views (5th row Figure 4). Finally, the 3D editing-based method Vica-NeRF maintains 3D consistency but generates blurred images for text prompts that change the background scene. Figure 5 shows samples with different text prompts and target camera poses for our method. Generalization to novel camera poses. Since our method learns a 3D radiance field, we can also extrapolate to unseen camera poses at inference time as shown in Figure 7. We generate images while varying the camera distance from the object (scale), focal length, or camera position along the horizontal and vertical axis. Applications. Our method can be combined with existing image editing methods as well. Figure 8a shows an example where we use SDEdit [52] to in-paint the object in varying poses while keeping the same background. We can also generate interesting panoramas using MultiDiffusion [3], where the object\u2019s placement in each grid is controlled by our method, as shown in Figure 8b. Moreover, since we learn a 3D consistent FeatureNeRF for the new concept, we can compose multiple instances of the object in feature space [84], with each instance in a different camera pose. Figure 8c shows an example of two teddy bears facing (a) Only changing the object pose with same background (b) A birthday party scene panorama (c) Composing multiple instances of the object Figure 8. Applications. 1st row: Our method can be combined with other image editing methods as well. We use SDEdit with our method to in-paint the rubber duck in different poses while keeping the same background. 2nd row: We can generate interesting panorama shots by controlling the camera pose of the object in each grid independently. 3rd row: We can also compose the radiance field predicted by FeatureNeRF to control the relative pose while generating multiple instances of the object. Method Text Align. Image Align. Camera-pose Accuracy CLIPscore\u2191 foreground\u2191 background\u2193 Angular error \u2193 Camera center error \u2193 Ours 0.248 0.471 0.348 14.19 0.080 w/o Eqn. 6 0.250 0.460 0.340 16.08 0.096 w/o Lbg + Ls 0.239 0.471 0.371 11.83 0.068 Table 3. Ablation experiments. Not enriching volumetric features with text cross-attention (Eqn. 6) has an adverse effect on image alignment. Not having mask-based losses (Eqn. 11) leads to overfitting on training images and decreases the text alignment. The worst performing metrics are grayed. Our final method achieves a balance between the input conditions of the target concept, text prompt, and camera pose. 
each other sitting on the armchair. Here, we additionally use DenseDiffusion [42] to modulate the attention maps and guide the generation of each object instance to only appear in the corresponding region predicted by FeatureNeRF. At the same time, the attention maps of the empty region predicted by FeatureNeRF are modulated to match the part of the text prompt describing the image\u2019s background. 4.2. Ablation In this section, we perform ablation experiments regarding different components of our method and show its contribution. All ablation studies are done on CO3D-v2 instances Camera extrapolation Composition with an object A cat riding a V* scooty V* chair next to a potted plant Vertical translation Focal length Figure 9. Limitations. Our method can fail when extrapolating camera poses far from the training image camera poses, e.g., changing focal length (top left) or translating camera s.t. the object is not in the center (top right) as the pre-trained model is often biased towards generating the object in the center. Also, it can fail to follow the input text prompt or the exact camera pose when multiple objects are composed in a scene (bottom row). with validation camera poses. Background losses. When removing the silhouette and background loss, as explained in Eqn. 11 from training, we observe a decrease in text alignment and overfitting on training images as shown in Table 3. Figure 12 in Appendix A shows qualitatively that the model generates images with backgrounds more similar to the training views. This is also reflected by the higher similarity between generated images and background regions of the training images (3rd column Table 3) compared to our final method. Text cross-attention in FeatureNeRF. We also enrich the 3D learned features with text cross-attention as shown in Eqn. 6. We perform the ablation experiment of removing this component from the module. Table 3 shows that this leads to a drop in image alignment with the target concept. Thus, cross-attention with text in the volumetric feature space helps the module learn the target concept better. 5. Discussion and Limitations We introduce a new task of customizing text-to-image models with camera viewpoint control. Our method jointly learns 3D feature prediction modules and adapts 2D diffusion attention modules to be conditioned on these features. This enables synthesizing the object with new text prompts and precise object pose control. While our method outperforms existing image editing and model customization approaches, it still has several limitations, as we discuss below. Limitations. As we show in Figure 9, our method struggles at generalizing to extreme camera poses that were not seen during training and resorts to either changing the object identity or generating the object in a seen pose. We expect this to improve by adding more camera pose variations during training. An additional case when our method struggles to follow the camera pose is when the text prompt adds multiple new objects. We hypothesize that in such challenging scenarios, the model is biased towards generating front views, often seen in training. Also, our proposed pose-conditioning module is trained with a finetuning-based method. Exploring pose-conditioning in a zero-shot customization and editing methods [16, 24] may help reduce the time and computation as well. Here, we focus on enabling camera view control while generating rigid objects. 
Future work includes extending this conditioning to handle dynamic objects that change the pose in between reference views. One potential way to address this is using a representation based on dynamic and non-rigid NeRF methods [21, 66, 84]. Acknowledgment. We are thankful to Kangle Deng, ShengYu Wang, and Gaurav Parmar for their helpful comments and discussion and to Sean Liu, Ruihan Gao, Yufei Ye, and Bharath Raj for proofreading the draft. This work was partly done by Nupur Kumari during the Adobe internship. The work is partly supported by Adobe Research, the Packard Fellowship, the Amazon Faculty Research Award, and NSF IIS-2239076. Grace Su is supported by the NSF Graduate Research Fellowship (Grant No. DGE2140739)." + } + ] +} \ No newline at end of file