diff --git "a/intro_28K/test_introduction_long_2405.05216v1.json" "b/intro_28K/test_introduction_long_2405.05216v1.json" new file mode 100644--- /dev/null +++ "b/intro_28K/test_introduction_long_2405.05216v1.json" @@ -0,0 +1,103 @@ +{ + "url": "http://arxiv.org/abs/2405.05216v1", + "title": "FinePOSE: Fine-Grained Prompt-Driven 3D Human Pose Estimation via Diffusion Models", + "abstract": "The 3D Human Pose Estimation (3D HPE) task uses 2D images or videos to\npredict human joint coordinates in 3D space. Despite recent advancements in\ndeep learning-based methods, they mostly ignore the capability of coupling\naccessible texts and naturally feasible knowledge of humans, missing out on\nvaluable implicit supervision to guide the 3D HPE task. Moreover, previous\nefforts often study this task from the perspective of the whole human body,\nneglecting fine-grained guidance hidden in different body parts. To this end,\nwe present a new Fine-Grained Prompt-Driven Denoiser based on a diffusion model\nfor 3D HPE, named \\textbf{FinePOSE}. It consists of three core blocks enhancing\nthe reverse process of the diffusion model: (1) Fine-grained Part-aware Prompt\nlearning (FPP) block constructs fine-grained part-aware prompts via coupling\naccessible texts and naturally feasible knowledge of body parts with learnable\nprompts to model implicit guidance. (2) Fine-grained Prompt-pose Communication\n(FPC) block establishes fine-grained communications between learned part-aware\nprompts and poses to improve the denoising quality. (3) Prompt-driven Timestamp\nStylization (PTS) block integrates learned prompt embedding and temporal\ninformation related to the noise level to enable adaptive adjustment at each\ndenoising step. Extensive experiments on public single-human pose estimation\ndatasets show that FinePOSE outperforms state-of-the-art methods. We further\nextend FinePOSE to multi-human pose estimation. Achieving 34.3mm average MPJPE\non the EgoHumans dataset demonstrates the potential of FinePOSE to deal with\ncomplex multi-human scenarios. Code is available at\nhttps://github.com/PKU-ICST-MIPL/FinePOSE_CVPR2024.", + "authors": "Jinglin Xu, Yijie Guo, Yuxin Peng", + "published": "2024-05-08", + "updated": "2024-05-08", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "label": "Original Paper", + "paper_cat": "Diffusion AND Model", + "gt": "Given monocular 2D images or videos, 3D Human Pose Estimation (3D HPE) aims to predict the positions of human 1 arXiv:2405.05216v1 [cs.CV] 8 May 2024 body joints in 3D space. It is vital in various applications, including self-driving [50, 56], sports analysis [13, 31, 46], abnormal detection [9, 45], and human-computer interaction [11, 25, 42]. Considering the expensive computational costs of directly obtaining 3D human poses from 2D contents, 3D HPE is usually decomposed into two stages: 1) detecting 2D keypoints in images or videos [5, 7, 24, 39], and 2) mapping 2D keypoints to 3D human poses [6, 10, 35, 48, 52]. In this work, we mainly focus on the second stage, estimating 3D human poses given 2D keypoints. 
Existing monocular 3D HPE methods [4, 6, 10, 17\u201319, 27, 28, 35, 36, 43, 44, 47, 48, 52, 54, 59, 61] usually face three challenges: 1) Uncertainty: depth ambiguity is inherent in the mapping from 2D skeletons to 3D ones (one-to-many); 2) Complexity: the flexible human body structure, complex inter-joint relationships, and a high degree of limb freedom lead to self-occlusion or rare and complicated poses; 3) Generalizability: current publicly available 3D HPE datasets have limited action classes, and thus the models trained on such data are prone to overfitting and difficult to generalize to more diverse action classes. To address these issues, we consider improving 3D HPE model performance by enriching the input information. We find that existing methods ignore accessible texts and naturally feasible knowledge of humans, even though these promise to provide the model with more guidance. We explicitly utilize (1) the action class of human poses, (2) the kinematic information \u201cspeed\u201d, and (3) the way that different human body parts (e.g., person, head, body, arms, and legs) move in human activities to build fine-grained part-aware prompts for the reconstruction task. Specifically, we incorporate a fine-grained part-aware prompt learning mechanism into our framework to drive 3D human pose estimation via vision-language pre-trained models. It is well known that text prompts play a crucial role in various downstream tasks for vision-language pre-training models (e.g., CLIP [30]). However, manually designing prompt templates is expensive and cannot ensure that the final prompt is optimal for the 3D HPE task. Thus, we create a new fine-grained part-aware prompt learning mechanism that adaptively learns modifiers for different human body parts to precisely describe their movements from multiple granularities, including action class, speed, the whole person, and fine-grained human body parts. This new mechanism, coupled with diffusion models, possesses controllable high-quality generation capability, which is beneficial in addressing the challenges of the 3D human pose estimation task. In this work, we propose a Fine-grained Prompt-driven Denoiser (FinePOSE) based on diffusion models for 3D human pose estimation, shown in Fig. 1, which is composed of a fine-grained part-aware prompt learning (FPP) block, a fine-grained prompt-pose communication (FPC) block, and a prompt-driven timestamp stylization (PTS) block. Concretely, the FPP block encodes three kinds of information about the human pose, including the action class, coarse- and fine-grained parts of humans like \u201cperson, head, body, arms, legs\u201d, and the kinematic information \u201cspeed\u201d, and integrates them with pose features for serving subsequent processes. Then, the FPC block injects fine-grained part-aware prompt embedding into noisy 3D poses to establish fine-grained communications between learnable part-aware prompts and poses for enhancing the denoising capability. To handle 3D poses with different noise levels, the PTS block introduces the timestamp coupled with fine-grained part-aware prompt embedding into the denoising process to enhance its adaptability and refine the prediction at each noise level. Our contributions can be summarized as follows: \u2022 We propose a new fine-grained part-aware prompt learning mechanism coupled with diffusion models that possesses human-body-part controllable high-quality generation capability, beneficial to the 3D human pose estimation task. 
\u2022 Our FinePOSE encodes multi-granularity information about the action class, coarse- and fine-grained human parts, and kinematic information, and establishes fine-grained communications between learnable part-aware prompts and poses for enhancing the denoising capability. \u2022 Extensive experiments illustrate that our FinePOSE obtains substantial improvements on the Human3.6M and MPI-INF-3DHP datasets and achieves state-of-the-art performance. More experiments on EgoHumans demonstrate the potential of FinePOSE to deal with complex multi-human scenarios.", "main_content": "Diffusion Models. Diffusion models [12, 26, 37, 38] are a class of generative models that sequentially add a series of noise with different levels to the raw data, gradually transforming it from the original data distribution to a noisy distribution, and subsequently reconstruct the original data by denoising. Diffusion models have strong capabilities in many applications, from 2D image or video generation/editing [1\u20133, 16, 49] to 3D human pose estimation/generation [10, 17, 19, 27, 35, 47, 48, 52, 54, 59]. The 3D HPE task, for example, encounters various difficulties, including occlusions, limited training data, and inherent ambiguity in pose representations. Therefore, diffusion models\u2019 ability to generate high-fidelity 3D human poses makes them well suited for 3D HPE. 3D Human Pose Estimation. Considering that extracting 2D human skeletons from videos or images incurs expensive costs, the 3D human pose estimation task is usually divided into two phases: (1) estimating 2D positions of human joints from images or videos [5, 7, 22, 41], and (2) mapping 2D positions to the 3D space to estimate the 3D positions of human joints [4, 6, 10, 17\u201319, 27, 28, 35, 36, 43, 47, 48, 52, 54, 59, 61]. In this work, we focus on the second phase.
[Figure 2: architecture diagram of FinePOSE, showing the diffusion and denoising processes, the CLIP text encoder, the fine-grained part-aware prompts (person, action class, speed, head, body, arms, legs), the spatial/temporal/spatial-temporal MHSA modules, the prompt-pose MHCA, and the PTS block.] Figure 2. The architecture of the proposed FinePOSE. In the diffusion process, Gaussian noise is gradually added to the ground-truth 3D poses Y0, generating the noisy 3D poses Yt for the timestamp t. In the denoising process, Yt, X and t are fed to the fine-grained prompt-driven denoiser D to reconstruct pure 3D poses $\hat{Y}_0$. D is composed of a Fine-grained Part-aware Prompt learning (FPP) block, a Fine-grained Prompt-pose Communication (FPC) block, and a Prompt-driven Timestamp Stylization (PTS) block, where FPP provides more precise guidance for all human part movements, FPC establishes fine-grained communications between learnable prompts and poses for enhancing the denoising capability, and PTS integrates learned prompt embedding and the current timestamp for refining the prediction at each noise level.
Early on, TCN [29] used a fully convolutional network based on dilated temporal convolutions over 2D keypoints to estimate 3D poses in video. 
SRNet [51] proposed a split-and-recombine approach, leading to appreciable improvements in predicting rare and unseen poses. Anatomy [6] decomposed the task into bone direction prediction and bone length prediction, from which the 3D joint locations can be derived entirely. Recently, MixSTE [52] used temporal and spatial transformers alternately to obtain better spatio-temporal features. MotionBERT [59] proposed a pretraining stage to recover the underlying 3D motion from noisy partial 2D observations. GLA-GCN [48] globally modeled the spatio-temporal structure for 3D human pose estimation. D3DP [35] proposed a joint-level aggregation strategy to benefit from all generated poses. Unlike previous methods, our approach proposes a new fine-grained part-aware prompt learning mechanism coupled with diffusion models that possesses controllable, high-quality generation capability over human body parts, which benefits the 3D human pose estimation task. Prompt Learning. Prompt learning has been widely used in the computer vision community [8, 21, 57, 58]. Typically, CoOp [58] utilized continuous prompt optimization from downstream data instead of hand-crafted design, the pioneering work that brings prompt learning to the adaptation of pre-trained vision-language models. CoCoOp [57] extended CoOp by learning image-conditional prompts to improve generalization. ProDA [21] learned a prompt distribution over the output embedding space. VPT [8] introduced variational prompt tuning by combining a base learned prompt with a residual vector sampled from an instance-specific underlying distribution. PointCLIPV2 [60] combined CLIP [30] with GPT [20] to be a unified 3D open-world learner. Unlike the above methods, we propose a new fine-grained part-aware prompt learning mechanism, which encodes multi-granularity information about the action class, coarse- and fine-grained human parts, and kinematic data, and establishes fine-grained communications between learnable part-aware prompts and poses for enhancing the denoising capability. 3. The Proposed Approach: FinePOSE Given a 2D keypoints sequence $X\\in\\mathbb{R}^{N\\times J\\times 2}$, constructed from N frames with J joints in each, the proposed approach is formulated to predict the 3D pose sequence $Y\\in\\mathbb{R}^{N\\times J\\times 3}$. Considering the high-quality generation capability of the text-controllable denoising process of diffusion models, we develop a Fine-grained Prompt-driven Denoiser (FinePOSE) D for 3D human pose estimation. FinePOSE generates accurate 3D human poses enhanced by three core blocks: the Fine-grained Part-aware Prompt learning (FPP), Fine-grained Prompt-pose Communication (FPC), and Prompt-driven Timestamp Stylization (PTS) blocks. 3.1. Diffusion-Based 3D Human Pose Estimation Diffusion models are generative models that model the data distribution in the form of $p_\\theta(\\mathbf{Y}_0):=\\int p_\\theta(\\mathbf{Y}_{0:T})\\,d\\mathbf{Y}_{1:T}$ through chained diffusion and reverse (denoising) processes. The diffusion process gradually adds Gaussian noise into the ground-truth 3D pose sequence Y0 to corrupt it into an approximately Gaussian noise $\\mathbf{Y}_{t\\,(t\\to T)}$ using a variance schedule $\\{\\beta_t\\}_{t=1}^{T}$, which can be formulated as \\label{eq1} q\\left(\\mathbf{Y}_{t}\\mid \\mathbf{Y}_{0}\\right):=\\sqrt{\\bar{\\alpha}_{t}}\\,\\mathbf{Y}_{0}+\\epsilon\\sqrt{1-\\bar{\\alpha}_{t}}, (1) where $\\bar{\\alpha}_t:=\\prod_{s=0}^{t}\\alpha_s$ and $\\alpha_t:=1-\\beta_t$. Afterward, the denoising process reconstructs the uncontaminated 3D poses by a denoiser D. 
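To make Eq. (1) concrete, here is a minimal sketch of the forward corruption step in PyTorch. The linear beta schedule, the number of steps T, and the (243, 17, 3) pose shape are illustrative assumptions, not details taken from the paper.

```python
import torch

def q_sample(y0: torch.Tensor, t: int, alpha_bar: torch.Tensor) -> torch.Tensor:
    """Eq. (1): corrupt clean 3D poses Y_0 into noisy Y_t.

    y0:        clean pose sequence, shape (N, J, 3)
    t:         diffusion timestamp
    alpha_bar: cumulative products of alpha_s = 1 - beta_s
    """
    eps = torch.randn_like(y0)  # epsilon ~ N(0, I)
    return alpha_bar[t].sqrt() * y0 + eps * (1.0 - alpha_bar[t]).sqrt()

# Illustrative linear beta schedule (an assumption; the paper does not specify one here).
T = 1000
betas = torch.linspace(1e-4, 2e-2, T)
alpha_bar = torch.cumprod(1.0 - betas, dim=0)
y_t = q_sample(torch.randn(243, 17, 3), t=500, alpha_bar=alpha_bar)
```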
Since the degraded data is well approximated by a Gaussian distribution after the diffusion process, we can obtain initial 3D poses YT by sampling noise from a unit Gaussian. Passing YT (t = T) to the denoiser D, we obtain $\\hat{\\mathbf{Y}}_0$, which is thereafter used to generate the noisy 3D poses $\\hat{\\mathbf{Y}}_{t-1}$ as inputs to the denoiser D at timestamp t\u22121 via DDIM [37], which can be formulated as \\label{eq:DDIM} \\mathbf{Y}_{t-1}=\\sqrt{\\bar{\\alpha}_{t-1}}\\hat{\\mathbf{Y}}_0+\\epsilon_t\\sqrt{1-\\bar{\\alpha}_{t-1}-\\sigma^2_t}+\\sigma_t\\epsilon, (2) where t runs from T to 1, $\\epsilon\\sim\\mathcal{N}(0,\\mathbf{I})$ is standard Gaussian noise independent of Yt, and \\epsilon_t=\\left(\\mathbf{Y}_t-\\sqrt{\\bar{\\alpha}_t}\\cdot\\hat{\\mathbf{Y}}_0\\right)/\\sqrt{1-\\bar{\\alpha}_t}, (3a) \\sigma_t=\\sqrt{\\left(1-\\bar{\\alpha}_{t-1}\\right)/\\left(1-\\bar{\\alpha}_t\\right)}\\cdot\\sqrt{1-\\bar{\\alpha}_t/\\bar{\\alpha}_{t-1}}, (3b) where \u03f5t is the noise at timestamp t, and \u03c3t controls how stochastic the diffusion process is. 3.2. Fine-grained Prompt-driven Denoiser Fine-grained Part-aware Prompt Learning (FPP). To assist the reconstruction of pure 3D poses $\\hat{\\mathbf{Y}}_0$ from contaminated 3D poses Yt with additional information, FinePOSE guides the denoising process with the regular 2D keypoints X, the timestamp t, and the fine-grained part-aware prompt embedding P. We design the FPP block to learn P. It encodes three kinds of pose-related information in the prompt embedding space: the action class, coarse- and fine-grained parts of humans like \u201cperson, head, body, arms, legs\u201d, and the kinematic information \u201cspeed\u201d. Afterward, P is integrated with pose features for subsequent processes. A learnable prompt embedding $P=\\{p_k\\}_{k=1}^{K}$ has the shape K \u00d7 L \u00d7 D, where K denotes the number of text prompts, L indicates the number of tokens in each text prompt, and D is the dimension of the token embedding. Since the number of valid tokens is found to be three to four through the text encoder $\\mathcal{E}_{\\text{tx}}$, the first four tokens are taken as the representation $\\tilde{p}_k$ for each text. Moreover, since modifiers help precisely describe the movements of human body parts, we design a learnable vector $r_k\\in\\mathbb{R}^{(L_k-4)\\times D}$ to wrap the representations as $p_k$. The above can be formulated as \\tilde{\\bm{p}}_k=\\mathcal{E}_{\\text{tx}}(\\text{text}_k)[:4],\\ k\\in[1,K], (4a) \\bm{p}_k=\\text{Concat}(\\bm{r}_k,\\tilde{\\bm{p}}_k), (4b) where K = 7 and $\\{\\text{text}_k\\}_{k=1}^{7}$ indicate {person, [Action Class], speed, head, body, arms, legs}. $r_k$ is initialized from a Gaussian distribution with \u00b5 = 0 and \u03c3 = 0.02, and $\\{L_k\\}_{k=1}^{7}=\\{7, 12, 10, 10, 10, 14, 14\\}$, which sums to 77, matching the 77-token context length of the CLIP text encoder [30]. In short, the FPP block builds multi-granularity text prompts and learnable modifiers, providing precise guidance for each human body part, as shown in Fig. 2.
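The prompt construction in Eq. (4) can be sketched as follows. This is a hypothetical re-implementation: the embedding width D, the `text_feats` inputs (assumed to come from the frozen CLIP text encoder), and the module name are assumptions for illustration.

```python
import torch
import torch.nn as nn

K = 7                              # {person, [Action Class], speed, head, body, arms, legs}
L_k = [7, 12, 10, 10, 10, 14, 14]  # per-prompt token budgets; they sum to 77, CLIP's context length
D = 512                            # token embedding width (an assumption; depends on the CLIP variant)

class FPPPrompts(nn.Module):
    """Sketch of Eq. (4): frozen text features wrapped by learnable modifiers."""
    def __init__(self):
        super().__init__()
        # r_k initialized from N(0, 0.02^2), one modifier of length L_k - 4 per prompt.
        self.modifiers = nn.ParameterList(
            [nn.Parameter(0.02 * torch.randn(l - 4, D)) for l in L_k]
        )

    def forward(self, text_feats):
        # text_feats[k]: the first four valid tokens from the frozen encoder E_tx, shape (4, D).
        # p_k = Concat(r_k, p~_k), giving K prompt sequences of lengths L_k.
        return [torch.cat([r, p4], dim=0) for r, p4 in zip(self.modifiers, text_feats)]
```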
Fine-grained Prompt-pose Communication (FPC). After obtaining the fine-grained part-aware prompt embedding P, we establish fine-grained communications between the learned part-aware prompts and poses using the FPC block to improve the denoising quality. Specifically, when processing the noisy 3D poses Yt, it injects the prompt embedding P, the 2D keypoints X, and the timestamp t into them. First, FPC integrates Yt and the guidance information (i.e., X, t, and P) by a series of concatenation and addition operations, as $Z_t=\\text{Concat}(Y_t,X)+P[L]+\\mathcal{F}(t)$. $\\mathcal{F}$ is the timestamp embedding network, containing a sinusoidal function followed by two Linear layers connected by a GELU non-linearity. The timestamp embedding lets the denoiser adapt to the quantity of Gaussian noise added at each step. Since the denoiser D works iteratively, providing detailed information about the current timestamp t is crucial for D to handle 3D poses containing different noise levels effectively. Then, Zt is encoded by a spatial transformer, where the multi-head self-attention (MHSA) mechanism helps to focus on the fine-grained relationships between joints within each frame, obtaining $Z_t^{s}$. To completely inject the prompt embedding P into $Z_t^{s}$, we implement a multi-head cross-attention model, where the query, key, and value are $Q=W_QZ_t^{s}$, $K=W_KP$, $V=W_VP$. The value is aggregated with the cross-attention A to generate fine-grained prompt-driven pose features $Z_t^{sp}$, achieving fine-grained prompt-pose communication. The mechanism can be formulated as \\mathbf{A}=\\text{softmax}(\\mathbf{Q}\\otimes\\mathbf{K}^\\top/\\sqrt{d}), (5a) \\mathbf{Z}_t^{sp}=\\mathbf{A}\\otimes\\mathbf{V},\\ \\tilde{\\mathbf{Z}}_t^{sp}=\\mathcal{P}(\\mathbf{Z}_t^{sp}), (5b) where d = D/H and H is the number of attention heads. $\\mathcal{P}$ indicates the PTS block that brings the timestamp t into the generation process to obtain the timestamp-stylized output $\\tilde{\\mathbf{Z}}_t^{sp}$. On the other hand, to model inter-frame relationships between poses, $\\tilde{\\mathbf{Z}}_t^{sp}$ is encoded using a temporal transformer via MHSA to obtain $\\tilde{\\mathbf{Z}}_t^{spf}$. Finally, we utilize a spatial-temporal transformer accompanied by permutation operations between the spatial and temporal dimensions to extract more compact fine-grained prompt-driven pose features from $\\tilde{\\mathbf{Z}}_t^{spf}$, which are decoded as the predicted 3D poses $\\hat{\\mathbf{Y}}_0$. Prompt-driven Timestamp Stylization (PTS). As mentioned, providing the timestamp embedding to the denoising process is critical for handling 3D poses with different noise levels. Therefore, inspired by MotionDiffuse [53], we introduce the PTS block that explicitly embeds the timestamp t by positional embedding [40] and sums it with the learnable prompt embedding P obtained by the FPP block, as $v=P[L]+\\mathcal{F}(t)$. Given the intermediate output $Z_t^{sp}$ of the FPC block, the PTS block calculates $\\tilde{Z}_t^{sp}=Z_t^{sp}\\cdot\\psi_w(\\phi(v))+\\psi_b(\\phi(v))$, where $\\psi_b$, $\\psi_w$, $\\phi$ are three different linear projections, and $(\\cdot)$ is the Hadamard product.
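The PTS stylization formula just given is simple enough to write out directly. A minimal sketch, assuming equal widths for all projections (the paper does not state the exact dimensions):

```python
import torch
import torch.nn as nn

class PTS(nn.Module):
    """Sketch of PTS: Z~ = Z * psi_w(phi(v)) + psi_b(phi(v)), with v = P[L] + F(t)."""
    def __init__(self, d: int):
        super().__init__()
        self.phi = nn.Linear(d, d)     # shared projection of the style vector v
        self.psi_w = nn.Linear(d, d)   # multiplicative (Hadamard) style
        self.psi_b = nn.Linear(d, d)   # additive style

    def forward(self, z_sp, prompt_emb, t_emb):
        v = prompt_emb + t_emb                        # v = P[L] + F(t)
        h = self.phi(v)
        return z_sp * self.psi_w(h) + self.psi_b(h)   # element-wise stylization
```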
Table 1. Quantitative comparison with the state-of-the-art 3D human pose estimation methods on the Human3.6M dataset. N: the number of input frames. CPN, HRNet, SH: using CPN [7], HRNet [39], and SH [24] as the 2D keypoint detectors to generate the inputs. GT: using the ground-truth 2D keypoints as inputs. The best and second-best results are highlighted in bold and underlined formats.
Method | N | Human3.6M (DET): Detector | MPJPE \u2193 | P-MPJPE \u2193 | Human3.6M (GT): Detector | MPJPE \u2193 | P-MPJPE \u2193 | Year
TCN [29] | 243 | CPN | 46.8 | 36.5 | GT | 37.8 | / | CVPR\u201919
Anatomy [6] | 243 | CPN | 44.1 | 35.0 | GT | 32.3 | / | CSVT\u201921
P-STMO [33] | 243 | CPN | 42.8 | 34.4 | GT | 29.3 | / | ECCV\u201922
MixSTE [52] | 243 | HRNet | 39.8 | 30.6 | GT | 21.6 | / | CVPR\u201922
PoseFormerV2 [54] | 243 | CPN | 45.2 | 35.6 | GT | 35.5 | / | CVPR\u201923
MHFormer [19] | 351 | CPN | 43.0 | 34.4 | GT | 30.5 | / | CVPR\u201922
Diffpose [10] | 243 | CPN | 36.9 | 28.7 | GT | 18.9 | / | CVPR\u201923
GLA-GCN [48] | 243 | CPN | 44.4 | 34.8 | GT | 21.0 | 17.6 | ICCV\u201923
ActionPrompt [55] | 243 | CPN | 41.8 | 29.5 | GT | 22.7 | / | ICME\u201923
MotionBERT [59] | 243 | SH | 37.5 | / | GT | 16.9 | / | ICCV\u201923
D3DP [34] | 243 | CPN | 35.4 | 28.7 | GT | 18.4 | / | ICCV\u201923
FinePOSE (Ours) | 243 | CPN | 31.9 (-3.5) | 25.0 (-3.7) | GT | 16.7 (-0.2) | 12.7 (-4.9) |
3.3. Training & Inference Training. The contaminated 3D poses Yt are fed to the fine-grained prompt-driven denoiser D to reconstruct the noise-free 3D poses $\\hat{\\mathbf{Y}}_0=D(Y_t,X,t,P)$. The entire framework is optimized by minimizing the MSE loss $\\|\\mathbf{Y}_0-\\hat{\\mathbf{Y}}_0\\|_2$. Inference. Since the distribution of YT is nearly an isotropic Gaussian distribution, we sample H initial 3D poses $\\{\\mathbf{Y}_T^h\\}_{h=1}^{H}$ from a unit Gaussian. After passing them to the denoiser D, we obtain H feasible 3D pose hypotheses $\\{\\hat{\\mathbf{Y}}_0^h\\}_{h=1}^{H}$. Each hypothesis $\\hat{\\mathbf{Y}}_0^h$ is used to generate the noisy 3D poses $\\hat{\\mathbf{Y}}_{t-1}^h$ as inputs to the denoiser D for the next timestamp t\u22121. Then, we regenerate $\\{\\hat{\\mathbf{Y}}_0^h\\}_{h=1}^{H}$ using $\\{\\hat{\\mathbf{Y}}_{t-1}^h\\}_{h=1}^{H}$ as inputs to the denoiser D for the next timestamp t\u22122. Analogously, this process iterates M times starting from the timestamp T, so each iteration m \u2208 [1, M] uses the timestamp $t=T(1-\\frac{m}{M})$. Following the Joint-Wise Reprojection-Based Multi-Hypothesis Aggregation (JPMA) in [35], we reproject $\\{\\hat{\\mathbf{Y}}_0^h\\}_{h=1}^{H}$ to the 2D camera plane using known or estimated intrinsic camera parameters and then choose the joints with minimum projection errors with respect to the input X, as h'=\\mathop{\\arg\\min}\\limits_{h\\in[1,H]}\\|\\mathcal{P}_R(\\hat{\\mathbf{Y}}_0^h)[j]-\\mathbf{X}[j]\\|_2, (6a) \\hat{\\mathbf{Y}}_0[j]=\\hat{\\mathbf{Y}}_0^{h'}[j],\\ j\\in[1,J], (6b) where $\\mathcal{P}_R$ is the reprojection function, j is the index of joints, and h\u2032 indicates the index of the selected hypothesis. JPMA enables us to select joints from distinct hypotheses automatically to form the final prediction $\\hat{\\mathbf{Y}}_0$. 3.4. Extension to 3D Multi-Human Pose Estimation We append a post-integration step to FinePOSE to handle the multi-human scenario without incurring extra computational cost. Specifically, given a multi-human 2D keypoints sequence $X^{mul}\\in\\mathbb{R}^{C\\times N\\times J\\times 2}$, which involves C human characters, FinePOSE first predicts $\\hat{\\mathbf{Y}}_0^c$ for each character c \u2208 [1, C]. Considering that some characters may temporarily leave the camera field of view, their positions in those frames are set to zeros to ensure synchronization of all characters\u2019 states in $X^{mul}$. Next, we integrate $\\{\\hat{\\mathbf{Y}}_0^c\\}_{c=1}^{C}$ by stacking over the character dimension, obtaining the final prediction $\\hat{\\mathbf{Y}}_0^C\\in\\mathbb{R}^{C\\times N\\times J\\times 3}$.
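The joint-wise selection of Eq. (6) in Sec. 3.3 reduces to an argmin over reprojection errors. A minimal sketch for one frame, where `reproject` is a stand-in for $\\mathcal{P}_R$ (the paper leaves it to known or estimated camera intrinsics):

```python
import torch

def jpma(hyps_3d, x_2d, reproject):
    """Sketch of Eq. (6) for one frame.

    hyps_3d:   (H, J, 3) candidate 3D poses
    x_2d:      (J, 2) input 2D keypoints
    reproject: stand-in for P_R, mapping (H, J, 3) -> (H, J, 2)
    """
    err = (reproject(hyps_3d) - x_2d).norm(dim=-1)  # (H, J) joint-wise 2D error
    best = err.argmin(dim=0)                        # h' for every joint j
    joints = torch.arange(hyps_3d.shape[1])
    return hyps_3d[best, joints]                    # (J, 3) assembled prediction
```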
4. Experiments 4.1. Datasets and Metrics Human3.6M [14] is a widely used benchmark dataset for human pose estimation tasks, which provides a large-scale collection of accurate 3D joint annotations on diverse human activities. Human3.6M consists of 3.6 million RGB images, captured from multiple camera views, of 11 professional actors performing 15 activities, e.g., walking, running, and jumping. Following previous efforts [19, 29, 34], our FinePOSE is trained on five subjects (S1, S5, S6, S7, S8) and evaluated on two subjects (S9, S11). We calculate the mean per joint position error (i.e., MPJPE) to measure the average Euclidean distance in millimeters between the ground-truth and estimated 3D joint positions for evaluation. We also report Procrustes MPJPE (i.e., P-MPJPE), which calculates MPJPE after aligning the estimated poses to the ground truth using a rigid transformation. MPI-INF-3DHP [23] provides synchronized RGB video sequences with accurate 3D joint annotations for 3D human pose estimation. It comprises 8 activities conducted by 8 actors in the training set, while the test set encompasses 7 activities. We calculate MPJPE, the percentage of correctly estimated keypoints (i.e., PCK) within a 150mm range, and the area under the curve (i.e., AUC). EgoHumans [15] collects multi-human ego-exo videos covering 7 sports activities. Recently, a subset of 2D-to-3D keypoints annotations has been released covering tagging, lego-assembling, and fencing. It contains 105 RGB videos taken by ego cameras. Between 1 and 3 human characters appear in each video, resulting in a total of 238 subsequences. We report the average MPJPE per video.
Table 2. Quantitative comparison with the state-of-the-art 3D human pose estimation methods on the Human3.6M dataset using 2D keypoint detectors to generate the inputs. Dir., Disc., \u00b7\u00b7\u00b7, and WalkT. correspond to 15 action classes. Avg indicates the average MPJPE among the 15 action classes. The best and second-best results are highlighted in bold and underlined formats.
Method / MPJPE \u2193 | Dir. | Disc. | Eat | Greet | Phone | Photo | Pose | Pur. | Sit | SitD. | Smoke | Wait | WalkD. | Walk | WalkT. | Avg
TCN [29] | 45.2 | 46.7 | 43.3 | 45.6 | 48.1 | 55.1 | 44.6 | 44.3 | 57.3 | 65.8 | 47.1 | 44.0 | 49.0 | 32.8 | 33.9 | 46.8
SRNet [51] | 46.6 | 47.1 | 43.9 | 41.6 | 45.8 | 49.6 | 46.5 | 40.0 | 53.4 | 61.1 | 46.1 | 42.6 | 43.1 | 31.5 | 32.6 | 44.8
RIE [32] | 40.8 | 44.5 | 41.4 | 42.7 | 46.3 | 55.6 | 41.8 | 41.9 | 53.7 | 60.8 | 45.0 | 41.5 | 44.8 | 30.8 | 31.9 | 44.3
Anatomy [6] | 41.4 | 43.5 | 40.1 | 42.9 | 46.6 | 51.9 | 41.7 | 42.3 | 53.9 | 60.2 | 45.4 | 41.7 | 46.0 | 31.5 | 32.7 | 44.1
P-STMO [33] | 38.9 | 42.7 | 40.4 | 41.1 | 45.6 | 49.7 | 40.9 | 39.9 | 55.5 | 59.4 | 44.9 | 42.2 | 42.7 | 29.4 | 29.4 | 42.8
MixSTE [52] | 36.7 | 39.0 | 36.5 | 39.4 | 40.2 | 44.9 | 39.8 | 36.9 | 47.9 | 54.8 | 39.6 | 37.8 | 39.3 | 29.7 | 30.6 | 39.8
PoseFormerV2 [54] | - | - | - | - | - | - | - | - | - | - | - | - | - | - | - | 45.2
MHFormer [19] | 39.2 | 43.1 | 40.1 | 40.9 | 44.9 | 51.2 | 40.6 | 41.3 | 53.5 | 60.3 | 43.7 | 41.1 | 43.8 | 29.8 | 30.6 | 43.0
Diffpose [10] | 33.2 | 36.6 | 33.0 | 35.6 | 37.6 | 45.1 | 35.7 | 35.5 | 46.4 | 49.9 | 37.3 | 35.6 | 36.5 | 24.4 | 24.1 | 36.9
GLA-GCN [48] | 41.3 | 44.3 | 40.8 | 41.8 | 45.9 | 54.1 | 42.1 | 41.5 | 57.8 | 62.9 | 45.0 | 42.8 | 45.9 | 29.4 | 29.9 | 44.4
ActionPrompt [55] | 37.7 | 40.2 | 39.8 | 40.6 | 43.1 | 48.0 | 38.8 | 38.9 | 50.8 | 63.2 | 42.0 | 40.0 | 42.0 | 30.5 | 31.6 | 41.8
MotionBERT [59] | 36.1 | 37.5 | 35.8 | 32.1 | 40.3 | 46.3 | 36.1 | 35.3 | 46.9 | 53.9 | 39.5 | 36.3 | 35.8 | 25.1 | 25.3 | 37.5
D3DP [34] | 33.0 | 34.8 | 31.7 | 33.1 | 37.5 | 43.7 | 34.8 | 33.6 | 45.7 | 47.8 | 37.0 | 35.0 | 35.0 | 24.3 | 24.1 | 35.4
FinePOSE (Ours) | 31.4 (-1.6) | 31.5 (-3.3) | 28.8 (-2.9) | 29.7 (-2.4) | 34.3 (-3.2) | 36.5 (-7.2) | 29.2 (-5.6) | 30.0 (-3.6) | 42.0 (-3.7) | 42.5 (-5.3) | 33.3 (-3.7) | 31.9 (-3.1) | 31.4 (-3.6) | 22.6 (-1.7) | 22.7 (-1.4) | 31.9 (-3.5)
Table 3. Quantitative comparison with the state-of-the-art 3D human pose estimation methods on the MPI-INF-3DHP dataset using ground-truth 2D keypoints as inputs. N: the number of input frames. The best and second-best results are highlighted in bold and underlined formats.
Method | N | PCK\u2191 | AUC\u2191 | MPJPE \u2193 | Year
TCN [29] | 81 | 86.0 | 51.9 | 84.0 | CVPR\u201919
Anatomy [6] | 81 | 87.9 | 54.0 | 78.8 | CSVT\u201921
P-STMO [33] | 81 | 97.9 | 75.8 | 32.2 | ECCV\u201922
MixSTE [52] | 27 | 94.4 | 66.5 | 54.9 | CVPR\u201922
PoseFormerV2 [54] | 81 | 97.9 | 78.8 | 27.8 | CVPR\u201923
MHFormer [19] | 9 | 93.8 | 63.3 | 58.0 | CVPR\u201922
Diffpose [10] | 81 | 98.0 | 75.9 | 29.1 | CVPR\u201923
GLA-GCN [48] | 81 | 98.5 | 79.1 | 27.8 | ICCV\u201923
D3DP [34] | 243 | 98.0 | 79.1 | 28.1 | ICCV\u201923
FinePOSE (Ours) | 243 | 98.9 (+0.4) | 80.0 (+0.9) | 26.2 (-1.6) |
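For reference, the two Human3.6M metrics just defined can be computed as follows. This is a generic NumPy sketch of MPJPE and of P-MPJPE via similarity Procrustes alignment, not the authors' evaluation code.

```python
import numpy as np

def mpjpe(pred, gt):
    """Mean per-joint position error in mm; pred and gt have shape (J, 3)."""
    return float(np.linalg.norm(pred - gt, axis=-1).mean())

def p_mpjpe(pred, gt):
    """MPJPE after similarity (Procrustes) alignment of pred onto gt."""
    mu_p, mu_g = pred.mean(0), gt.mean(0)
    p, g = pred - mu_p, gt - mu_g
    u, s, vt = np.linalg.svd(p.T @ g)        # SVD of the cross-covariance
    sign = np.sign(np.linalg.det(u @ vt))    # guard against reflections
    s[-1] *= sign
    u[:, -1] *= sign
    rot = u @ vt                             # optimal rotation
    scale = s.sum() / (p ** 2).sum()         # optimal isotropic scale
    return mpjpe(scale * p @ rot + mu_g, gt)
```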
4.2. Implementation Details We take MixSTE [52] as the backbone of the denoiser D and CLIP as the frozen text encoder $\\mathcal{E}_{\\text{tx}}$. The numbers of MHSA-MLP-LN building blocks of the spatial, temporal, and spatio-temporal transformers in the FPC block are 1, 1, and 3, respectively. We train for 100 epochs in all the experiments below with a batch size of 4. We adopt the AdamW optimizer with momentum parameters \u03b21 = 0.9, \u03b22 = 0.999 and a weight decay of 0.1. The learning rate starts from 6e\u22125 and shrinks after each epoch by a factor of 0.993. For fair comparisons, we set the number of hypotheses H = 1 and iterations M = 1 during training, and H = 20 and M = 10 during inference, as in D3DP [34].
Table 4. Ablation study on different designs of prompt learning in the FPP block, on Human3.6M (DET). w/o Prompt: without any textual information and learnable prompts. M-Prompt: using the action class to design the prompt manually. S-Prompt: using a learnable prompt combined with the action class. C-Prompt: employing the action class and coarse-grained information to create the prompt. AL-Prompt: only learnable prompts without any manual design.
Method | MPJPE \u2193 | P-MPJPE \u2193
w/o Prompt | 37.2 | 29.1
M-Prompt | 35.8 | 28.1
S-Prompt | 36.2 | 28.9
C-Prompt | 34.7 | 27.4
AL-Prompt | 34.6 | 27.4
FinePOSE (Ours) | 31.9 | 25.0
4.3. Comparison with the State-of-the-Arts Human3.6M. Tab. 1 reports comparisons between our FinePOSE and state-of-the-art (SOTA) 3D HPE methods on the Human3.6M dataset. FinePOSE achieves new SOTA performance, especially when using detected 2D keypoints as inputs. Compared with existing 3D HPE methods, FinePOSE surpasses the SOTA method D3DP [34] by 3.5mm in MPJPE and 3.7mm in P-MPJPE. When using ground-truth 2D keypoints as inputs, FinePOSE also outperforms the SOTA method MotionBERT [59], improving MPJPE by 0.2mm. Tab. 2 provides detailed per-action comparisons using 2D keypoint detectors to generate the inputs. For example, our FinePOSE achieves a noticeable improvement (43.7mm\u219236.5mm) for the action class \u201cPhoto\u201d and decreases the average MPJPE by 3.5mm (35.4mm\u219231.9mm).
Table 5. Ablation study on different configurations of FinePOSE on Human3.6M using 2D keypoint detectors as inputs. Baseline: the method without any textual information via prompt learning. w FPP: the method only contains the FPP block and adds P[L] to the input. w/o FPP: the method without the FPP block, which leads to an infeasible FPC block. w/o FPC: the method without the FPC block. w/o PTS: the method without the PTS block.
Method | FPP | FPC | PTS | MPJPE \u2193 | P-MPJPE \u2193
Baseline | - | - | - | 37.2 | 29.1
w FPP | \u2713 | - | - | 35.3 | 28.0
w/o FPP | - | - | \u2713 | 37.1 | 29.2
w/o FPC | \u2713 | - | \u2713 | 35.7 | 27.8
w/o PTS | \u2713 | \u2713 | - | 36.6 | 29.0
FinePOSE (Ours) | \u2713 | \u2713 | \u2713 | 31.9 | 25.0
MPI-INF-3DHP. Tab. 3 reports comparisons between our FinePOSE and SOTA 3D HPE methods on the MPI-INF-3DHP dataset, using ground-truth 2D keypoints as inputs. Compared with the SOTA existing method GLA-GCN [48], FinePOSE decreases MPJPE by 1.6mm and increases PCK by 0.4% and AUC by 0.9%. Overall, these experimental results demonstrate that our FinePOSE benefits from fine-grained part-aware prompt learning and prompt-pose communications, resulting in higher denoising quality and estimation accuracy.
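As a concrete reading of the optimization settings in Sec. 4.2, a minimal PyTorch setup might look like the following; the `model` is a stand-in for the actual FinePOSE denoiser.

```python
import torch

model = torch.nn.Linear(2, 3)  # stand-in for the FinePOSE denoiser D
optimizer = torch.optim.AdamW(model.parameters(), lr=6e-5,
                              betas=(0.9, 0.999), weight_decay=0.1)
scheduler = torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma=0.993)

for epoch in range(100):
    # ... one epoch of minimizing ||Y_0 - D(Y_t, X, t, P)||^2 over batches of size 4 ...
    scheduler.step()  # shrink the learning rate by 0.993 after each epoch
```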
4.4. Ablation Study We conduct a series of ablation experiments with our FinePOSE on the Human3.6M dataset to investigate how different prompt learning designs in the FPP block and different blocks in FinePOSE affect performance. Effects of Different Designs in FPP. We design various versions of the FPP block for our FinePOSE, including a) w/o Prompt, b) M-Prompt, c) S-Prompt, d) C-Prompt, and e) AL-Prompt. Specifically, w/o Prompt denotes FinePOSE without introducing textual information and learnable prompts. M-Prompt indicates using the action class to design the prompt manually instead of the FPP block. Taking the action class \u201cDirections\u201d as an example, the manually designed prompt is \u201ca person is pointing directions with hands\u201d. There are 15 action classes available in the Human3.6M dataset, corresponding to 15 kinds of manually designed prompts. S-Prompt indicates utilizing learnable prompts combined with the action class. C-Prompt indicates employing the action class and coarse-grained information like \u201cperson\u201d and \u201cspeed\u201d to create the prompt. Finally, AL-Prompt means only using learnable prompts without any manual design. We first evaluate the effect of manually designed prompts (i.e., M-Prompt) on Human3.6M. As shown in Tab. 4, compared to w/o Prompt, M-Prompt achieves a decrease of 1.4mm in MPJPE and 1.0mm in P-MPJPE, indicating that manually designing prompts is a practical strategy even though it cannot guarantee the prompt is optimal during the denoising process for the 3D HPE task.
Table 6. Quantitative comparison with D3DP on the EgoHumans dataset using 2D keypoints as inputs. Tag., Lego, and Fenc. correspond to 3 action classes. Avg indicates the average MPJPE among the 3 action classes.
Method / MPJPE \u2193 | Tag. | Lego | Fenc. | Avg
D3DP [35] | 30.7 | 29.0 | 46.6 | 35.4
FinePOSE (Ours) | 30.0 (-0.7) | 26.7 (-2.3) | 46.2 (-0.4) | 34.3 (-1.1)
To evaluate the effectiveness of S-Prompt, we compare it with w/o Prompt. As shown in Tab. 4, MPJPE and P-MPJPE are reduced by 1.0mm and 0.2mm, respectively, for S-Prompt, which demonstrates that, with the help of learnable prompts, integrating textual information can improve performance on the 3D HPE task. Compared to M-Prompt, however, S-Prompt results in performance degradation, indicating that learnable prompts must be meticulously designed. In addition, we also investigate the impact of the degree of manual intervention on 3D HPE performance using two groups of comparative experiments. In the first group, we used only learnable prompts without any textual information or manual intervention, named AL-Prompt, which differs from S-Prompt only in dropping the action class. The second group designed a coarse-grained prompt involving the action class, \u201cperson\u201d, \u201cspeed\u201d, and the corresponding learnable prompts, denoted as C-Prompt. We see that both AL-Prompt and C-Prompt outperform S-Prompt, since AL-Prompt is free from interference by incomplete textual information and C-Prompt contains some important textual information like the action class, \u201cperson\u201d, and \u201cspeed\u201d, which provide the action subject and kinematic data. Finally, it is observed that our FinePOSE outperforms the various versions of prompt learning on both MPJPE and P-MPJPE, indicating the effectiveness of the fine-grained part-aware prompt learning mechanism in FinePOSE.
Effects of Different Blocks in FinePOSE. In Tab. 5, we provide different settings of our FinePOSE to evaluate the effects of different blocks on the 3D HPE performance, including Baseline, w FPP, w/o FPP, w/o FPC, and w/o PTS. Specifically, Baseline denotes FinePOSE without introducing textual information and learnable prompts, the same as the configuration of w/o Prompt. w FPP indicates that FinePOSE only contains the FPP block, without introducing the FPC and PTS blocks, and only adds the textual information P[L] to the input. w/o FPP denotes FinePOSE without the FPP block, which leaves the FPC block infeasible and only utilizes the PTS block. w/o FPC means FinePOSE without the FPC block but using the FPP and PTS blocks. w/o PTS refers to FinePOSE without the PTS block but using the FPP and FPC blocks to integrate textual information for fine-grained part-aware prompt learning.
[Figure 3: qualitative comparison panels of MotionBERT, D3DP, and FinePOSE on Human3.6M actions including SittingDown, WalkDog, Sitting, Purchases, Discussion, Photo, and Posing.] Figure 3. Qualitative comparisons of our FinePOSE with MotionBERT [59] and D3DP [34] on Human3.6M. The gray skeleton is the ground-truth 3D pose. The blue skeleton represents the prediction of the human left part, and the orange indicates the human right part. The red dashed line represents the incorrect regions of the compared methods, and the blue dashed line indicates the counterparts of FinePOSE.
Comparing w FPP with Baseline, we observe that the former achieves 1.9mm and 1.1mm improvements in MPJPE and P-MPJPE. This is because our FinePOSE contains the FPP block, which adds the prompt embedding P[L] to the input Zt of the denoiser D, significantly improving the denoising capability. We observe that the results of w/o FPP and Baseline are almost equivalent. The baseline has already brought the timestamp t into the denoising process, while the PTS block refines the prediction at each noise level by reusing the timestamp in the denoising process after the FPP and FPC blocks. Thus, there is nearly no effect in adding only the PTS block, without the FPP and FPC blocks, to the denoiser. Comparing w/o FPC with w/o FPP, the former achieves a decrease of 1.4mm on both MPJPE and P-MPJPE over w/o FPP, indicating that the FPP block in the denoiser plays a critical role in the fine-grained part-aware prompt learning mechanism. Finally, we observe that FinePOSE achieves a decrease of 4.7mm in MPJPE and 4.0mm in P-MPJPE compared to w/o PTS, indicating the necessity of integrating learned prompt embeddings and timestamps in the PTS block. 4.5. Results on 3D Multi-Human Pose Estimation In real-world applications, the multi-human scenario is more common than the single-human one. However, its complexity hinders existing work from handling it. In Sec. 3.4, we present a post-integration step to extend FinePOSE to the multi-human pose estimation task. We also implemented the extension on top of the SOTA method D3DP for a convincing comparison. The experimental results on EgoHumans are reported in Tab. 6, demonstrating that (1) the integration strategy indeed has potential feasibility and (2) FinePOSE has dominant performance even in the complex multi-human scenario. 4.6. Visualization Fig. 3 shows the visualization results of D3DP [35], MotionBERT [59] and our FinePOSE on Human3.6M. All methods perform well for actions in which the body, legs, and other parts of the person in the scene are relatively clear.
For the actions with simple shapes, e.g., \u201cDiscussion\u201d and \u201cPhoto\u201d, the 3D poses predicted by FinePOSE match the ground-truth 3D poses better than those of D3DP and MotionBERT, especially in the left knee, right arm, and right hip of \u201cDiscussion\u201d and in the left knee of \u201cPhoto\u201d. For the actions with complex shapes, e.g., \u201cSitting\u201d and \u201cSittingDown\u201d, FinePOSE is more accurate at various joints, especially for the arms and legs, while the 3D poses predicted by D3DP and MotionBERT differ significantly from the ground-truth 3D poses. 5. Conclusion and Discussion This work has presented FinePOSE, a new fine-grained prompt-driven denoiser for 3D human pose estimation. FinePOSE is composed of FPP, FPC, and PTS blocks. FPP learns fine-grained part-aware prompts to provide precise guidance for each human body part. FPC establishes fine-grained communication between learnable part-aware prompts and poses to enhance the denoising capability. PTS brings timestamp information into the denoising process, strengthening the ability to refine the prediction at each noise level. Experimental results on two benchmarks demonstrate that FinePOSE surpasses the state-of-the-art methods. We have also extended FinePOSE from single-human scenarios to multi-human ones, showing that our model performs well in complex multi-human scenarios. Limitations. FinePOSE is not designed explicitly for the multi-person scenario. The diffusion model-based 3D HPE method is relatively computationally expensive.", "additional_info": [ { "url": "http://arxiv.org/abs/2404.10859v1", "title": "Forcing Diffuse Distributions out of Language Models", "abstract": "Despite being trained specifically to follow user instructions, today's\nlanguage models perform poorly when instructed to produce random outputs. For\nexample, when prompted to pick a number uniformly between one and ten,\nLlama-2-13B-chat disproportionately favors the number five, and when tasked\nwith picking a first name at random, Mistral-7B-Instruct chooses Avery 40 times\nmore often than we would expect based on the U.S. population. When these\nlanguage models are used for real-world tasks where diversity of outputs is\ncrucial, such as language model assisted dataset construction, their inability\nto produce diffuse distributions over valid choices is a major hurdle. In this\nwork, we propose a fine-tuning method that encourages language models to output\ndistributions that are diffuse over valid outcomes. The methods we introduce\ngeneralize across a variety of tasks and distributions and make large language\nmodels practical for synthetic dataset generation with little human\nintervention.", "authors": "Yiming Zhang, Avi Schwarzschild, Nicholas Carlini, Zico Kolter, Daphne Ippolito", "published": "2024-04-16", "updated": "2024-04-16", "primary_cat": "cs.CL", "cats": [ "cs.CL", "cs.LG" ], "label": "Original Paper", "paper_cat": "Diffusion AND Model", "gt": "Consider a Dungeon Master (DM) trying to use a language model based chatbot to assist in managing their Dungeons & Dragons campaign. The DM asks the chatbot to suggest a random name for a character in the story. The first time she asks, it suggests \u201cAnya,\u201d and the second time it also suggests \u201cAnya.\u201d In fact, almost 40% of the time, the suggested name will be \u201cAnya\u201d even when the language model is deployed with full random sampling.
The DM then tries to use the chatbot to roll a twenty-sided die; over 60% of the dice rolls come up as a 14. Frustrated, the DM gives up and brings out their physical dice. Language models are extremely bad at producing random outputs when users want them to. Even when prompts are carefully constructed with instructions that encourage randomness, both state-of-the-art open-source and industry language models output very low-entropy distributions over the valid options. Beyond Dungeons & Dragons, there are many practical applications where diversity across valid options is crucial for language model outputs. For example, when language models are used to answer multiple choice or Likert-scale questions, a priori each option should be equally likely. When they are used for synthetic dataset construction, such as for synthetic biographies (Maini et al., 2024; Yuan et al., 2021) or instruction-tuning train sets (Wang et al., 2023), diversity in the generations is crucial but arduous to achieve through mere prompt hacking. In this work, we examine just how far language model generations are from user expectations of randomness and diversity. We then show how language models can be fine-tuned to produce diffuse distributions over valid options, without sacrificing generation quality. Our method supports tasks where the sample set of valid options is not easily enumerated, and we show that models fine-tuned to produce diffuse probabilities for one set of tasks generalize to other, very different tasks. This generalization allows us to promote diversity in complex settings, such as synthetic dataset generation. \u2217Correspondence: Yiming Zhang, yimingz3@cs.cmu.edu. 1 Code and data are available at https://github.com/y0mingzhang/diffuse-probabilities.
[Figure 1: empirical output distributions of baseline and tuned Gemma, Llama-2, and Mistral models; (a) BABY NAMES (top 10 names are shown), where baseline models concentrate probability mass on a few names such as Anya, Aurora, and Avery; (b) RANDOM NUMBER GENERATION over one to ten.] Figure 1: Language models do not produce diffuse probabilities. The output distributions of baseline Gemma, Llama-2, and Mistral models deviate from what we expect from natural/random distributions. Our tuning method addresses this issue by diffusing the output distribution over valid candidates. In each plot, the horizontal axis is sorted in descending order by probability of the specific output.
On the task of generating a dataset of synthetic biographies (similar to the dataset created by Maini et al. (2024) for benchmarking language model unlearning), our method generates four times as many unique first names, three times as many unique birth places, and 1.5 times as many unique careers as the baseline model, all without any need for complex prompt engineering, decoding strategy tweaking or manual re-writing.", "main_content": "We begin with a formal definition of diffuse probabilities from language models.
We then introduce techniques for measuring the diversity of model outputs, and we show how to quantify the differences between the observed output distributions and the desired distributions. 2.1 Problem setting Consider a vocabulary V = {1, 2, ..., n}. An autoregressive language model takes a sequence of $\\ell$ tokens $x\\in V^{\\ell}$ as input and outputs a probability distribution $p_\\theta(\\cdot\\mid x)\\in\\Delta(V)$ over all tokens in the vocabulary. We use $\\Delta(V)$ to denote the probability simplex over V. In generation, we are mostly interested in computing the probability of a multi-token target $y=[y_1, y_2, \\ldots, y_n]\\in V^n$ (e.g., y could be a two-digit number or the biography of a person). An autoregressive language model factors the probability of a sequence y into the product of probabilities of individual tokens: $p_\\theta(y\\mid x)=p_\\theta(y_1\\mid x)\\,p_\\theta(y_2\\mid x\\oplus y_1)\\cdots p_\\theta(y_n\\mid x\\oplus y_{<n})$. [...] to match the entropy of our fine-tuned models in Section 4.2. 7 Using birth data from the US Social Security Administration. 8 Coverage and entropy results are reported in Appendix B.1.
[Figure 2: entropy of random-number generation under distribution shift; (a) varying prompt formats and number ranges, plotting entropy against a random number in [1, x]; (b) varying sizes of the random sample space; Baseline, Tuned, and Ideal curves.] Figure 2: Models tuned on RANDOM NUMBER GENERATION demonstrate generalization to variations in both prompt format and number ranges. 95% confidence intervals are shown in the plots.
[...] and sample space differ between training and evaluation. We fine-tune Llama-2 to produce diffuse probabilities over two ranges of random integers\u2014one to ten and one to 100\u2014and test its generalization to different instruction formats and unseen sample spaces (see Appendix A.1). In Figure 2a, we observe encouraging generalization trends: the tuned model produces near-uniform distributions for unseen prompt formats over number ranges not in the fine-tuning set, for example from 1 to 45. When we vary the size of the random sample space of RNG (e.g., 154 to 204), the tuned model still produces substantially higher-entropy distributions than the baseline model (Figure 2b). With these first experiments, we demonstrate that tuning models for diffuse probabilities is a promising method for increasing generation diversity.
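Given the factorization above, the diversity statistics reported throughout (entropy and coverage of the empirical output distribution) can be estimated directly from samples. A small sketch, where `sample_model` is a hypothetical generation call:

```python
import math
from collections import Counter

def empirical_entropy(samples):
    """Shannon entropy (nats) of the empirical distribution over sampled outputs."""
    counts = Counter(samples)
    n = len(samples)
    return -sum((c / n) * math.log(c / n) for c in counts.values())

def coverage(samples):
    """Number of distinct outputs observed."""
    return len(set(samples))

# outputs = [sample_model("Pick a number between 1 and 10, completely at random.")
#            for _ in range(1000)]
# empirical_entropy(outputs) approaches math.log(10) for an ideal uniform sampler.
```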
5 Generalization Across Tasks It is not too surprising that a model optimized to output random numbers uniformly across some range can generalize to other ranges of numbers. A much more interesting and practical test is whether a model optimized for diffuse probabilities on one set of tasks can transfer to tasks with very different sample spaces. That is, a model optimized for picking a random number or baby names should not overfit to generating samples from those distributions; rather, it should be able to e.g. pick a random fruit or country name when prompted to do so. We use leave-one-out experiments on a set of six tasks to show that models tuned for diffuse probabilities do, in fact, have strong transferability to tasks unseen during tuning. We consider the following six tasks. Each task has several associated prompts, one of which is shown below: 1. BABY NAMES: \u201cPlease generate an English first name, chosen completely at random.\u201d 2. COUNTRIES: \u201cOutput a random country in Africa, chosen completely at random.\u201d 3. FRUITS: \u201cOutput a name of a fruit, chosen completely at random.\u201d 4. DAYS AND DATES: \u201cProvide a random date in June.\u201d 5. NUMBERS: \u201cRandomly pick a prime number between 1 and 50.\u201d 6. OCCUPATIONS: \u201cOutput an occupation that starts with the letter \u201cA\u201d.\u201d By fine-tuning on five out of the six tasks listed above and evaluating performance on the sixth, we can measure the ability of our method to handle out-of-distribution tasks. In Figure 3, we report in-distribution (ID) and out-of-distribution (OOD) results, which correspond to tuning sets that include or exclude the particular task, respectively. The results show a convincing trend: for all three models, our tuning method led to substantial improvements in entropy over the baselines, even when the task was held out from the tuning set.9 Another interesting observation is that the baseline Mistral model consistently produces more diffuse distributions than the baseline Gemma and Llama-2 models, but after tuning, all three models have comparable entropy. 9 Coverage results show similar trends, and we report them in Table 5, Appendix B.2.
[Figure 3: entropy in leave-one-out generalization across the six tasks (Baby Names, Countries, Fruits, Dates, Numbers, Occupations) for Gemma, Llama-2, and Mistral, comparing Baseline, Ours (OOD), Ours (ID), and the ideal entropy.] Figure 3: Entropy in leave-one-out generalization. The title of each plot indicates which set of tasks we compute entropy over.
In two of the tasks (COUNTRIES and FRUITS), we observe sizable generalization gaps between in-distribution and out-of-distribution entropy, which suggests that task-specific tuning remains useful, especially when we can come up with a reasonably diverse set of generation targets for fine-tuning. However, coming up with a large enough target set isn\u2019t always easy. In these cases, we rely on the generalization of the models trained on a diverse set of tasks. For example, in OCCUPATIONS, the authors could only come up with a small set of 17 professions that start with the letter \u201cA,\u201d and Llama-2 and Mistral (not trained on OCCUPATIONS) generalize out-of-distribution to occupations beyond what we provide in the fine-tuning set.10 Notably, our fine-tuning method does not substantially change the general capabilities (e.g., writing and reasoning) of the models, as demonstrated by evaluations on MT-Bench (see Appendix B.5), making our method compatible with tuning general-purpose large language models. 6 Constructing More Diverse Synthetic Datasets The leave-one-out experiments show that fine-tuning for diffuse outputs leads to task generalization. This is an important trait for real-world applications such as synthetic dataset construction, where it might not be feasible to tune on data that is identically formatted to what we would like to synthesize. In this section, we evaluate how well a model tuned on the six tasks from Section 5 performs on a realistic dataset creation task\u2014building a synthetic dataset of fictional biographies. Inspired by the synthetic datasets created in Maini et al. (2024) and Yuan et al. (2021), these biographies include the following attributes: first and last name, gender, birth year, birth place, profession and a description of the person\u2019s achievements.
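To ground the attribute list above, here is one hypothetical way to represent a generated biography and compute the per-attribute coverage reported below; the field names are assumptions mirroring the listed attributes, not the paper's schema.

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class Biography:
    first_name: str
    last_name: str
    gender: str
    birth_year: int
    birth_place: str
    profession: str
    achievements: str  # open-ended free text

def attribute_coverage(bios, attr):
    """Coverage of one categorical attribute = number of distinct generated values."""
    return len(Counter(getattr(b, attr) for b in bios))

# e.g., attribute_coverage(bios, "first_name") over 1000 generated biographies.
```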
We show how our method results in biographical details that are much more diverse than those generated by the baseline models. 10 For example, Llama-2 generated \u201cAromatherapist\u201d, and Mistral generated \u201cAgronomist.\u201d Neither is among the fine-tuning targets of OCCUPATIONS.
[Figure 4: the fine-tuned Llama-2 model improves the diversity of synthetic biographies; (a) coverage results for the categorical attributes (First name, Last name, Birth year, Birth place, Career) for Baseline, Tuned-OOD, and Tuned-ID; (b) normalized unigram diversity of the generated achievements and of the entire biography.] Figure 4: Fine-tuned Llama-2 model improves the diversity of synthetic biographies. We report coverage for categorical attributes in 4a and normalized unigram diversity of generated achievements and the entire biography in 4b.
We first consider a Llama-2-13B model fine-tuned on all tasks in Section 3 and not specifically for biographies (Tuned-OOD), and compare its generations against the baseline Llama-2 model over 1000 samples. In Figure 4, we report coverage results for categorical attributes (e.g., Birth year) and normalized unigram diversity for achievements and over the entire biography.11 In Table 1, we report the most frequently generated values for categorical attributes, along with the frequencies out of 1000 generations.12 The results on the baseline Llama-2 model indicate that the biases we observe in Section 4.1 towards certain names and numbers expectedly show up in generated data. For example, out of 1000 biographies generated by the baseline Llama-2 model, 284 have the first name \u201cEvelyn,\u201d and 966 are female. Such a high level of repetition in the generation makes the resulting dataset basically unusable for any downstream task without substantial human intervention. In contrast, our model significantly improves the diversity of generated biographies, despite not being trained specifically for generating biographies. For example, we see a more than 2X increase in coverage for most categorical attributes. There are also substantial improvements in generation diversity for achievements and over the entire biography (Figure 4b), although our training did not optimize for open-ended text generation. The top five most frequent values for the attributes (Table 1) help contextualize this improvement in diversity: there are significant reductions in biases towards certain attribute values. E.g., the frequency of the name \u201cEvelyn\u201d decreased by 25X, and the birth year 1985 by over 3X. 6.1 Controlling distributions of categorical attributes Despite this improvement over the base model, certain biases, as seen in the high frequencies of female biographies (76.2%) and the birth year 1985 (21.1%), still persist, which could limit the utility of the dataset. Our fine-tuning method can in fact be directly applied to balance the distribution of categorical attributes. As a proof of concept, we create a target set containing 210 programmatically generated tuples of categorical attributes, with roughly balanced gender, birth year and birth place distributions, without the open-ended achievement descriptions. We then fine-tune a Llama-2 model (Tuned-ID) only on categorical attributes and evaluate another sample of 1000 biographies.
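A sketch of how such a balanced target set could be generated programmatically. The attribute pools below are invented for illustration (the paper does not list its pools here); only the balancing idea is the point.

```python
import itertools
import random

# Invented pools for illustration only.
genders = ["male", "female", "non-binary"]
birth_years = list(range(1910, 1990))
birth_places = ["Rabat, MA", "Tainan, TW", "Budapest, HU", "Choloma, HN", "Rajshahi, BD"]

# The Cartesian product is balanced by construction; uniform subsampling keeps it roughly so.
grid = list(itertools.product(genders, birth_years, birth_places))
random.seed(0)
target_set = random.sample(grid, k=210)
```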
At a comparable level of generation diversity to the Tuned-OOD model (Figure 4), the Tuned-ID model is able to generate biographies with much more balanced distributions of categorical attributes: Table 1 shows that both gender and birth year are roughly uniformly distributed, and a wider range of birth places is produced by the model.13 11 We report the prompt used for biography generation and side-by-side qualitative examples in Appendix A. 12 A table of coverage and entropy statistics of generated biographies can be found in Appendix B.3.
Table 1: The most frequently generated values for each attribute, along with the number of times the value was generated (out of 1000 generations). Despite being tuned on out-of-domain tasks, the tuned LLaMA significantly improves diversity. Some values in the table have been truncated for brevity.
Baseline Llama-2:
First name | Last name | Gender | Birth year | Birth place | Career
Evelyn 284 | Nightingale 117 | F 966 | 1985 764 | Paris, FR 305 | Astronaut 211
Luna 155 | Aurora 104 | NB 17 | 1987 46 | Tokyo, JP 267 | Astro. Engineer 66
Elara 87 | Nova 98 | M 13 | 1992 44 | Stockholm, SE 33 | Aero. Engineer 55
Adriana 42 | Starling 53 | - | 1978 36 | Mumbai, IN 32 | Env. Activist 42
Aurora 38 | Stardust 41 | - | 1975 31 | Singapore, SG 32 | Astrophysicist 32
Fine-tuned Llama-2 (OOD):
Luna 32 | Nightingale 16 | F 762 | 1985 211 | Mumbai, IN 35 | Astronaut 96
Zelda 14 | Nightshade 12 | M 189 | 1992 99 | Lagos, NG 31 | Aero. Engineer 50
Mila 14 | Chen 8 | NB 31 | 1987 77 | Paris, FR 29 | Soft. Engineer 47
Evelyn 11 | Orion 6 | - | 1988 61 | Tokyo, JP 27 | Env. Activist 35
Althea 9 | Sparks 6 | - | 1990 52 | Nairobi, KE 21 | Journalist 34
Fine-tuned Llama-2 (ID):
Hava 14 | Kim 17 | F 487 | 1921 39 | Choloma, HN 16 | Architect 140
Maria 14 | Mohammed 16 | M 478 | 1931 36 | Rabat, MA 13 | Journalist 74
Juan 14 | Khan 13 | NB 34 | 1942 35 | Tainan, TW 10 | Politician 35
Valcin 13 | Abed 13 | - | 1916 30 | Rajshahi, BD 10 | Archaeologist 35
Issaka 12 | Salah 12 | - | 1984 29 | Budapest, HU 10 | Mar. Biologist 26
Crucially, the model remains highly diverse in open-ended generation of achievements even when being trained exclusively on categorical attributes. This result highlights the potential of extending our fine-tuning method towards improving diversity in open-ended text generation. 7 Related Work Diversity in Text Generation The lack of diversity has been a long-standing issue in generation (Tevet & Berant, 2021) due to the tension between generation quality and diversity (Zhang et al., 2020): sampling at low temperature causes boring and repetitive text, while sampling at higher temperatures can lead to nonsensical output. Much of the existing literature approaches the problem by coming up with new decoding strategies, which often involve either shaping the model distribution (e.g., top-p sampling (Holtzman et al., 2020) and top-k sampling (Fan et al., 2018)) or imposing diversity-promoting constraints during decoding (Li et al., 2016; Vijayakumar et al., 2018) or training (Welleck et al., 2019; Edunov et al., 2018). Our setting is distinct from prior work in that we already assume a \u201cmaximally\u201d diverse decoding strategy (i.e., sampling from the model with temperature one), and language models still fail to produce diverse outputs. Language Model for Dataset Creation As more capable language models emerge (Touvron et al., 2023; Jiang et al., 2023; Google, 2024), language model-based dataset creation becomes increasingly practical.
Prior work in this area has largely focused on generating specialized data for augmenting NLP tasks (Ye et al., 2022b), including semantic similarity (Schick & Schütze, 2021), relationship extraction (Chia et al., 2022), natural language understanding (Meng et al., 2022) and instruction following (Honovich et al., 2022). An LM-based dataset creation pipeline is usually an iterative and arduous process (Ye et al., 2022a), in which significant human intervention is needed to ensure the model generates diverse and high-quality data (Yuan et al., 2021; Liu et al., 2022; Maini et al., 2024). Our work aims to make progress towards the goal of automating data creation by improving the diversity of language model generation and thereby reducing the need for human intervention.

13 See maps of generated birth places for all three models in Figure 5, Appendix B.

8 Concluding Discussion and Future Directions

In this work, we propose a method for fine-tuning language models to generate diffuse probability distributions, and show that this method leads to sizable and transferable improvements in generation diversity. We showcase a practical application of our method in synthetic dataset generation, demonstrating improvements in the quality of generated data by a large margin, with or without task-specific tuning. Our experiments reveal interesting insights into the surprising capability of language models to learn diffuse distributions and generalize to new prompts and output spaces. An important direction for future work is the application of our method (and distribution matching techniques in general) to debiasing language models (Liang et al., 2021), which are shown to be rife with harmful stereotypes (Bolukbasi et al., 2016). Given the strong generalization properties of our method, it is plausible that aligning language models with an ideal distribution on representative instances could lead to generally less biased models. Future work should further explore the limits of this generalization, especially in the context of improving the diversity of open-ended generation, where the output spaces are much less structured. Although our method is independent of the fine-tuning procedure, we find LoRA (Hu et al., 2021) to be substantially more efficient and effective compared to other techniques such as prefix-tuning (Li & Liang, 2021) and discrete prompt search (Zou et al., 2023). Since they could potentially guide closed-source models to generate more diverse outputs, identifying diversity-inducing discrete prompts is a particularly interesting research question, which we leave for future work." + }, + { + "url": "http://arxiv.org/abs/2404.10335v2", + "title": "Efficiently Adversarial Examples Generation for Visual-Language Models under Targeted Transfer Scenarios using Diffusion Models", + "abstract": "Targeted transfer-based attacks involving adversarial examples pose a\nsignificant threat to large visual-language models (VLMs). However, the\nstate-of-the-art (SOTA) transfer-based attacks incur high costs due to\nexcessive iteration counts. Furthermore, the generated adversarial examples\nexhibit pronounced adversarial noise and demonstrate limited efficacy in\nevading defense methods such as DiffPure. To address these issues, inspired by\nscore matching, we introduce AdvDiffVLM, which utilizes diffusion models to\ngenerate natural, unrestricted adversarial examples.
Specifically, AdvDiffVLM\nemploys Adaptive Ensemble Gradient Estimation to modify the score during the\ndiffusion model's reverse generation process, ensuring the adversarial examples\nproduced contain natural adversarial semantics and thus possess enhanced\ntransferability. Simultaneously, to enhance the quality of adversarial examples\nfurther, we employ the GradCAM-guided Mask method to disperse adversarial\nsemantics throughout the image, rather than concentrating them in a specific\narea. Experimental results demonstrate that our method achieves a speedup\nranging from 10X to 30X compared to existing transfer-based attack methods,\nwhile maintaining superior quality of adversarial examples. Additionally, the\ngenerated adversarial examples possess strong transferability and exhibit\nincreased robustness against adversarial defense methods. Notably, AdvDiffVLM\ncan successfully attack commercial VLMs, including GPT-4V, in a black-box\nmanner.", + "authors": "Qi Guo, Shanmin Pang, Xiaojun Jia, Qing Guo", + "published": "2024-04-16", + "updated": "2024-04-18", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "label": "Original Paper", + "paper_cat": "Diffusion AND Model", + "gt": "Large VLMs have demonstrated significant success in tasks such as image-to-text generation [12\u201314] and text-to-image generation [2, 21]. Particularly in image-to-text generation, users can use images to generate executable commands for robot control, with potential applications in autonomous driving systems, assistance systems for the visually impaired, and content moderation systems. Errors in these applications can lead to severe security risks, jeopardizing the lives and property of individuals. Consequently, it is crucial to assess the adversarial robustness of these VLMs. Recent studies [1, 11] have explored the adversarial robustness of VLMs, primarily focusing on untargeted and white-box scenarios. However, the more realistic settings of black-box and targeted scenarios have not received adequate attention. AttackVLM [31] employs a query-based attack method and incorporates transfer-based priors to prompt black-box VLMs to produce targeted responses. However, due to the substantial number of queries required, this process is time-consuming, typically requiring several hours to generate an adversarial example. Consequently, we consider an alternative black-box attack method, namely, transfer-based attacks. As shown in Figure 1, the current SOTA transfer-based attacks are also slow in generating adversarial examples and less effective in evading adversarial defense methods. Additionally, the adversarial examples generated by these methods exhibit significant noise. To address these problems, inspired by score matching [26] and unrestricted adversarial examples [25], we propose AdvDiffVLM, which uses diffusion models to generate natural unrestricted adversarial examples. Specifically, we leverage and modify the reverse generation process of the pre-trained diffusion models, where we utilize Adaptive Ensemble Gradient Estimation to change the score and embed target semantics into adversarial examples. To enhance the naturalness of the output, we introduce the GradCAM-guided Mask, which disperses the adversarial target semantics across adversarial examples, preventing the model from generating adversarial examples in specific areas and thereby improving image quality.
Our method requires only a few steps of backward denoising to generate adversarial examples, making it significantly faster than current transfer-based methods. Furthermore, AdvDiffVLM generates adversarial examples through denoising, exhibiting greater robustness to defense methods. We summarize our contributions as follows: 1) We provide a comprehensive evaluation of the robustness of VLMs against the SOTA transfer-based attacks in targeted and transfer scenarios; 2) We introduce AdvDiffVLM, which utilizes Adaptive Ensemble Gradient Estimation and the GradCAM-guided Mask to produce natural, unrestricted adversarial examples; 3) Experimental results demonstrate that AdvDiffVLM achieves speedups of an order of magnitude or greater over previously published attackers, while delivering adversarial examples with superior image quality. Moreover, these adversarial examples exhibit robust transferability across VLMs and significant robustness against defense methods; 4) Our method can successfully attack commercial VLMs such as GPT-4V in black-box scenarios (https://chat.openai.com/).

Figure 1: Comparison of different transfer-based attacks and our method on VLMs. (a) Comparison of attack performance. We select BLIP2 [12] as the representative model of VLMs. We report the CLIP_tar score, which is the similarity between the response generated by the input images (Adversarial Examples or Purified Examples) and the pre-defined adversarial target texts (see Appendix 4.3 for specific calculation methods). Purified Examples represent the adversarial examples purified by DiffPure [19]. (b) Comparison of image quality. We enlarge a local area of the adversarial examples to enhance visual effects. It is evident that adversarial examples generated by transfer-based attacks exhibit notable noise; our method has better visual effects. Magnify images for improved contrast.", + "main_content": "Adversarial attack methods are categorized into white-box and black-box attacks based on adversary knowledge, and into targeted and untargeted attacks based on adversary goals. There have been studies examining the robustness of VLMs, specifically addressing adversarial challenges in visual question answering [31] and image captioning [1]. However, most investigations focus on traditional CNN-RNN-based models, with assumptions of either white-box access or untargeted goals, limiting their applicability in real-world scenarios. Recently, AttackVLM [31] implemented both transfer-based and query-based attacks on large open-source VLMs, under the assumption of black-box access and targeted goals. Nevertheless, this approach is time-intensive, owing to its dependence on numerous VLM queries. Additionally, [6] investigated VLM adversarial robustness via ensemble transfer-based attacks, albeit assuming untargeted goals. In this study, we explore the adversarial robustness of VLMs under targeted transfer-based attacks. Initially, we assess VLM robustness against current SOTA transfer-based attacks, in conjunction with AttackVLM. Subsequently, we analyze the limitations of current methods and implement targeted improvements, culminating in the proposal of AdvDiffVLM.

2.2 Unrestricted Adversarial Examples

Due to the inadequacy of the $\ell_p$ norm distance in capturing human perception, there has been increasing interest among researchers in unrestricted adversarial examples in recent years.
Some approaches employ generative methods to produce unrestricted adversarial examples. For instance, [25, 32] perturbed the latent representation of GANs to generate unrestricted adversarial examples. However, due to the limited interpretability of GANs, the generated adversarial examples exhibit poor quality. Diffusion models [10] are SOTA likelihood-based generative models with theoretical foundations, sampling the data distribution with high fidelity and diversity. AdvDiffuser [5] integrated the PGD [17] method into the reverse process of the diffusion model, yielding high-quality unrestricted adversarial examples. In this study, we explore using the diffusion model to generate unrestricted adversarial examples, focusing on modifying the score in the diffusion model's reverse process rather than adding noise to the latent image. More details on diffusion models are described in Appendix 2, and the code is available in the Supplementary Material.

Figure 2: The CLIP_img score varies with the number of iterations, where CLIP_img is the similarity between the adversarial examples and the adversarial target images, computed by the visual encoder of CLIP ViT-B/32. We choose SSA [15] as the representative of transfer-based attacks.

3 PRELIMINARIES

3.1 Problem Settings

We denote the victim VLM as $f_\xi$ and aim to induce $f_\xi$ to output the target response. This can be formalized as

$$\max \; CS\big(g_\psi(f_\xi(\boldsymbol{x}_{\mathrm{adv}}; \boldsymbol{c}_{\mathrm{in}})),\, g_\psi(\boldsymbol{c}_{\mathrm{tar}})\big) \quad \text{s.t.} \quad D(\boldsymbol{x}, \boldsymbol{x}_{\mathrm{adv}}) \le \delta \tag{1}$$

where $\boldsymbol{x} \in \mathbb{R}^{3 \times H \times W}$ represents the original image, $\boldsymbol{x}_{\mathrm{adv}}$ and $\boldsymbol{c}_{\mathrm{tar}}$ respectively refer to the adversarial example and the adversarial target text, and $g_\psi(\cdot)$ denotes the CLIP text encoder. Besides, $D(\boldsymbol{x}, \boldsymbol{x}_{\mathrm{adv}}) \le \delta$ places a bound on a distance metric, and $CS(\cdot, \cdot)$ refers to the cosine similarity metric. Finally, $\boldsymbol{c}_{\mathrm{in}}$ and $\boldsymbol{c}_{\mathrm{out}}$ denote the input text and the output text, respectively. Since $f_\xi$ is a black-box model, we generate adversarial examples on a surrogate model $\varphi_\psi$ and subsequently transfer them to $f_\xi$. In addition, inspired by [31], since matching image-image features can lead to better results, we define the problem as

$$\max \; CS\big(\varphi_\psi(\boldsymbol{x}_{\mathrm{adv}}),\, \varphi_\psi(\boldsymbol{x}_{\mathrm{tar}})\big) \quad \text{s.t.} \quad D(\boldsymbol{x}, \boldsymbol{x}_{\mathrm{adv}}) \le \delta \tag{2}$$

where $\boldsymbol{x}_{\mathrm{tar}}$ represents the target image generated from $\boldsymbol{c}_{\mathrm{tar}}$. We use Stable Diffusion [21] to implement the text-to-image generation, and $\varphi_\psi$ refers to the CLIP image encoder. Our study considers the most realistic and challenging attack scenarios, i.e., targeted and transfer scenarios.

3.2 Rethinking Transfer-based Attacks

Transfer-based attacks can effectively solve Eq. 2. In this context, we assess the robustness of VLMs against current SOTA transfer-based attacks, in conjunction with AttackVLM. Specifically, we consider ensemble attacks such as Ens [7], SVRE [29], and CWA [4], as well as data augmentation attacks like SSA [15] and SIA [27], and combinations of these techniques.
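Before analyzing these attacks, here is a minimal PyTorch sketch of the shared surrogate objective in Eq. (2) that all of them optimize. The `clip_image_encoder` argument stands in for a frozen CLIP image encoder $\varphi_\psi$; the toy encoder in the usage example is an assumption purely so the snippet runs, not the paper's implementation:

```python
import torch
import torch.nn.functional as F

def eq2_objective(x_adv, x_tar, clip_image_encoder):
    """Cosine similarity CS(phi(x_adv), phi(x_tar)) from Eq. (2).

    clip_image_encoder: any callable mapping an image batch to a feature
    batch (e.g., a frozen CLIP visual tower); hypothetical placeholder here.
    """
    f_adv = clip_image_encoder(x_adv)
    f_tar = clip_image_encoder(x_tar)
    return F.cosine_similarity(f_adv, f_tar, dim=-1).mean()

# Gradient of the objective w.r.t. the image, as used to steer the attack:
x_adv = torch.randn(1, 3, 224, 224, requires_grad=True)
x_tar = torch.randn(1, 3, 224, 224)
toy_encoder = torch.nn.Sequential(torch.nn.Flatten(),
                                  torch.nn.Linear(3 * 224 * 224, 64))
loss = eq2_objective(x_adv, x_tar, toy_encoder)
grad = torch.autograd.grad(loss, x_adv)[0]
print(grad.shape)
```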
We primarily employ the simple ensemble version of the data augmentation attacks, as relying on a single surrogate model tends to yield poor performance. A detailed introduction to these methods, along with their hyper-parameter settings, is provided in Appendix 4.2. The outcomes of these transfer-based attacks on VLMs are depicted in Figure 1. As illustrated there, current transfer-based attacks face challenges such as slow adversarial example generation, noticeable noise within these examples, and limited capability to evade adversarial defense methods. The limitations of existing transfer-based attacks on VLMs are analyzed as follows. First, existing SOTA transfer-based attacks only access the original image during the optimization of Eq. 2. Consequently, they employ small steps and strategies like data augmentation to tentatively approach the optimal solution, necessitating numerous iterations and resulting in high attack costs. As shown in Figure 2, using a larger step size results in pronounced fluctuations during the optimization process. This issue may be mitigated by leveraging the score, which provides insights into the data distribution: by offering score guidance towards solving Eq. 2, quicker convergence is expected. Therefore, score information can be considered in the design of a new, improved attack method. Second, existing transfer-based attacks introduce high-frequency additive noise, which can be readily countered by adversarial defense methods. Unrestricted adversarial examples [23] have proven effective at bypassing defense methods, suggesting that new transfer-based attacks could adopt this approach.

4 METHODOLOGY

In this section, we present adversarial attacks from the perspective of score matching and then offer a comprehensive description of the proposed AdvDiffVLM. Finally, we delineate the distinctions between our method and AdvDiffuser. The complete workflow of AdvDiffVLM is illustrated in Figure 3.

Figure 3: An overview of AdvDiffVLM for efficiently generating transferable unrestricted adversarial examples. AdvDiffVLM mainly includes two components: Adaptive Ensemble Gradient Estimation and GradCAM-guided Mask, described in Secs. 4.2 and 4.3, respectively. Please refer to Section 4 for specific symbol meanings.

4.1 Theoretical Background

We are focused on modeling adversarial attacks from a generative perspective, considering how to utilize the data distribution (score) of the generative model to generate natural, unrestricted adversarial examples. Additionally, as indicated in [16], learning to model the score function is equivalent to modeling the negative of the noise, suggesting that score matching and denoising are equivalent processes. Thus, our method derives from integrating diffusion models and score matching, positioning it as a novel approach for generating high-quality, unrestricted adversarial examples. Formally, during the reverse generation process we want to obtain a distribution under which the adversarial example carries the target semantic information:

$$p\big(x_{t-1} \mid x_t,\, f_\xi(\boldsymbol{x}_{\mathrm{adv}}; \boldsymbol{c}_{\mathrm{in}}) = \boldsymbol{c}_{\mathrm{tar}}\big) \tag{3}$$

where $x_t$ represents the latent image of the diffusion model. To this end, we start from the perspective of score matching [26] and consider the score $\nabla \log p(x_{t-1} \mid x_t, c_{\mathrm{tar}})$ of this distribution, where $\nabla$ is an abbreviation for $\nabla_{x_t}$. According to Bayes' theorem (see Appendix 3.1 for detailed explanations),

$$\begin{aligned}
\nabla \log p(x_{t-1} \mid x_t, c_{\mathrm{tar}}) &= \nabla \log \left( \frac{p(c_{\mathrm{tar}} \mid x_{t-1}, x_t)\, p(x_{t-1} \mid x_t)}{p(c_{\mathrm{tar}} \mid x_t)} \right) \\
&= \nabla \log p(c_{\mathrm{tar}} \mid x_{t-1}, x_t) + \nabla \log p(x_{t-1} \mid x_t) - \nabla \log p(c_{\mathrm{tar}} \mid x_t) \\
&= \nabla \log p(c_{\mathrm{tar}} \mid x_{t-1}) + \nabla \log p(x_t \mid x_{t-1}, c_{\mathrm{tar}}) - \nabla \log p(x_t \mid x_{t-1}) \\
&\qquad + \nabla \log p(x_{t-1} \mid x_t) - \nabla \log p(c_{\mathrm{tar}} \mid x_t) \\
&= \nabla \log p(x_t \mid x_{t-1}, c_{\mathrm{tar}}) - \nabla \log p(x_t \mid x_{t-1}) + \nabla \log p(x_{t-1} \mid x_t) - \nabla \log p(c_{\mathrm{tar}} \mid x_t)
\end{aligned} \tag{4}$$

$p(x_t \mid x_{t-1}, c_{\mathrm{tar}})$ and $p(x_t \mid x_{t-1})$ respectively denote the noising process with the target text and the noising process devoid of target semantics. From an intuitive standpoint, whether the target text is present or not, the forward noising process follows a Gaussian distribution and the added noise remains consistent, indicating that the gradient solely depends on $x_t$. The difference between $x_t$ without the target text and $x_t$ with the target text is minimal, as constraints are employed to ensure minimal variation of the adversarial example from the original sample. Therefore, $\nabla \log p(x_t \mid x_{t-1}, c_{\mathrm{tar}})$ and $\nabla \log p(x_t \mid x_{t-1})$ are approximately equal, so the score reduces to $\nabla \log p(x_{t-1} \mid x_t) - \nabla \log p(c_{\mathrm{tar}} \mid x_t)$. Because score matching and denoising are equivalent processes, i.e., $\nabla \log p(x_t) = -\tfrac{1}{\sqrt{1-\bar{\alpha}_t}}\,\epsilon_\theta$, we obtain the score $\nabla \log p(x_{t-1} \mid x_t, c_{\mathrm{tar}})$ as

$$\mathrm{score} = -\left( \frac{\epsilon_\theta(x_t)}{\sqrt{1-\bar{\alpha}_t}} + \nabla \log p_{f_\xi}(c_{\mathrm{tar}} \mid x_t) \right) \tag{5}$$

where $\epsilon_\theta$ represents the noise predictor of the diffusion model, and $\bar{\alpha}_t$ denotes the noise-schedule hyperparameters of the diffusion model.
Eq. 5 demonstrates that the score of $p(x_{t-1} \mid x_t, c_{\mathrm{tar}})$ can be derived by incorporating gradient information into the inverse process of the diffusion model. Consequently, adversarial semantics can be incrementally embedded into adversarial examples based on the principle of score matching.

4.2 Adaptive Ensemble Gradient Estimation

Since $f_\xi$ is a black-box model whose gradient information cannot be obtained, we use a surrogate model to estimate $\nabla \log p_{f_\xi}(c_{\mathrm{tar}} \mid x_t)$. As a scalable method for learning joint representations between text and images, CLIP [20] can leverage pre-trained models to establish a bridge between images and text. Therefore, we use the CLIP model as the surrogate model to estimate the gradient. Specifically, we first add noise to the original image $\boldsymbol{x}$ for $t^*$ steps through the forward process $q(x_{t^*} \mid x_0)$ to obtain $x_{t^*}$, where $x_0 = \boldsymbol{x}$. Then, at each step of the reverse process, we change the score:

$$\mathrm{score} = -\left( \frac{1}{\sqrt{1-\bar{\alpha}_t}}\, \epsilon_\theta(\tilde{x}_t) + s\, \nabla_{\tilde{x}_t}\, CS\big(\varphi_\psi(\tilde{x}_t),\, \varphi_\psi(\boldsymbol{x}_{\mathrm{tar}})\big) \right) \tag{6}$$

where $s$ is the adversarial gradient scale used to control the degree of score change and $\tilde{x}_t$ is the latent image in the inverse process. We find that gradient estimation using only a single surrogate model is inaccurate. Therefore, we consider using a set of surrogate models $\{\varphi^i_\psi\}_{i=1}^{N_m}$ to better estimate the gradient. Specifically, we make the following improvement to Eq. 6:

$$\mathrm{score} = -\left( \frac{\epsilon_\theta(\tilde{x}_t)}{\sqrt{1-\bar{\alpha}_t}} + s\, \nabla_{\tilde{x}_t} \sum_{i=1}^{N_m} w_i\, CS\big(\varphi^i_\psi(\tilde{x}_t),\, \varphi^i_\psi(\boldsymbol{x}_{\mathrm{tar}})\big) \right) \tag{7}$$

where $\mathbf{w} = (w_1, w_2, \cdots, w_{N_m})$ represents the weights of the cosine losses of the different models. Since different images have different sensitivities to the surrogate models, a simple ensemble cannot obtain the optimal solution. Inspired by [3] (see Appendix 3.2), we propose a new adaptive ensemble method and obtain $\mathbf{w}$ in Eq. 7 as follows:

$$w_i(t) = \frac{\sum_{j=1}^{N_m} \exp\big(\tau\, L_j(t+1)/L_j(t+2)\big)}{N_m \exp\big(\tau\, L_i(t+1)/L_i(t+2)\big)} \tag{8}$$

where $\tau$ refers to the temperature (a larger $\tau$ makes all weights close to 1) and $L_i = CS\big(\varphi^i_\psi(\tilde{x}_t), \varphi^i_\psi(\boldsymbol{x}_{\mathrm{tar}})\big)$. We initialize $\{w_i(t^*)\}_{i=1}^{N_m}$ and $\{w_i(t^*-1)\}_{i=1}^{N_m}$ to 1. Through Eq. 8, we reduce the weight of surrogate models with fast-changing losses to ensure that the gradient estimations of the different surrogate models are updated simultaneously.
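To make Eqs. (7)-(8) concrete, the following is a minimal sketch of the adaptive weight update and the modified score. The loss-history layout and function names are illustrative assumptions; `eps_pred` and `grad_cs` stand in for the diffusion model's noise prediction and the weighted ensemble gradient:

```python
import math

def adaptive_weights(loss_hist, tau=2.0):
    """Eq. (8): down-weight surrogates whose cosine loss changes fastest.

    loss_hist[i] holds [L_i(t+1), L_i(t+2)], the losses recorded at the two
    previous reverse steps (both initialized to 1, as in the paper).
    """
    n_m = len(loss_hist)
    ratios = [tau * h[0] / h[1] for h in loss_hist]   # tau * L_i(t+1) / L_i(t+2)
    total = sum(math.exp(r) for r in ratios)
    return [total / (n_m * math.exp(r)) for r in ratios]

def modified_score(eps_pred, grad_cs, alpha_bar_t, s=35.0):
    """Eq. (7): blend the noise prediction with the scaled ensemble gradient."""
    return -(eps_pred / math.sqrt(1.0 - alpha_bar_t) + s * grad_cs)

print([round(w, 3) for w in adaptive_weights([[1.0, 1.0], [0.9, 1.1], [1.2, 0.8]])])
print(modified_score(0.5, 0.02, 0.9))
```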
Algorithm 1: The overall algorithm of AdvDiffVLM
1: Input: original image $\boldsymbol{x}$, $N_m$ surrogate models $\varphi^i_\psi$, adversarial guidance scale $s$, reverse generation process timestep $t^*$, mask area size $k$, perturbation threshold $\delta$, temperature $\tau$, adversarial target image $\boldsymbol{x}_{\mathrm{tar}}$.
2: Output: adversarial example $\boldsymbol{x}_{\mathrm{adv}}$
3: Initialize $\{w_i\}_{i=1}^{N_m} = 1$, $CAM$, $x_0 = \boldsymbol{x}$;
4: Sample $x_{t^*} \sim q(x_{t^*} \mid x_0)$, let $\tilde{x}_{t^*} = \bar{x}_{t^*} = x_{t^*}$;
5: for $t \leftarrow t^*, \cdots, 1$ do
6:   Get mask $\mathbf{m}$ according to $CAM$;
7:   $x_t \sim q(x_t \mid x_0)$;
8:   $\hat{x}_t = \mathbf{m} \odot x_t + (1 - \mathbf{m}) \odot \tilde{x}_t$;
9:   $w_i = \sum_{j=1}^{N_m} \exp\big(\tau L_j(t+1)/L_j(t+2)\big) \big/ \big(N_m \exp\big(\tau L_i(t+1)/L_i(t+2)\big)\big)$;
10:  $g = \nabla_{\tilde{x}_t} \sum_{i=1}^{N_m} w_i\, CS\big(\varphi^i_\psi(\tilde{x}_t), \varphi^i_\psi(\boldsymbol{x}_{\mathrm{tar}})\big)$;
11:  $g = \mathrm{clip}(g, -\delta, \delta)$;
12:  $\mathrm{score} = \epsilon_\theta(\tilde{x}_t)/\sqrt{1-\bar{\alpha}_t} + s \cdot g$;
13:  $\tilde{x}_{t-1} = -\sqrt{1-\bar{\alpha}_t} \times \mathrm{score}$;
14: end for
15: Return: $\boldsymbol{x}_{\mathrm{adv}} = \tilde{x}_0$

Finally, we set the perturbation threshold $\delta$ and clip the adversarial gradient to ensure the naturalness of the synthesized adversarial examples.

4.3 GradCAM-guided Mask Generation

We detailed Adaptive Ensemble Gradient Estimation in the previous section. However, relying only on adaptive ensemble gradient estimation leads to obvious adversarial features in specific areas, resulting in poor visual effects. To achieve a balance between the natural visual effects and attack capabilities of adversarial examples, we introduce the GradCAM-guided Mask. This method utilizes a mask to combine the forward noisy image $x_t$ and the generated image $\tilde{x}_t$. Through this combination, the adversarial semantics concentrated in specific regions are distributed across the entire image, thereby enhancing the natural visual effect of the adversarial examples. We present visualization results before and after adding the GradCAM-guided Mask in Appendix 5.1. First, we utilize GradCAM [22] to derive the class activation map $CAM$ of $\boldsymbol{x}$ with respect to the ground-truth label $\boldsymbol{y}$.
$CAM$ assists in identifying important and non-important areas in the image. Subsequently, we clip the $CAM$ values to the range $[0.3, 0.7]$ and normalize them to obtain the probability matrix $\mathbf{P}$. We sample according to $\mathbf{P}$ to obtain a coordinate $(x, y)$, and then set the $k \times k$ area around $(x, y)$ to 1 and the remaining areas to 0 to obtain the mask $\mathbf{m}$. Here, $\mathbf{m}$ has the same shape as $\tilde{x}_t$. This approach disperses more adversarial features into non-important areas and fewer into important areas of the adversarial examples, improving their natural visual effect. At each denoising step $t$, we combine $x_t$ and $\tilde{x}_t$ as follows:

$$\hat{x}_t = \mathbf{m} \odot x_t + (1 - \mathbf{m}) \odot \tilde{x}_t \tag{9}$$

where $\odot$ refers to the Hadamard product. Afterwards, we obtain the new score by integrating $\epsilon_\theta(\hat{x}_t)$ with the estimated gradient information and then use $\tilde{x}_{t-1} = -\sqrt{1-\bar{\alpha}_t} \times \mathrm{score}$ for sampling. We provide a complete algorithmic overview of AdvDiffVLM in Algorithm 1. Finally, we find that new adversarial examples obtained by taking the generated adversarial examples as $x_0$ and iterating $N$ times in this way have stronger transferability as well as greater robustness against adversarial defense methods. We refer to this approach as AdvDiffVLM+ and set $N = 3$.

4.4 Differences From AdvDiffuser

Both our method and AdvDiffuser [5] produce unrestricted adversarial examples using the diffusion model. Here, we discuss the distinctions between them, highlighting our contributions. Tasks of varying difficulty levels: AdvDiffuser is oriented towards classification models, while our research targets the more intricate Vision-Language Models (VLMs). Within the realm of classification tasks, each image is associated with a single label. Conversely, in image-to-text tasks, images may be linked to numerous text descriptions. When faced with an attack targeting a single description, VLMs have the capability to generate an alternate description, thereby neutralizing the attack's effectiveness. As a result, our task presents a greater challenge. Different theoretical foundations: AdvDiffuser posits that PGD can introduce adversarial noise. It begins with Gaussian noise, subsequently incorporating high-frequency adversarial perturbations into the latent image in a sequential manner. Given that the diffusion model's inverse process inherently constitutes a denoising procedure, it necessitates numerous iterations to introduce sufficient perturbations, leading to heavy computation. In contrast, our method derives from score matching, where we employ CLIP to estimate the gradient, subsequently altering the score rather than adding noise to the latent image. Through score matching, the adversarial gradient can be perfectly integrated into the reverse generation process without being weakened. Furthermore, our approach obviates the need to initiate from Gaussian noise: we first add noise to $\boldsymbol{x}$ for $t^*$ steps, then apply the adversarial gradient to modify the score, thereby facilitating more efficient generation of adversarial examples. See Appendix 3.3 for visual illustrations.
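To make the mask-generation procedure of Sec. 4.3 concrete, here is a minimal NumPy sketch assuming a precomputed GradCAM map; the clipping range and sampling follow the description above, while the boundary handling is an illustrative simplification:

```python
import numpy as np

def gradcam_guided_mask(cam, k=8, rng=None):
    """Sample a k x k binary mask from a GradCAM map (Sec. 4.3).

    cam: HxW GradCAM map in [0, 1], assumed precomputed.
    Clip CAM to [0.3, 0.7], normalize to a probability matrix P, sample a
    center (x, y) from P, and set the k x k patch around it to 1. The patch
    (where the forward-noised original is kept, m=1 in Eq. 9) tends to land
    on important regions, pushing adversarial edits to non-important areas.
    """
    rng = rng or np.random.default_rng()
    h, w = cam.shape
    p = np.clip(cam, 0.3, 0.7)
    p = p / p.sum()
    idx = rng.choice(h * w, p=p.ravel())
    y, x = divmod(idx, w)
    mask = np.zeros((h, w), dtype=np.float32)
    mask[max(0, y - k // 2): y + k // 2, max(0, x - k // 2): x + k // 2] = 1.0
    return mask

m = gradcam_guided_mask(np.random.rand(64, 64))
print(m.sum())  # roughly k*k pixels set to 1 (fewer at image borders)
```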
Distinct schemes of GradCAM utilization: The GradCAM mask utilized by AdvDiffuser leads to restricted modification of crucial image areas, rendering it inadequate for image-based attacks. To address this issue, we introduce the GradCAM-guided Mask. Rather than utilizing GradCAM results directly as a mask, we employ them as guidance for generating the mask. This not only guarantees a likelihood of modification across all image areas but also ensures minimal alteration of significant areas, striking a balance between image quality and attack ability.

5 EXPERIMENTS

5.1 Experimental Setup

Datasets and Victim VLMs: Following [6], we use the NeurIPS'17 adversarial competition dataset, compatible with ImageNet, for all the experiments. In addition, we select 1,000 text descriptions from the captions of the MS-COCO dataset as our adversarial target texts and then use Stable Diffusion [21] to generate 1,000 adversarial target images. For the victim VLMs, SOTA open-source models are evaluated, including MiniGPT-4 [33], LLaVA [14], UniDiffuser [2], BLIP [13], BLIP2 [12] and Img2LLM [8]. Among them, Unidiffuser is a gray-box model, and the others are black-box models.

Table 1: Comparison with existing SOTA attack methods, where the best result is bolded and the second-best result is underlined. Note that we use four versions of the CLIP visual encoder, including ResNet50, ResNet101, ViT-B/16 and ViT-B/32, as surrogate models. AdvDiffVLM_single means using a single ViT-B/32 to calculate the loss, AdvDiffVLM_ens means using a simple ensemble strategy, and AdvDiffVLM_nomask means not using the GradCAM-guided Mask. Since Unidiffuser uses ViT-B/32 as its visual encoder, it is a gray-box scenario, which we indicate with *. The shaded parts represent our two proposed methods. Each model cell reports CLIP_tar↑ / ASR↑.

| Method | Unidiffuser* | BLIP | BLIP2 | MiniGPT-4 | LLaVA | Img2LLM | Time (s) |
|---|---|---|---|---|---|---|---|
| Original | 0.4770 / 0% | 0.5190 / 0% | 0.4931 / 0% | 0.4902 / 0% | 0.5190 / 0% | 0.5288 / 0% | / |
| Ens | 0.7353 / 99% | 0.5322 / 4% | 0.5085 / 1% | 0.4980 / 2% | 0.5366 / 3% | 0.5297 / 4% | 69 |
| SVRE | 0.7231 / 100% | 0.5410 / 5% | 0.5190 / 2% | 0.5107 / 2% | 0.5385 / 4% | 0.5292 / 4% | 125 |
| CWA | 0.7568 / 100% | 0.5415 / 9% | 0.5249 / 5% | 0.5211 / 4% | 0.5493 / 7% | 0.5346 / 5% | 101 |
| SSA-Ens | 0.7275 / 100% | 0.5991 / 31% | 0.5539 / 9% | 0.5175 / 10% | 0.6098 / 37% | 0.5629 / 19% | 879 |
| SSA-SVRE | 0.7217 / 100% | 0.6002 / 34% | 0.5776 / 18% | 0.5395 / 16% | 0.6005 / 40% | 0.5625 / 18% | 1012 |
| SSA-CWA | 0.7485 / 100% | 0.6074 / 36% | 0.5888 / 23% | 0.5407 / 20% | 0.6152 / 41% | 0.5634 / 20% | 1225 |
| SIA-Ens | 0.7377 / 100% | 0.7001 / 79% | 0.5656 / 50% | 0.5305 / 40% | 0.7158 / 85% | 0.6337 / 27% | 483 |
| SIA-SVRE | 0.7302 / 100% | 0.7014 / 81% | 0.5802 / 50% | 0.5482 / 46% | 0.7122 / 88% | 0.6305 / 35% | 596 |
| SIA-CWA | 0.7498 / 100% | 0.7059 / 89% | 0.5835 / 56% | 0.5510 / 48% | 0.7194 / 90% | 0.6401 / 40% | 732 |
| AdvDiffuser_ens | 0.6774 / 86% | 0.5504 / 24% | 0.5396 / 8% | 0.5371 / 8% | 0.5507 / 25% | 0.5395 / 11% | 574 |
| AdvDiffuser_adaptive | 0.6932 / 88% | 0.5631 / 29% | 0.5424 / 10% | 0.5391 / 9% | 0.5595 / 27% | 0.5502 / 14% | 602 |
| AdvDiffVLM_single | 0.6977 / 95% | 0.5322 / 3% | 0.5073 / 1% | 0.5022 / 2% | 0.5332 / 6% | 0.5351 / 3% | 13 |
| AdvDiffVLM_ens | 0.7050 / 100% | 0.6044 / 46% | 0.5708 / 34% | 0.5402 / 31% | 0.6035 / 53% | 0.5847 / 20% | 15 |
| AdvDiffVLM_nomask | 0.7416 / 100% | 0.6484 / 59% | 0.6357 / 55% | 0.6019 / 50% | 0.6552 / 73% | 0.6105 / 32% | 14 |
| AdvDiffVLM | 0.7329 / 100% | 0.6402 / 52% | 0.6137 / 50% | 0.5814 / 46% | 0.6426 / 70% | 0.6032 / 28% | 15 |
| AdvDiffVLM+ | 0.7398 / 100% | 0.6511 / 61% | 0.6314 / 58% | 0.6035 / 52% | 0.6570 / 77% | 0.6338 / 35% | 42 |

Baselines: We compare with AdvDiffuser [5] and the other SOTA transfer-based attackers described in Section 3.2. Since AdvDiffuser was designed for classification models, we use a cosine similarity loss instead of the classification loss for adversarial attacks on VLMs. For a fair comparison, we implement ensemble versions of AdvDiffuser, including a simple ensemble and an adaptive ensemble, denoted AdvDiffuser_ens and AdvDiffuser_adaptive, respectively.

Evaluation Metrics: Following [31], we adopt the CLIP score between the responses generated by the victim models and the predefined targeted texts, computed by the ViT-B/32 text encoder and referred to as CLIP_tar. We adopt the method of calculating the attack success rate (ASR) in [6], positing that an attack is deemed successful only if the image description includes the main object of the target semantics. To measure the quality of adversarial examples and the perceptibility of the applied perturbations, we use four evaluation metrics: SSIM [28], FID [9], LPIPS [30] and BRISQUE [18].

Implementation Details: Since our adversarial diffusion sampling does not require additional training of the original diffusion model, we use a pre-trained diffusion model in our experiments.
We adopt LDM [21] with the DDIM sampler [24] (number of diffusion steps $T = 200$). For the surrogate models, we select four versions of CLIP [20], namely ResNet50, ResNet101, ViT-B/16 and ViT-B/32. For the other hyperparameters, we use $s = 35$, $\delta = 0.0025$, $t^* = 0.2$, $k = 8$ and $\tau = 2$, where $t^* = 0.2$ means adding noise to $\boldsymbol{x}$ for $0.2 \times T$ steps and then performing the reverse process. All the experiments are conducted on a Tesla A100 GPU with 40GB memory. Detailed introductions to the victim models, baselines, and evaluation metrics are given in Appendix 4.

5.2 Comparison Results

Attack Comparison. To validate the effectiveness of AdvDiffVLM, we first evaluate the transferability of adversarial examples generated by AdvDiffVLM and the baselines on various VLMs. As shown in Table 1, all methods exhibit favorable attack results in the gray-box scenario. In the transfer attack scenario, our method yields performance comparable to the SOTA method SIA-CWA. Specifically, our method surpasses SIA-CWA on BLIP2 and MiniGPT-4, with improvements in the CLIP_tar score of 0.0479 and 0.0525, respectively. Although SIA-CWA exceeds the performance of our approach on BLIP and LLaVA, it is particularly important to note that our method requires less than one-tenth of the time SIA needs to generate adversarial examples, which makes our method more practical. We analyze efficiency further in Appendix 5.2. Additionally, AdvDiffuser exhibits suboptimal performance in challenging attack scenarios, particularly against VLMs. This is attributed to its direct application of GradCAM as the mask, which restricts the modifiable area of adversarial examples in demanding tasks, thereby diminishing attack effectiveness. Simultaneously, AdvDiffuser employs high-frequency adversarial noise to alter semantics. This adversarial noise, being inherently fragile, is significantly mitigated during the diffusion model's reverse process, further diminishing its attack potential on complex tasks. These observations validate the advantages of our GradCAM-guided Mask and the score matching idea.

Defense. We adopt the SOTA defense method DiffPure [19] to validate the robustness of our proposed method. The results are reported in Table 2. Our method outperforms the baselines in both gray-box and black-box settings. For example, on Unidiffuser, our CLIP_tar score is 0.0689 higher than
that of SIA-CWA, and on BLIP it is 0.0453 higher. These experimental results show that our method is more robust than the baselines in evading the DiffPure defense method. Additional defense results are provided in Appendix 5.3.

Table 2: Defense results with DiffPure. The settings are the same as in Table 1, except that the adversarial examples are purified by DiffPure. In this table, CLIP_tar evaluates the similarity between the purified examples and the target texts. Each cell reports CLIP_tar↑ / ASR↑.

| Method | Unidiffuser* | BLIP | BLIP2 | MiniGPT-4 | LLaVA | Img2LLM |
|---|---|---|---|---|---|---|
| Original | 0.4802 / 0% | 0.5124 / 0% | 0.4924 / 0% | 0.4831 / 0% | 0.5253 / 0% | 0.5302 / 0% |
| Ens | 0.4833 / 0% | 0.5149 / 0% | 0.4929 / 0% | 0.4840 / 0% | 0.5263 / 0% | 0.5332 / 0% |
| SVRE | 0.4846 / 0% | 0.5224 / 1% | 0.4953 / 0% | 0.4852 / 0% | 0.5264 / 0% | 0.5312 / 0% |
| CWA | 0.4873 / 2% | 0.5268 / 1% | 0.4973 / 0% | 0.4901 / 1% | 0.5272 / 1% | 0.5307 / 0% |
| SSA-Ens | 0.4914 / 1% | 0.5292 / 0% | 0.5024 / 0% | 0.4996 / 0% | 0.5280 / 1% | 0.5322 / 0% |
| SSA-SVRE | 0.4899 / 2% | 0.5285 / 0% | 0.4984 / 0% | 0.4988 / 0% | 0.5273 / 1% | 0.5356 / 0% |
| SSA-CWA | 0.4868 / 2% | 0.5312 / 1% | 0.4997 / 0% | 0.4997 / 2% | 0.5283 / 3% | 0.5367 / 1% |
| SIA-Ens | 0.4921 / 3% | 0.5351 / 1% | 0.5068 / 1% | 0.5009 / 1% | 0.5356 / 2% | 0.5372 / 2% |
| SIA-SVRE | 0.4930 / 3% | 0.5355 / 1% | 0.5012 / 2% | 0.5011 / 2% | 0.5349 / 4% | 0.5380 / 2% |
| SIA-CWA | 0.4942 / 5% | 0.5379 / 2% | 0.5099 / 3% | 0.5025 / 2% | 0.5360 / 4% | 0.5388 / 2% |
| AdvDiffuser_ens | 0.4920 / 4% | 0.5201 / 4% | 0.4933 / 2% | 0.4906 / 2% | 0.5325 / 3% | 0.5310 / 2% |
| AdvDiffuser_adaptive | 0.4922 / 4% | 0.5227 / 4% | 0.5001 / 3% | 0.5001 / 3% | 0.5336 / 3% | 0.5325 / 2% |
| AdvDiffVLM_single | 0.4902 / 1% | 0.5322 / 0% | 0.4995 / 0% | 0.4904 / 0% | 0.5327 / 1% | 0.5258 / 0% |
| AdvDiffVLM_ens | 0.5129 / 12% | 0.5515 / 7% | 0.5102 / 3% | 0.5096 / 4% | 0.5444 / 8% | 0.5419 / 2% |
| AdvDiffVLM_nomask | 0.5407 / 15% | 0.5762 / 13% | 0.5348 / 6% | 0.5273 / 7% | 0.5590 / 14% | 0.5493 / 6% |
| AdvDiffVLM | 0.5302 / 15% | 0.5707 / 11% | 0.5226 / 5% | 0.5184 / 6% | 0.5551 / 11% | 0.5450 / 4% |
| AdvDiffVLM+ | 0.5631 / 18% | 0.5832 / 15% | 0.5315 / 6% | 0.5309 / 8% | 0.5617 / 15% | 0.5531 / 7% |

Image Quality Comparison. We evaluate the image quality of the generated adversarial examples using four evaluation metrics: SSIM, FID, LPIPS, and BRISQUE. As shown in Table 3, compared to transfer-based attacks and AdvDiffuser, the adversarial examples generated by our method exhibit superior image quality.

Table 3: Quality comparison of adversarial examples under four evaluation metrics. The best result is bolded, and the second-best result is underlined.

| Method | SSIM↑ | LPIPS↓ | FID↓ | BRISQUE↓ |
|---|---|---|---|---|
| SSA-Ens | 0.6687 | 0.3320 | 110.5 | 66.89 |
| SSA-SVRE | 0.6610 | 0.3325 | 112.6 | 70.05 |
| SSA-CWA | 0.6545 | 0.3673 | 123.4 | 67.67 |
| SIA-Ens | 0.6925 | 0.2990 | 117.3 | 55.61 |
| SIA-SVRE | 0.6920 | 0.3042 | 120.0 | 57.42 |
| SIA-CWA | 0.6892 | 0.3306 | 125.3 | 56.02 |
| AdvDiffuser_ens | 0.6520 | 0.3074 | 115.5 | 14.61 |
| AdvDiffuser_adaptive | 0.6471 | 0.3096 | 126.7 | 15.32 |
| AdvDiffVLM_ens | 0.6721 | 0.1834 | 90.4 | 17.48 |
| AdvDiffVLM_nomask | 0.7129 | 0.2687 | 111.9 | 16.92 |
| AdvDiffVLM | 0.7188 | 0.2358 | 96.1 | 16.80 |
| AdvDiffVLM+ | 0.7008 | 0.2577 | 104.4 | 19.17 |

Additionally, we find that on the BRISQUE metric, AdvDiffuser is better than our method.
However, as shown in Figure 4, the perturbation introduced by our method is semantic, while AdvDiffuser significantly alters the non-salient area, resulting in poor visual effects. Moreover, SIA-CWA introduces irregular high-frequency noise, which is easily detected. More visualization results are in Appendix 5.4. In summary, our method can generate adversarial examples with transferability comparable to the SOTA transfer-based attack SIA-CWA with a speedup of over 10X. More importantly, the generated adversarial examples have better image quality and exhibit better robustness to adversarial defense methods.

Figure 4: Visualization of adversarial perturbations generated by different attack methods. The first row shows adversarial examples, and the second row the corresponding adversarial perturbations. We choose SIA-CWA and AdvDiffuser_adaptive as representative baselines. We amplify the perturbation values for better visualization.

5.3 Visualization Results

We visualize the attack results of our method on VLMs. As shown in Figure 5, our method successfully induces black-box VLMs to output the adversarial target semantics. For example, for the adversarial target text "This little girl is taking tennis lessons to learn how to play", we successfully make BLIP output "a little girl play with a teddy bear and a tennis ball", while the original response to the image is "an adult black and white panda bear". See Appendix 5.5 for more visualization results.

Figure 5: Visualization of the attack results of AdvDiffVLM on BLIP. We show the adversarial target text above the image, and display the image captioning results for the original image and the adversarial example below the image.

5.4 Attack Results on Commercial VLMs

Our method can successfully attack commercial VLMs such as GPT-4V in black-box scenarios. For example, as shown in Figure 7, we can successfully attack the hosted GPT-4V API. Specifically, for the adversarial target text "A bird standing on top of a beach next to water", we successfully make GPT-4V output a similar target response, while the semantics of the original image is a dog. Additional visual and quantitative results of attacks on GPT-4V and other commercial VLMs, such as Google's Gemini, Microsoft's Copilot, and Baidu's ERNIE Bot, are detailed in Appendix 5.6.

5.5 Ablation Experiments

To further understand the effectiveness of the proposed method, we first discuss the role of each module.

Figure 6: Ablation study of the impact of various parameters in AdvDiffVLM. We adopt the CLIP_tar and LPIPS scores to show the impact on transferability and image quality with four VLMs. A higher CLIP_tar value indicates better performance, whereas a lower LPIPS value signifies better results. We vary only one hyperparameter at a time and fix the other three to the preset values in Section 5.1. Note: the CLIP_tar results are presented as bar graphs, while the LPIPS results are depicted as dot-line graphs.

Is the adaptive ensemble method beneficial for boosting transferability? First, as shown in Tables 1 and 2, using the ensemble method improves transferability and robustness compared to using a single loss. Second, we observe that the adaptive method further improves performance compared with the simple ensemble.
For example, in Table 1, on BLIP, the CLIP_tar score of the ensemble method improved by 0.0722 compared with the single loss, and after using the adaptive method it further improved by 0.0358. This proves that the adaptive ensemble method helps improve transferability.

Figure 7: Screenshots of successful attacks against the GPT-4V API's image description. We give the adversarial target text on the right side of the image, mark the main objects of the adversarial target in red, and mark the main objects in the GPT-4V response in green.

Does the GradCAM-guided Mask help trade off image quality and transferability? As shown in Table 1, using the GradCAM-guided Mask results in a slight decrease in the transferability of adversarial examples. For example, on BLIP, after using the GradCAM-guided Mask, the CLIP_tar score decreases by 0.0082. However, as indicated in Table 3, the GradCAM-guided Mask improves the quality of adversarial examples. This highlights that the GradCAM-guided Mask helps balance the visual effects and attack capabilities of adversarial examples.

The impacts of parameters. We now discuss the impacts of the AdvDiffVLM parameters ($s$, $t^*$, $k$, and $\delta$) in Figure 6 by conducting tests on Unidiffuser, BLIP, BLIP2, and LLaVA. It is evident that all parameters influence the trade-off between transferability and image quality. Increasing the values of $s$, $t^*$, and $\delta$ enhances transferability but diminishes the visual quality of adversarial examples, because larger values of these parameters result in a greater perturbation, allowing more adversarial semantics to be embedded into the image. Conversely, increasing the value of $k$ produces adversarial examples with improved visual effects but reduced transferability, because larger values of $k$ result in a larger generated mask, making it more challenging to modify the important areas of the image. To achieve an optimal trade-off between transferability and image quality, we empirically select $s = 35$, $t^* = 0.2$, $k = 8$ and $\delta = 0.0025$.

6 CONCLUSION

In this work, we propose AdvDiffVLM, an unrestricted adversarial example generation method for VLMs. We design Adaptive Ensemble Gradient Estimation based on the idea of score matching; it embeds target semantics into adversarial examples with a 10X to 30X speedup over existing systems. At the same time, to achieve a trade-off between adversarial example quality and attack capability, we propose the GradCAM-guided Mask module. Extensive experiments demonstrate that AdvDiffVLM can efficiently generate adversarial examples with transferability comparable to current SOTA transfer-based attacks. Simultaneously, these adversarial examples exhibit superior image quality and greater robustness to adversarial defense methods." + }, + { + "url": "http://arxiv.org/abs/2404.05505v1", + "title": "Taming Transformers for Realistic Lidar Point Cloud Generation", + "abstract": "Diffusion Models (DMs) have achieved State-Of-The-Art (SOTA) results in the\nLidar point cloud generation task, benefiting from their stable training and\niterative refinement during sampling. However, DMs often fail to realistically\nmodel Lidar raydrop noise due to their inherent denoising process.
To retain\nthe strength of iterative sampling while enhancing the generation of raydrop\nnoise, we introduce LidarGRIT, a generative model that uses auto-regressive\ntransformers to iteratively sample the range images in the latent space rather\nthan image space. Furthermore, LidarGRIT utilises VQ-VAE to separately decode\nrange images and raydrop masks. Our results show that LidarGRIT achieves\nsuperior performance compared to SOTA models on KITTI-360 and KITTI odometry\ndatasets. Code available at:https://github.com/hamedhaghighi/LidarGRIT.", + "authors": "Hamed Haghighi, Amir Samadi, Mehrdad Dianati, Valentina Donzella, Kurt Debattista", + "published": "2024-04-08", + "updated": "2024-04-08", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV", + "cs.LG", + "cs.RO" + ], + "label": "Original Paper", + "paper_cat": "Diffusion AND Model", + "gt": "Light detection and ranging (Lidar) is a critical sensor in autonomous vehicles, providing highly precise 3D environmental scanning. However, realistic simulation of the Lidar sensor poses challenges, involving cumbersome tasks such as creating 3D object models and running computationally demanding physics-based algorithms. As an alternative, data-driven simulation models, particularly deep generative models, have gained traction owing to their exceptional capacity to model high-dimensional data. Initially proposed for generating photo-realistic RGB images, deep generative models have been adapted for Lidar point cloud generation, progressing from early GAN-based frameworks [1] to the best-performing diffusion models [14].

1 H. Haghighi, A. Samadi, K. Debattista and V. Donzella are with WMG, University of Warwick, Coventry, U.K. (Corresponding author: Hamed.Haghighi@warwick.ac.uk)
2 M. Dianati is with the School of Electronics, Electrical Engineering and Computer Science at Queen's University of Belfast and WMG at the University of Warwick.

Diffusion models (DMs) for Lidar point cloud generation excel mainly due to their stable training and iterative refinement during the sampling process. While they demonstrate proficiency in capturing the 3D shape of point clouds, they face challenges in generating realistic Lidar raydrop noise, resulting in range images that appear unrealistic (refer to Figure 1a). This issue arises from the inherent denoising nature of DMs.

Figure 1. (a) The range image generated by a diffusion model (R2DM [9]) exhibits less realistic raydrop noise compared to our sample and the real one. (b) We propose to sample the range image in the latent space via an Auto-Regressive (AR) transformer [12]. (c) We then generate the raydrop mask and the clean range image separately in the image space via the VQ-VAE [2] decoder.

We introduce a novel Lidar Generative Range Image Transformer (LidarGRIT) model to incorporate both progressive generation and accurate raydrop noise synthesis. LidarGRIT works with the range image representation of the Lidar point cloud, chosen for efficient processing and compatibility with image generative models. The generation process of LidarGRIT consists of iterative sampling in the latent space via an Auto-Regressive (AR) transformer [12] (refer to Figure 1b), and decoding the sampled tokens to range images using an adapted Vector Quantised Variational Auto-Encoder (VQ-VAE [2]) model (refer to Figure 1c).
We disentangle the generation of the range image from the raydrop noise mask in the VQ-VAE decoder, inspired by Dusty [8], and use a separate loss function to reconstruct clean range images and raydrop masks during training. Furthermore, we realised that large VQ-VAE models, primarily designed for high-resolution RGB images, tend to overfit when applied to relatively low-resolution range images. To address this, we propose geometric preservation, aiming to encourage the VQ-VAE to capture the input geometry and provide more expressive latent tokens. We compare our LidarGRIT model to SOTA models on the KITTI-360 and KITTI odometry datasets. Our model outperforms them on nearly all metrics, specifically excelling in the image-based metric SWD [5]. The contributions of this paper can be summarised as follows:
• Proposal of LidarGRIT, a generative model consisting of a two-step generation process: iterative token index sampling through an AR transformer and single-pass range image decoding via an adapted VQ-VAE.
• Proposal of two novel techniques to enhance the generation quality: incorporating a separate raydrop estimation loss and enforcing geometric preservation to increase VQ-VAE generalisability.
• Comprehensive evaluation of our LidarGRIT generation by comparing it with SOTA generative models using KITTI-360 and KITTI odometry datasets.", + "main_content": "Caccia et al. [1] were among the first researchers to apply deep generative models to Lidar point clouds. They converted Lidar point clouds into range images and adapted the Deep Convolutional GAN (DCGAN) [11] for point cloud generation. Building on this, Dusty [8, 10] was proposed, which integrates raydrop synthesis into the GAN training process. Another notable model, UltraLiDAR [13], adopts a VQ-VAE framework to learn a discrete and compact Lidar representation for point cloud restoration and generation. With the recent achievements of DMs, the LidarGen [14] and R2DM [9] models were proposed, relying on the score-based and denoising DM frameworks, respectively. Our approach draws inspiration from the Dusty [8] framework in the disentanglement of raydrop and range image generation; however, we differentiate by employing non-adversarial and more stable training using VQ-VAE [2], treating raydrop estimation as a binary classification problem. Our LidarGRIT shares similarities with UltraLiDAR [13] in its two-stage sampling using a VQ-VAE and an AR transformer. However, rather than using a voxelised Bird's-Eye-View (BEV), we represent point clouds with range images, which provide a lossless, more compact, and more computationally efficient format. Moreover, we focus on raydrop noise generation and assess point cloud generation on both image and point cloud representations.

3. Method

The design process of LidarGRIT involves three steps. First, we represent the Lidar point clouds as range images (Section 3.1). Next, we tokenise range images using the VQ-VAE encoder and decode them separately to obtain the clean range image along with the raydrop noise mask (Section 3.2). Finally, we capture the token interactions using the AR transformer (Section 3.3).

3.1. Data Representation

We employ different transformations to create range images for the KITTI-360 and KITTI-odometry datasets. For KITTI-360 generation, we use spherical projection, wherein each point in Cartesian coordinates $(x, y, z) \in \mathbb{R}^3$ is transformed into its spherical coordinates $(r, \theta, \phi)$ as:

$$r = \sqrt{x^2 + y^2 + z^2}, \quad \theta = \mathrm{atan}(y, x), \quad \phi = \mathrm{atan}\big(z, \sqrt{x^2 + y^2}\big).$$

We then quantise $\theta$ and $\phi$ into $W$ and $H$ bins of equal width respectively, where $H$ and $W$ denote the vertical and horizontal angular resolutions of the Lidar sensor. This yields an image of size $H \times W$ with each pixel containing the range of its associated point. Regarding KITTI-odometry, we employ the scan unfolding representation due to the sensor's non-linear vertical spacing [8]. We partition the ordered point sequence into $H$ sub-sequences, with each sub-sequence corresponding to one elevation angle. Throughout the paper, we denote the input range image as $x \in \mathbb{R}^{H \times W}$ and the ground-truth raydrop mask as $x_m \in \{0, 1\}^{H \times W}$.
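As a concrete illustration of the spherical projection just described, here is a minimal NumPy sketch. The image size, the data-driven elevation binning, and the last-write-wins pixel reduction are illustrative assumptions; a real sensor model would use fixed elevation bins matching the Lidar's field of view:

```python
import numpy as np

def points_to_range_image(points, h=64, w=1024):
    """Project an Nx3 point cloud (x, y, z) to an HxW range image.

    r = sqrt(x^2 + y^2 + z^2), theta = atan2(y, x),
    phi = atan2(z, sqrt(x^2 + y^2)); theta and phi are quantized into
    W and H equal-width bins respectively.
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.sqrt(x**2 + y**2 + z**2)
    theta = np.arctan2(y, x)                      # azimuth in [-pi, pi]
    phi = np.arctan2(z, np.sqrt(x**2 + y**2))     # elevation
    u = ((theta + np.pi) / (2 * np.pi) * (w - 1)).astype(int)
    v = ((phi - phi.min()) / (phi.ptp() + 1e-9) * (h - 1)).astype(int)
    img = np.zeros((h, w), dtype=np.float32)      # 0 marks dropped rays
    img[v, u] = r                                 # one point per pixel (a simplification)
    return img

img = points_to_range_image(np.random.randn(10000, 3))
print(img.shape, (img > 0).mean())
```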
3.2. Adapting VQ-VAE

We use and adapt VQ-VAE [2] to auto-encode the range image and raydrop mask for three purposes: tokenisation, downsampling and raydrop noise generation. The VQ-VAE model consists of an encoder $E$, a quantiser $Q$ and a decoder $G$. The encoder $E$ downsamples the input range image $x$ into the latent image $\hat{z} = E(x) \in \mathbb{R}^{h \times w \times n_z}$. The quantiser $Q$ generates tokens $z_q \in \mathbb{R}^{h \times w \times n_z}$ using a learnable codebook $\mathcal{Z} = \{z_k\}_{k=1}^{K} \subset \mathbb{R}^{n_z}$. Finally, the decoder $G$ generates a clean range image $\hat{x}_r \in \mathbb{R}^{H \times W}$ and raydrop mask logits $\hat{x}_\pi \in \mathbb{R}^{H \times W}$ from the quantised latent image $z_q$ as $[\hat{x}_r, \hat{x}_\pi] = G(z_q)$. The mask logits are then converted to the binary mask $\hat{x}_m \in \{0, 1\}^{H \times W}$ via the sigmoid function and thresholding:

$$\hat{x}_m = \begin{cases} 1 & \mathrm{sigmoid}(\hat{x}_\pi) \ge 0.5 \\ 0 & \mathrm{sigmoid}(\hat{x}_\pi) < 0.5. \end{cases} \tag{1}$$

The final generated range image can be obtained as $\hat{x} = \hat{x}_m \odot \hat{x}_r$.

3.2.1 Training–Raydrop Loss

We separate the training objectives for range image and raydrop mask generation. To encourage the VQ-VAE decoder $G$ to approximate the input range image, we use a masked absolute error loss:

$$L_{rec}(E, G) = \mathbb{E}_x\left[ \frac{1}{H \times W} \left\| x_m \odot (x - \hat{x}_r) \right\|_1 \right] \tag{2}$$

This induces the decoder's range image channel to focus only on estimating the range of existing points. On the other hand, the mask channel is enforced to estimate raydrop noise via the raydrop loss:

$$L_{RL}(E, G) = \mathbb{E}_x\big[ \mathrm{Avg}\big[\, x_m \odot \log(\mathrm{sigmoid}(\hat{x}_\pi)) + (1 - x_m) \odot \log(1 - \mathrm{sigmoid}(\hat{x}_\pi)) \,\big] \big], \tag{3}$$

where the $\mathrm{Avg}[\cdot]$ function calculates the average of its input across all elements. To align the encoder and codebook embeddings, we train the VQ-VAE with the so-called commitment loss:

$$L_{com}(E, \mathcal{Z}) = \mathbb{E}_x\left[ \left\| \mathrm{sg}[\hat{z}] - z_q \right\|_2^2 + \left\| \mathrm{sg}[z_q] - \hat{z} \right\|_2^2 \right].$$

The $\mathrm{sg}[\cdot]$ operator denotes the straight-through gradient estimator, ensuring that the quantisation process remains differentiable. The total training loss for the adapted VQ-VAE can be calculated as:

$$L_{\text{VQ-VAE}} = L_{rec}(E, G) + \lambda L_{RL}(E, G) + L_{com}(E, \mathcal{Z}). \tag{4}$$

Figure 2. Overview of the training process: (a) adapted VQ-VAE model; (b) AR transformer model.
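A minimal PyTorch sketch of the separated losses in Eqs. (2)-(3), assuming decoder outputs are already available; the raydrop term is written here as the standard binary cross-entropy (the negation of the log-likelihood form stated in Eq. (3)), and the commitment loss of Eq. (4) is omitted:

```python
import torch
import torch.nn.functional as F

def vqvae_losses(x, x_mask, x_r_hat, x_pi_hat, lam=0.1):
    """Masked L1 reconstruction (Eq. 2) plus raydrop BCE (cf. Eq. 3).

    x:        ground-truth range image, shape (B, H, W)
    x_mask:   binary ground-truth raydrop mask (1 = valid return)
    x_r_hat:  decoder's clean range channel
    x_pi_hat: decoder's raydrop logits channel
    """
    l_rec = (x_mask * (x - x_r_hat).abs()).mean()                  # Eq. (2)
    l_rl = F.binary_cross_entropy_with_logits(x_pi_hat, x_mask)    # cf. Eq. (3)
    return l_rec + lam * l_rl  # commitment loss of Eq. (4) omitted in this sketch

x = torch.rand(2, 64, 1024)
m = (torch.rand(2, 64, 1024) > 0.1).float()
loss = vqvae_losses(x, m, torch.rand(2, 64, 1024), torch.randn(2, 64, 1024))
print(loss.item())
```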
By tuning \u03bb, we can establish a trade-off between the realism of the generated range images and that of the raydrop masks. We set \u03bb = 0.1 in this study. We show the training process of the adapted VQ-VAE in Figure 2a. 3.2.2 Training\u2013Geometric Preservation We observed that VQ-VAE models, primarily designed for high-resolution RGB images, are prone to overfitting when dealing with relatively low-dimensional range images. This often results in less expressive latent codes. To mitigate the issue, we randomly distort the input images with geometric transformations F and push the VQ-VAE to reconstruct the distorted image. This encourages the VQ-VAE to prioritise and preserve the input image geometry. In practice, we randomly replace the input range image x and raydrop mask xm with their respective transformed versions F(x), F(xm) during the calculation of the training losses (Equations 2 and 3). Our choice of geometric transformations includes affine transformation as well as horizontal and vertical flips. 3.3. Auto-regressive Transformer With the trained VQ-VAE, we can encode the range images into token indices s \u2208 {0, 1, 2, ..., |Z| \u2212 1}^{h\u00d7w} and use an AR transformer to model the interactions between tokens. We train the transformer by enforcing auto-regressive modelling: we estimate the likelihood of each token index si based on the previous indices s<i. pt(y)dy (Bossy and Talay, 1997). This stochastic representation lends itself to an efficient method for computing solutions to these PDEs in high-dimensional settings. Traditional solvers such as finite differences or finite elements do not scale to high dimensions, making an approach such as this appropriate for finding the solution of the PDE. 4 Parameter estimation Having presented the neural architectures, we now present estimators, based on maximum likelihood, used in conjunction with the architectures without prior knowledge of the structure of the drift. We first describe the likelihood function for use in cases with regularly sampled data. We then describe a bridge estimator for cases of irregularly sampled data. In addition, we describe an estimator for the generative architecture based on both the likelihood function and the transition density. For this section, we assume that we observe multiple paths, i.e., {{Xtj}_{j=1...K}}^{(i)}_{i=1...N}. Full details of all algorithms are in the appendix. 4.1 Maximum likelihood estimation We use an estimator based on the path-wise likelihood derived from Girsanov\u2019s theorem and an Euler-Maruyama discretization for the likelihood, considered in Sharrock et al. (2021). The likelihood function is given as L(\u03b8; t1, tK) := exp( (1/\u03c3\u00b2) \u222b_{t1}^{tK} b(Xs, ps, s; \u03b8) dXs \u2212 (1/(2\u03c3\u00b2)) \u222b_{t1}^{tK} b(Xs, ps, s; \u03b8)\u00b2 ds ), (10) where b is the unknown drift represented as one of the presented architectures and \u03c3 is the diffusion coefficient in (1) and (2). Following discretization, with the approximations \u2206Xtj = Xtj+1 \u2212 Xtj and \u2206tj = tj+1 \u2212 tj, the log-likelihood is approximated by log L(\u03b8; t1, tK) \u2248 \u2211_{j=1}^{K\u22121} b(Xtj, ptj, tj; \u03b8) \u2206Xtj \u2212 (1/2) \u2211_{j=1}^{K\u22121} b(Xtj, ptj, tj; \u03b8)\u00b2 \u2206tj. Optimization is performed using standard gradient-based optimizers with the drift b represented as one of the presented architectures.
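A minimal PyTorch sketch of this discretized objective may help; the function name and the convention of passing the particle batch itself as the empirical approximation of p_t are illustrative assumptions, not the paper's implementation:

```python
import torch

def girsanov_log_likelihood(paths, times, drift, sigma):
    """Discretized path-wise log-likelihood (Eq. 10 after Euler-Maruyama).

    paths: (N, K, d) tensor, N observed paths at K time points
    times: (K,) tensor of observation times
    drift: callable b(x_t, p_t, t) -> (N, d); the empirical measure p_t is
           represented here by the whole particle batch x_t
    sigma: scalar diffusion coefficient
    """
    ll = 0.0
    for j in range(paths.shape[1] - 1):
        x_j = paths[:, j]                      # particles at t_j
        dx = paths[:, j + 1] - x_j             # increment X_{t_{j+1}} - X_{t_j}
        dt = times[j + 1] - times[j]
        b = drift(x_j, x_j, times[j])          # particle batch stands in for p_t
        ll = ll + (b * dx).sum(-1) - 0.5 * (b ** 2).sum(-1) * dt
    return (ll / sigma ** 2).mean()            # average over paths
```

Maximizing this quantity with a standard gradient-based optimizer over the drift's parameters is exactly the procedure described above.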
4.2 Estimation with Brownian bridges Often data are not collected at uniform intervals in time; rather, the time marginals may be collected at irregular intervals. In that case, we consider an interpolation approach to maximizing the likelihood, building on the results of Lavenant et al. (2021) and Cameron et al. (2021) in the It\u00f4-SDE case. We can write the likelihood conditioned on the set of observations (dropping the particle index for ease of notation) as LBB(\u03b8) = EQ[ \u220f_{j=1}^{K\u22121} 1{Z_{tj+1} = X_{tj+1}} L(\u03b8; tj, tj+1) ], where {Zs : s \u2208 [tj, tj+1]} is a Brownian bridge from Xtj to Xtj+1 and Q is the Wiener measure. Brownian bridges can easily be sampled and reused for computing the expectation. By applying Jensen\u2019s inequality, we can write an evidence lower bound (ELBO) as log LBB \u2265 EQ[ \u2211_{j=1}^{K\u22121} log L(\u03b8; tj, tj+1) 1{Z_{tj+1} = X_{tj+1}} ]. (11) In this case, the estimator aims to fit the observed marginal distributions exactly while penalizing deviations from the Brownian bridge paths in regions without data. 4.3 Estimation with explicit marginal law Returning to the ML architecture described in Section 3.3, where we explicitly model the density pt with a generative network \u02c6pt, our estimator should enforce the consistency between \u02c6pt and the flow relating to the drift. We do so using the PDE in (3). Letting the parameters of the drift be \u03b8 and the parameters of the generative model be \u03d5, we solve the optimization problem max_{\u03b8,\u03d5} E[ L(\u03b8, \u03d5 | {Xtj}_{j=1...K}) ] (12) s.t. \u222b_{tj}^{tj+1} | \u02c6ps(x; \u03d5) \u2212 E[ \u02c6p_{tj+1}(\u02c6X_{tj+1}; \u03d5) | \u02c6Xs = x ] | ds = 0 (13) for time intervals indexed by j = 1, ..., K \u2212 1 and states x \u2208 supp(Xt), where the trajectories of \u02c6Xt are given by the dynamics of the ML architecture, specifically d\u02c6Xt = f(\u02c6Xt, t; \u03b8)dt + E_{yt\u223c\u02c6pt(\u00b7;\u03d5)}[ \u03c6(\u02c6Xt, yt; \u03b8) ]dt + \u03c3dWt. The likelihood at the observed margins is first maximized in (12). In (13), the marginals at previous times are regularized using the correspondence between the PDE and its associated SDE via the nonlinear Kolmogorov backward equation (Buckdahn et al., 2017), which describes pt as an expectation of trajectories at a terminal time, i.e., pt(x) = E[pT(XT) | Xt = x] for t < T.
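Returning to the bridge estimator of Section 4.2, here is a sketch of sampling the Brownian bridge interpolants it relies on; the step count and names are our own choices, and in practice log L(\u03b8; tj, tj+1) from Section 4.1 would be evaluated along these paths and averaged over bridge draws:

```python
import torch

def brownian_bridge(x0, x1, t0, t1, n_steps, sigma):
    """Sample a Brownian bridge from x0 at t0 to x1 at t1 (Euler scheme)."""
    ts = torch.linspace(t0, t1, n_steps + 1)
    z = x0.clone()
    path = [z]
    for k in range(n_steps):
        t, dt = ts[k], ts[k + 1] - ts[k]
        # The bridge drift pulls the path toward the endpoint x1.
        drift = (x1 - z) / (t1 - t)
        z = z + drift * dt + sigma * torch.sqrt(dt) * torch.randn_like(z)
        path.append(z)
    path[-1] = x1  # pin the endpoint exactly, matching the indicator in L_BB
    return torch.stack(path), ts
```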
5 Modeling properties Having discussed the architectures and estimators, we now discuss specific properties of the modeling framework, which follow from the theoretical discussion presented in Section 2. We first discuss how the factorization into \u03c6 and MF lends itself to an implicit regularization of the IM architecture. We then compare the gradient flows of It\u00f4-SDEs and MV-SDEs. We additionally provide more intuition on the proposed architectures in Appendix D. 5.1 Implicit regularization of the implicit measure architecture Closely related to the IM architecture are MLP representations of It\u00f4-SDEs, which we previously remarked may model MV-SDEs. On the other hand, the factorization of the IM architecture into an interaction function and a learned measure leads to a type of implicit regularization when the parameters are estimated using gradient descent. Theorem 5.1 (Implicit Regularization). Consider a mean-field architecture as described above with f, \u03c6 known and fixed and a linear structure, i.e. B(Xt, pt, t) = \u222b \u03c6(Xt, y) dpt(y) + f(Xt, t). Further, assume that \u03c6 is twice differentiable. Then, for each time step t, the minimizing finite-width MF with weight matrix W0 \u2208 R^{n\u00d7d} and ith row W0^{(i)} under gradient descent satisfies the following optimization problem: min_{W0} \u2211_{i=1}^{n} \u2211_{j=1}^{d} \u03c6(Xt, W0^{(i)})_j s.t. E[ (1/(2\u2206t)) \u2225X_{t+\u2206t} \u2212 Xt \u2212 b(Xt, pt, t)\u2225\u00b2 ] = 0. Proof. We follow the blueprint in Belabbas (2020) and give full details in the appendix. Theorem 5.1 effectively says that the mean-field system approximated is the one that has the least influence from the other particles. In the case where \u03c6 can be decomposed as a norm, this amounts to finding the drift parameterized by the weight W0 with the smallest norm while still matching the marginals. 5.2 Gradient flows of the MV-SDE To illustrate the difference between the particle flows of MV-SDEs and It\u00f4-SDEs, we consider a gradient flow perspective to describe the functionals that are minimized according to the different SDEs (Villani, 2021, Section 8.3). In the following remark, we apply this idea to a gradient flow that minimizes the energy distance. Remark 5.2 (Minimizing the Energy Distance). Consider two densities pt, q such that pt \u226a q for all t. The gradient flow induced by the MV-SDE with the drift b(Xt, pt, t) = E_{yt\u223cpt}[ \u2207( 2\u2225Xt \u2212 yt\u2225 (dq/dpt) \u2212 \u2225Xt \u2212 yt\u2225\u00b2 \u2212 \u2225Xt \u2212 yt\u2225\u00b2 (dq/dpt)\u00b2 ) ] minimizes the energy distance between pt and q. The proof follows from a straightforward application of Santambrogio (2017, Section 4). Note that this construction is only possible through the distributional dependence in the MV-SDE, whereas the standard It\u00f4-SDE does not admit this drift. This has a particular impact on generative modeling, which we discuss later in the experiments section. 5.3 Relationship to attention Recently, works such as Sander et al. (2022) described the relationship between interacting particle systems and the attention structure in the transformer architecture. Here we briefly describe a motivation for using the proposed architectures in the sense that they describe a similar structure to attention. Recall that the attention module is defined by matrices WK, WQ \u2208 R^{NW\u00d7d}, WV \u2208 R^{NV\u00d7d} and the normalized attention matrix by \u03b1_{i,j} = N exp(\u27e8WKX^{(i)}, WQX^{(j)}\u27e9) / \u2211_{k=1}^{N} exp(\u27e8WKX^{(i)}, WQX^{(k)}\u27e9). We focus on the attention matrix since it describes the dependence between the particles X^{(i)}. We can rewrite the above equation as an expectation \u03b1_{i,j} = exp(\u27e8WKX^{(i)}, WQX^{(j)}\u27e9) / E_\u03bd[exp(\u27e8WKX^{(i)}, WQy\u27e9)], where the expectation is taken with respect to the discrete measure \u03bd = \u2211_{k=1}^{N} \u03b4_{X^{(k)}}, as we do in the IM architecture. We can write the numerator as the expectation with an indicator and the denominator as the full expectation, \u03b1_{i,j} = E[exp(\u27e8WKX^{(i)}, WQy\u27e9) 1{y = X^{(j)}}] / E[exp(\u27e8WKX^{(i)}, WQy\u27e9)]. Finally, since we do not assume a particular structure on \u03c6 in the IM architecture, we can let \u03c6 be equal to the exponential of the dot product with the transformation by WK, WQ. Note that this is applied to particles at each time marginal t rather than for a sequence of particles. A sequence of particles would correspond to the case of non-exchangeability, which is a direction of future work.
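The correspondence in Section 5.3 can be checked numerically; a small sketch (function name ours) that computes the normalized attention matrix as a ratio of expectations under the empirical measure of the particles:

```python
import torch

def attention_as_expectation(X, W_K, W_Q):
    """Normalized attention written as exp(.) over its empirical mean.

    X: (N, d) particles at one time marginal; W_K, W_Q: (m, d) matrices.
    """
    keys, queries = X @ W_K.T, X @ W_Q.T
    # phi(x, y) = exp(<W_K x, W_Q y>), evaluated for every particle pair
    phi = torch.exp(keys @ queries.T)           # (N, N), phi[i, j] uses X_i, X_j
    # Denominator: expectation of phi(X_i, .) under the empirical measure,
    # i.e. (1/N) sum_k phi(X_i, X_k); the ratio recovers alpha_{i,j}.
    denom = phi.mean(dim=1, keepdim=True)
    return phi / denom
```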
6 Numerical experiments For Q1 we discussed modeling and inferring distributional dependence. We now wish to answer Q2 and quantify the effect of explicit distributional dependence in machine learning tasks. We test the methods on synthetic and real data for time series and generative modeling. The main goal is to determine the difference between the standard Neural It\u00f4-SDE and the proposed Neural MV-SDEs under different modeling scenarios. In that sense, the baseline we consider is the It\u00f4-SDE parameterized using an MLP. However, we also consider other deep learning based methods for comparison in a broader context. Figure 4: Top row: sample paths from the different synthetic datasets (Kuramoto, Fitzhugh-Nagumo, Opinion Dynamic, Meanfield Atlas, OU, and Circles). Middle row: mean squared error (MSE) of the different architectures (average of 10 runs) on drift estimation under different levels of observation noise. Bottom row: examples of the estimated gradient flow of the Kuramoto model at terminal time for (a) MLP, (b) EM, (c) IM, (d) ML, and (e) the truth; the colors correspond to the density of generated samples at terminal time. We abbreviate the different architectures as the Neural It\u00f4-SDE (MLP) and the Neural MV-SDEs: Empirical Measure (EM), Implicit Measure (IM) and Marginal Law (ML) architectures. These architectures were presented in Section 3 and summarized in Figure 3. Full descriptions of the models, baselines, and datasets are given in the appendix. Synthetic data experiments We first consider the application of MV-SDEs in physical, biological, social, and financial settings. These relate to the original development of MV-SDEs and consider how the architectures can be applied in scientific machine learning settings. We benchmark the proposed methods on 4 canonical MV-SDEs: the Kuramoto model, which describes synchronizing oscillators (Sonnenschein and Schimansky-Geier, 2013); the mean-field FitzHugh-Nagumo model, which characterizes spikes in neuron activations (Mischler et al., 2016); the opinion dynamics model on the formation of opinion groups (Sharrock et al., 2021); and the mean-field Atlas model for pricing equity markets (Jourdain and Reygner, 2015). These models exhibit the non-local behavior that was originally of theoretical interest. We additionally benchmark the proposed methods on two It\u00f4-SDEs, an Ornstein\u2013Uhlenbeck (OU) process and a circular motion equation, to determine the performance on It\u00f4-SDEs. Finally, to understand the performance on discontinuous paths related to aggregation behavior, we benchmark the proposed methods on an OU process with jumps in Figure 5b. Since the true drifts of the synthetic data are known, we directly compare the estimated drifts to the true drifts using mean squared error (MSE). The performance on five different datasets with three different levels of added observational noise is presented in Figure 4. The bottom row of Figure 4 illustrates an example of the density and the gradient flow at the terminal time for the Kuramoto model.
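For concreteness, a particle-discretized simulator for the first of these benchmarks, the mean-field Kuramoto model; all parameter values below are illustrative defaults rather than the paper's experimental settings:

```python
import torch

def simulate_kuramoto(n=500, steps=200, dt=0.02, K=1.0, sigma=0.5):
    """Euler-Maruyama particle simulation of the mean-field Kuramoto model:
    d theta_i = (omega_i + (K/N) sum_j sin(theta_j - theta_i)) dt + sigma dW_i.
    """
    theta = 2 * torch.pi * torch.rand(n)     # initial phases
    omega = torch.randn(n)                   # natural frequencies
    paths = [theta.clone()]
    for _ in range(steps):
        # Mean-field interaction: average of sin(theta_j - theta_i) over j
        coupling = torch.sin(theta[None, :] - theta[:, None]).mean(dim=1)
        theta = theta + (omega + K * coupling) * dt \
                + sigma * torch.sqrt(torch.tensor(dt)) * torch.randn(n)
        paths.append(theta.clone())
    return torch.stack(paths)                # (steps + 1, n) sample paths
```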
The proposed mean-field architectures outperform the standard MLP in modeling MV-SDEs; moreover, incorporating explicit distributional dependence does not diminish the performance in estimating It\u00f4-SDEs. When modeling processes with jump discontinuities, Figure 5b highlights the flexibility of the proposed methods, IM and ML, to match such models. The EM likely does not perform as well due to the high variance of the empirical measure, leading to difficulties in learning. Additionally, the MLP does not have an explicit decomposition between the MV and It\u00f4 components, resulting in issues when estimating the feedback between the particles that induces jumps. Real data experiments We consider two real examples: crowd trajectories in an open interacting environment, which relate to the Cucker-Smale model (Cucker and Smale, 2007; Warren, 2018), and chemically stimulated movement of organisms (chemotaxis), which can be described using the Keller-Segel model (Toma\u0161evi\u0107, 2021; Keller and Segel, 1971). Figure 5: Results for approximating sample paths containing jumps: (a) average paths; (b) energy distance versus the number of jumps. Figure 6: ELBO of generated paths from a standard Gaussian to an eight-Gaussian mixture (in increasing dimension) evaluated against the OT mapping. We evaluate the proposed architectures in these modeling tasks by comparing the goodness-of-fit of generated path samples to the observed path samples, measured in normalized MSE (normalized with the sample variance) with respect to held-out data. We also benchmark against the DeepAR probabilistic time series forecaster (Salinas et al., 2020) with RNN, GRU, LSTM, and Transformer (TR) backbones as baseline models to compare the goodness-of-fit. This places the performance in the context of standard deep learning-based time series forecasting methods. The performances of the different architectures are presented in Table 1. For the EEG experiments, the proposed architectures generally perform better than the baselines in generating paths within the training time steps, and on par with the DeepAR architectures for forecasting (full results are presented in the appendix). For the crowd trajectory data, the proposed MV-SDE architectures outperform the MLP, EM and DeepAR architectures for forecasting. Notably, the EM architecture exhibits high variance on the crowd trajectory data, indicating the difficulty of relying on the empirical margins to compute expectations. For the chemotaxis data, the MV-SDE based architectures outperform the DeepAR baselines. Additional experiments and results are presented in the appendix. Figures illustrating the sample paths are also included in the appendix.
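The generative-modeling comparison that follows scores samples with the energy distance from Remark 5.2; a minimal sample-based estimator (function name ours) looks as follows. Note the within-sample terms include the zero diagonal, so a bias-corrected variant would exclude it:

```python
import torch

def energy_distance(x, y):
    """Sample-based energy distance between two point clouds.

    D(p, q) = 2 E||X - Y|| - E||X - X'|| - E||Y - Y'||,
    estimated from samples x ~ p of shape (n, d) and y ~ q of shape (m, d).
    """
    d_xy = torch.cdist(x, y).mean()   # cross term E||X - Y||
    d_xx = torch.cdist(x, x).mean()   # within-sample term for p
    d_yy = torch.cdist(y, y).mean()   # within-sample term for q
    return 2 * d_xy - d_xx - d_yy
```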
Generative modeling experiments We focus on applying the bridge estimator discussed in Section 4.2 to map between a Gaussian and a target distribution for the purposes of generative modeling. This experiment is used to understand the effect of distributional dependence on the quality of the generated samples. We are interested in studying two aspects: 1) the properties of the learned mapping, and 2) the generated trajectories. We first study the properties of the learned mapping using a synthetic eight-Gaussian mixture with increasing dimensionality. We compare the performance of the different architectures by evaluating the ELBO of the sample paths generated by the optimal transport (OT) mapping between the initial distribution and held-out target samples. We then evaluate the generated trajectories through the energy distance, motivated by Remark 5.2, between generated and held-out data on 5 real-data density estimation experiments, since the MV-SDE describes the flow that minimizes the energy distance. In addition, we compare to common density estimators: the variational autoencoder (VAE) (Kingma and Welling, 2013), the Wasserstein generative adversarial network (WGAN) (Gulrajani et al., 2017), the masked autoregressive flow (MAF) (Papamakarios et al., 2017), and score-based generative modeling through SDEs, which corresponds to a constrained form of the MLP architecture (Song et al., 2020). The MV-SDE architectures not only outperform the It\u00f4 architecture for all dimensions in the eight-Gaussian experiment, as shown in Figure 6, but also on the 5 real-data density estimation experiments, as shown in Table 2, while outperforming the common baselines. Sampling is performed using standard Euler-Maruyama, with full details in the appendix. This again suggests that the MV-SDE provides a more amenable probability flow for generative modeling compared with the It\u00f4-SDE. Table 1: Time series estimation on held-out trajectories. Values in bold and italic are best and second best, respectively. | Model | Crowd Traj | C.Cres | E.Coli | | MLP (It\u00f4) | 0.068 (0.03) | 0.096 (0.002) | 0.080 (0.003) | | IM | 0.034 (0.01) | 0.094 (0.003) | 0.080 (0.001) | | ML | 0.016 (0.01) | 0.093 (0.002) | 0.084 (0.002) | | EM | 0.091 (0.059) | 0.093 (0.004) | 0.086 (0.004) | | LSTM | 1.408 (0.92) | 1.159 (0.234) | 0.585 (0.350) | | RNN | 1.05 (0.54) | 1.563 (1.070) | 0.773 (0.092) | | GRU | 1.339 (0.61) | 0.826 (0.289) | 0.568 (0.301) | | TR | 2.732 (0.88) | 1.503 (0.212) | 1.204 (0.212) | Table 2: Density estimation: energy distance between observed samples and generated samples. Values in bold and italic are best and second best, respectively. | Model | POW | MINI | HEP | GAS | CORT | | MLP (It\u00f4) | 0.34 (0.1) | 0.67 (0.05) | 0.54 (0.05) | 0.41 (0.08) | 0.74 (0.06) | | IM | 0.29 (0.08) | 0.40 (0.0) | 0.41 (0.03) | 0.29 (0.08) | 0.53 (0.03) | | ML | 0.28 (0.08) | 0.44 (0.03) | 0.37 (0.03) | 0.31 (0.06) | 0.57 (0.03) | | EM | 0.33 (0.1) | 0.46 (0.04) | 0.43 (0.05) | 0.30 (0.03) | 0.58 (0.037) | | VAE | 1.2 (0.02) | 2.1 (0.15) | 1.8 (0.03) | 1.5 (0.02) | 2.4 (0.2) | | WGAN | 1.2 (0.02) | 2.1 (0.003) | 1.8 (0.01) | 1.3 (0.02) | 2.2 (0.01) | | MAF | 0.29 (0.04) | 0.48 (0.01) | 0.31 (0.02) | 0.52 (0.03) | 0.53 (0.03) | | Score | 0.30 (0.05) | 0.50 (0.02) | 0.32 (0.03) | 0.56 (0.04) | 0.58 (0.02) | 7 Discussion In this paper we discuss an alternative viewpoint of diffusion-type models beyond the standard It\u00f4-SDE parameterization. In particular, we focus on MV-SDEs and discuss neural representations of a process that depends on its distribution, and ways of making this dependence more explicit. We demonstrated the efficacy of the proposed architectures on a number of synthetic and real benchmarks. The results suggest that the proposed architectures provide an improvement in certain time series and generative modeling applications, likely due to the more general probability flow that the MV-SDEs induce. Limitations We studied the implicit regularization of the IM architecture under gradient descent, and the extension of the analysis to the other proposed architectures is important to understand the corresponding regularization.
With regard to computing expectations, using a multilevel scheme (Szpruch et al., 2019) could improve accuracy while reducing computational cost. Future directions The proposed architectures provide a baseline from which to extend the work to the estimation of alternative processes. Heterogeneity amongst the particles is a useful property in many types of systems, e.g., as described in Lacker and Soret (2022). Extending the W0 architectures to the case of heterogeneous agents corresponds to introducing depth into the architecture (i.e., having multiple measures W0 to take the expectation with respect to). Additionally, solving inverse problems using Wasserstein gradient flows solved via MV-SDEs could be another application of the proposed methods (Crucinio et al., 2022). Interpreting the W0 architecture through the interpolation lens used in Szpruch et al. (2019) could also provide avenues for improvement of the architecture. Developing optimal estimators for MV-SDE based point processes using the proposed architectures, extending It\u00f4-SDE based point process representations (e.g., in Hasan et al. (2023)), could be a useful direction when observations are only given as arrival times of events. Finally, establishing convergence rates for architectures such as the W0 or Xt architectures would be a direction for further analysis of the proposed algorithms. Acknowledgements This work was supported in part by the Air Force Office of Scientific Research under award number FA9550-20-1-0397. AH was partially supported by an NSF Graduate Research Fellowship." + }, + { + "url": "http://arxiv.org/abs/2404.16022v1", + "title": "PuLID: Pure and Lightning ID Customization via Contrastive Alignment", + "abstract": "We propose Pure and Lightning ID customization (PuLID), a novel tuning-free\nID customization method for text-to-image generation. By incorporating a\nLightning T2I branch with a standard diffusion one, PuLID introduces both\ncontrastive alignment loss and accurate ID loss, minimizing disruption to the\noriginal model and ensuring high ID fidelity. Experiments show that PuLID\nachieves superior performance in both ID fidelity and editability. Another\nattractive property of PuLID is that the image elements (e.g., background,\nlighting, composition, and style) before and after the ID insertion are kept as\nconsistent as possible. Codes and models will be available at\nhttps://github.com/ToTheBeginning/PuLID", + "authors": "Zinan Guo, Yanze Wu, Zhuowei Chen, Lang Chen, Qian He", + "published": "2024-04-24", + "updated": "2024-04-24", + "primary_cat": "cs.CV", + "cats": [ + "cs.CV" + ], + "label": "Original Paper", + "paper_cat": "Diffusion AND Model", + "gt": "As a special category of customized text-to-image (T2I) generation [5, 30, 12, 17, 40, 42], identity (ID) customization allows users to adapt pre-trained T2I diffusion models to align with their personalized ID. One line of work [5, 30, 12, 17] fine-tunes certain parameters on several images with the same ID provided by the user, thereby embedding the ID into the generative model. These methods have spawned many popular AI portrait applications, such as PhotoAI and EPIK. While tuning-based solutions have achieved commendable results, customizing for each ID requires tens of minutes of fine-tuning, thus making the personalization process economically expensive.
Another line of work [41, 42, 2, 36, 20, 19, 38] forgoes the necessity of fine-tuning for each ID, instead resorting to pre-training an ID adapter [11, 24] on an expansive portrait dataset. These methods typically utilize an encoder (e.g., the CLIP image encoder [27]) to extract the ID feature. The extracted feature is then integrated into the base diffusion model in a specific way (e.g., embedded into the cross-attention layers). Although highly efficient, these tuning-free methods face two significant challenges. \u2022 Insertion of ID disrupts the original model\u2019s behavior. A pure ID information embedding should feature two characteristics. Firstly, an ideal ID insertion should alter only ID-related aspects, such as face, hairstyle, and skin color, while image elements not directly associated with the specific identity, such as background, lighting, composition, and style, should be consistent with the behavior of the original model. To our knowledge, this point has not been a focus of previous works. While some research [42, 38, 20] has shown the ability for stylized ID generation, notable style degradation occurs when compared with images before ID insertion (as depicted in Fig. 1). Methods with higher ID fidelity tend to induce more severe style degradation. Secondly, after the ID insertion, the model should still retain the ability of the original T2I model to follow prompts. In the context of ID customization, this generally implies the capacity to alter ID attributes (e.g., age, gender, expression, and hair), orientation, and accessories (e.g., glasses) via prompts. To achieve these features, current solutions generally fall into two categories. The first category involves enhancing the encoder. IPAdapter [42, 1] shifted from the early-version CLIP extraction of grid features to utilizing a face recognition backbone [4] to extract more abstract and relevant ID information. Despite the improved editability, the ID fidelity is not high enough. InstantID [38] builds on this by including an additional ID&Landmark ControlNet [43] for more effective modulation. Even though the ID similarity improves significantly, it compromises some degree of editability and flexibility. The second category of methods [20] supports non-reconstructive training to enhance editability by constructing datasets grouped by ID, where each ID includes several images. However, creating such datasets demands significant effort. Also, most IDs correspond to a limited number of celebrities, which might limit their effectiveness on non-celebrities. \u2022 Lack of ID fidelity. Given our human sensitivity to faces, maintaining a high degree of ID fidelity is crucial in ID customization tasks. Inspired by the successful experience of face generation [29, 39] tasks during the GAN era [7], a straightforward idea for improving ID fidelity is to introduce an ID loss within diffusion training. However, due to the iterative denoising nature of diffusion models [10], achieving an accurate x0 needs multiple steps. The resource consumption for training in this manner can be prohibitively high. Consequently, some methods [2] predict x0 directly from the current timestep and then calculate the ID loss. However, when the current timestep is large, the predicted x0 is often noisy and flawed. Calculating the ID loss under such conditions is obviously inaccurate, as the face recognition backbone [4] is trained on photo-realistic images.
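To see why the predicted x0 degrades at large timesteps, consider the standard one-step inversion of the DDPM forward process that these methods rely on; the sketch below uses generic variable names of our own:

```python
import torch

def predict_x0_one_step(x_t, eps_pred, alpha_bar_t):
    """One-step estimate of the clean image from a noisy latent x_t.

    Under the DDPM forward process x_t = sqrt(a_bar)*x0 + sqrt(1-a_bar)*eps,
    inverting with the *predicted* noise (alpha_bar_t given as a tensor) gives
        x0_hat = (x_t - sqrt(1 - a_bar) * eps_pred) / sqrt(a_bar).
    At large t, alpha_bar_t is small, so errors in eps_pred are amplified by
    1/sqrt(alpha_bar_t), yielding the noisy, flawed x0_hat that makes a
    face-recognition-based ID loss unreliable.
    """
    return (x_t - torch.sqrt(1 - alpha_bar_t) * eps_pred) / torch.sqrt(alpha_bar_t)
```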
Although some workarounds have been proposed, such as calculating the ID loss only at less noisy timesteps [25] or predicting x0 with an additional inference step [45], there still remains room for improvement. In this work, to maintain high ID fidelity while reducing the influence on the original model\u2019s behavior, we propose PuLID, a pure and lightning ID customization method via contrastive alignment. Specifically, we introduce a Lightning T2I branch alongside the standard diffusion-denoising training branch. Leveraging recent fast sampling methods [23, 32, 21], the Lightning T2I branch can generate high-quality images from pure noise with a limited and manageable number of steps. With this additional branch, we can simultaneously address the two challenges mentioned above. Firstly, to minimize the influence on the original model\u2019s behavior, we construct a contrastive pair with the same prompt and initial latent, with and without ID insertion. During the Lightning T2I process, we align the UNet features between the contrastive pair semantically, instructing the ID adapter on how to insert ID information without affecting the behavior of the original model. Secondly, as we now have a precise and high-quality generated x0 after ID insertion, we can naturally extract its face embedding and calculate an accurate ID loss against the ground-truth face embedding. It is worth mentioning that such an x0 generation process aligns with the actual test setting. Our experiments demonstrate that optimizing the ID loss in this context can significantly increase ID similarity. The contributions are summarized as follows. (1) We propose a tuning-free method, namely PuLID, which preserves high ID similarity while mitigating the impact on the original model\u2019s behavior. (2) We introduce a Lightning T2I branch alongside the regular diffusion branch. Within this branch, we incorporate a contrastive alignment loss and an ID loss to minimize the contamination of the original model by ID information while ensuring fidelity. Compared to the current mainstream approaches that improve the ID encoder or datasets, we offer a new perspective and training paradigm. (3) Experiments show that our method achieves SOTA performance in terms of both ID fidelity and editability. Moreover, compared to existing methods, our ID information is less invasive to the model, making our method more flexible for practical applications.", + "main_content": "Tuning-based Text-to-image ID Customization. ID customization for text-to-image models aims to empower pre-trained models to generate images of specific identities while following the text descriptions. Two seminal tuning-based works [5, 30] strive towards this goal. Textual Inversion [5] optimizes a new word embedding for the user-provided ID, and DreamBooth [30] fine-tunes the entire generator to further enhance fidelity. Subsequently, various approaches [12, 17, 8, 35] have explored different fine-tuning paradigms in the generator and embedding space to achieve superior ID fidelity and text alignment. Despite these advancements, the time-consuming optimization process for each ID, taking at least several minutes, restricts broader application. Tuning-free Text-to-image ID Customization. To ease the resource demand necessitated by online tuning, a series of tuning-free methods [36, 38, 25, 42, 20, 41, 3] have emerged, which directly encode ID information into the generation process.
The major challenge these methods encounter is minimizing disruption to the original behavior of T2I models while still maintaining high ID fidelity. In terms of minimizing the disruption, one plausible approach is to utilize a face recognition model [4] to extract more abstract and relevant facial domain-specific representations, as done by IP-Adapter-FaceID [1] and InstantID [38]. A dataset comprising multiple images of the same ID can facilitate the learning of a common representation [20]. Despite the progress made by these approaches, they have yet to fundamentally solve the disruption issue. Notably, models with higher ID fidelity often cause more significant disruptions to the behavior of the original model. In this study, we propose a new perspective and training method to tackle this issue. Interestingly, the suggested method does not require laborious dataset collection grouped by ID, nor is it confined to a specific ID encoder. To improve ID fidelity, an ID loss is employed in previous works [16, 2], motivated by its effectiveness in prior GAN-based works [29, 39]. However, in these methods, x0 is typically predicted directly from the current timestep using a single step, often resulting in noisy and flawed images. Such images are not ideal inputs for the face recognition models [4], as those are trained on real-world images. PortraitBooth [25] alleviates this issue by applying the ID loss only at less noisy stages, which ignores the loss in the early steps, thereby limiting its overall effectiveness. Diffswap [45] obtains a better predicted x0 by employing two steps instead of just one, even though this estimate still contains noisy artifacts. In our work, with the introduced Lightning T2I training branch, we can calculate the ID loss in a more accurate setting. We notice a concurrent work, LCM-Lookahead [6], which also uses fast sampling technology (i.e., LCM [23]) to achieve a more precise prediction of x0. However, there are several differences between this work and ours. Firstly, LCM-Lookahead makes a precise prediction of x0 during the conventional diffusion-denoising process, whereas we start from pure noise and iteratively denoise to x0. Our approach, which aligns better with the actual testing setting, makes the optimization of the ID loss more direct. Secondly, to enhance prompt-editing capability, LCM-Lookahead capitalized on the mode collapse phenomenon of SDXL-Turbo [32] to synthesize an ID-consistent dataset. However, the synthetic dataset may face diversity and consistency challenges, and the authors found that training with this dataset may lean towards stylized results more frequently than other methods. In contrast, our method does not need an ID-grouped dataset. Instead, we enhance prompt-following ability through a more fundamental and intuitive approach, namely contrastive alignment. Fast Sampling of Diffusion Models. In practice, diffusion models are typically trained with noise schedules of up to 1000 steps. During inference, such a lengthy process can be shortened to a few dozen steps with the help of advanced sampling methods [33, 22, 15]. Recent distillation-based works [21, 23, 32] further accelerate this generation process to within 10 steps. The core motivation is to guide the student network to align with points further along the sampling trajectory of the base teacher model. In this study, the Lightning T2I training branch we introduce leverages the SDXL-Lightning [21] acceleration technique, enabling us to generate high-quality images from pure noise in just 4 steps. 3 Methods
Figure 2: Overview of the PuLID framework. The upper half of the framework illustrates the conventional diffusion training process. The face extracted from the same image is employed as the ID condition Cid. The lower half of the framework demonstrates the Lightning T2I training branch introduced in this study. It leverages recent fast sampling methods to iteratively denoise from pure noise to high-quality images in a few steps (4 in this paper). In this branch, we construct contrastive paths with and without ID injection and introduce an alignment loss to instruct the model on how to insert the ID condition without disrupting the original model\u2019s behavior. As this branch can produce photo-realistic images, it implies that we can achieve a more accurate ID loss for optimization. 3.1 Preliminary Diffusion models [10] are a class of generative models capable of synthesizing desired data samples through iterative denoising. Conventional diffusion training encapsulates two procedures: the forward diffusion process and the reverse denoising process. During the diffusion process, noise \u03f5 is sampled and added to the data sample x0 based on a predefined noise schedule. This process yields a noisy sample xt at timestep t. Conversely, during the denoising process, a denoising model \u03f5\u03b8 takes xt, t, and optional additional conditions C as inputs to predict the added noise; the optimization process can be articulated as: Ldiff = E_{x0,\u03f5,t}(\u2225\u03f5 \u2212 \u03f5\u03b8(xt, t, C)\u2225). (1) The denoising model \u03f5\u03b8 in modern T2I diffusion models [31, 28, 26] is predominantly a UNET composed of residual blocks [9], self-attention layers, and cross-attention [37] layers. The prompt, as a condition, is embedded into the cross-attention layers adhering to the attention mechanism, illustrated as follows: Attention(Q, K, V) = Softmax(QK^T/\u221ad)V, with K = WK\u03c4txt(Ctxt) and V = WV\u03c4txt(Ctxt), (2) where Q is projected from the UNET image features, \u03c4txt denotes a pre-trained language model that converts the prompt Ctxt to textual features, and WK and WV are learned linear layers. ID Customization in T2I diffusion introduces ID images Cid as an additional condition, working together with the prompt to control image generation. Tuning-free customization [14, 41, 42] methods typically employ an encoder to extract ID features from Cid. The encoder often includes a frozen backbone, such as a CLIP image encoder [27] or a face recognition backbone [4], along with a learnable head. A simple yet effective technique to embed the ID features into the pre-trained T2I model is to add parallel cross-attention layers to the original ones. In these parallel layers, learnable linear layers are introduced to project the ID features into Kid and Vid for calculating attention with Q. This technique, proposed by IP-Adapter [42], has been widely used, and we also adopt it for embedding ID features in this study.
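A simplified single-head sketch of this parallel cross-attention may clarify the design; the class name and the single-head, no-output-projection simplifications are ours, not the exact IP-Adapter implementation:

```python
import torch
import torch.nn as nn

class ParallelIDCrossAttention(nn.Module):
    """IP-Adapter-style parallel cross-attention (simplified sketch).

    The text branch (to_k, to_v) mirrors the frozen original layer; the ID
    branch (to_k_id, to_v_id) is newly added and trained, and its attention
    output is summed with the text attention output.
    """

    def __init__(self, dim, txt_dim, id_dim):
        super().__init__()
        self.to_q = nn.Linear(dim, dim, bias=False)        # frozen in practice
        self.to_k = nn.Linear(txt_dim, dim, bias=False)    # frozen
        self.to_v = nn.Linear(txt_dim, dim, bias=False)    # frozen
        self.to_k_id = nn.Linear(id_dim, dim, bias=False)  # trainable (K_id)
        self.to_v_id = nn.Linear(id_dim, dim, bias=False)  # trainable (V_id)

    def forward(self, x, txt_feats, id_feats):
        q = self.to_q(x)
        scale = q.shape[-1] ** -0.5

        def attend(k, v):
            w = torch.softmax(q @ k.transpose(-2, -1) * scale, dim=-1)
            return w @ v

        out_txt = attend(self.to_k(txt_feats), self.to_v(txt_feats))
        out_id = attend(self.to_k_id(id_feats), self.to_v_id(id_feats))
        return out_txt + out_id
```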
3.2 Basic Settings We build our model on the pre-trained SDXL [26], a SOTA T2I latent diffusion model. Our ID encoder employs two backbones commonly used within the ID customization domain, the face recognition model [4] and the CLIP image encoder [27], to extract ID features. Specifically, we concatenate the feature vectors from the last layer of both backbones (for the CLIP image encoder, we use the CLS token feature) and employ a Multilayer Perceptron (MLP) to map them into 5 tokens as the global ID features. Additionally, following ELITE\u2019s approach [40], we use MLPs to map the multi-layer features of CLIP to another 5 tokens, serving as the local ID features. It is worth noting that our method is not restricted to a specific encoder. 3.3 Discussion on Common Diffusion Training in ID Customization Currently, tuning-free ID customization methods generally face a challenge: the embedding of the ID disrupts the behavior of the original model. The disruption manifests in two ways: firstly, the ID-irrelevant elements in the generated image (e.g., background, lighting, composition, and style) change extensively compared to before the ID insertion; secondly, there is a loss of prompt adherence, implying that we can hardly edit the ID attributes, orientations, and accessories with the prompt. Typically, models with higher ID fidelity suffer more severe disruptions. Before we present our solutions, we first analyze why conventional diffusion training causes this issue. In the conventional ID customization diffusion training process, as formulated in Eq. 1, the ID condition Cid is usually cropped from the target image x0 [42, 38]. In this scenario, the ID condition aligns completely with the prompt and UNET features, implying that the ID condition does not constitute contamination to the T2I diffusion model during training. This essentially forms a reconstruction training task. So, to better reconstruct x0 (or predict the noise \u03f5), the model will make the utmost effort to use all the information from the ID features (which likely contain ID-irrelevant information), as well as bias the training parameters towards the dataset distribution, typically the realistic portrait domain. Consequently, during testing, when we provide a prompt that conflicts or is misaligned with the ID condition, such as altering ID attributes or changing styles, these methods tend to fail. This is because there exists a disparity between the testing and training settings. 3.4 Uncontaminated ID Insertion via Contrastive Alignment While it is difficult to ascertain whether the insertion of ID disrupts the original model\u2019s behavior during conventional diffusion training, it is rather easy to recognize under the test settings. For instance, we can easily observe whether the elements of the image change after the ID is embedded, and whether the model still possesses prompt-following ability. Thus, our solution is intuitive. We introduce a Lightning T2I training branch beyond the conventional diffusion-denoising training branch. Just like in the test setting, the Lightning T2I branch starts from pure noise and goes through the full iterative denoising steps until reaching x0. Leveraging recent fast sampling methods [23, 32, 21], the Lightning T2I branch can generate high-quality images from pure noise with a limited and manageable number of steps. Figure 3: Effect of Lalign-sem and Lalign-layout (example prompt: \"a man, riding a bike, sketch\"). Concretely, we employ SDXL-Lightning [21] with 4 denoising steps.
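Conceptually, this branch produces a contrastive pair from shared inputs; the sketch below assumes hypothetical unet/scheduler signatures (not the diffusers API) purely for illustration:

```python
import torch

def run_branch(unet, scheduler, latent, prompt_emb, id_emb=None):
    """One pass of the few-step Lightning T2I branch (illustrative API)."""
    x = latent
    for t in scheduler.timesteps:              # e.g., 4 distilled steps
        eps = unet(x, t, prompt_emb, id_emb)   # id_emb=None -> original model
        x = scheduler.step(eps, t, x)          # one denoising update
    return x

def contrastive_pair(unet, scheduler, prompt_emb, id_emb, shape):
    """Same prompt and same initial noise; only the ID condition differs."""
    noise = torch.randn(shape)
    with torch.no_grad():                      # the ID-free path is a frozen reference
        x_plain = run_branch(unet, scheduler, noise, prompt_emb)
    x_id = run_branch(unet, scheduler, noise, prompt_emb, id_emb)
    return x_plain, x_id
```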
We prepare a list of challenging prompts that can easily reveal contamination, as shown in Table 3. During each training iteration, a random prompt from this list is chosen as the textual condition for the Lightning T2I branch. Then, we construct contrastive paths that start from the same prompt and initial latent. One path is conditioned only on the prompt, while the other employs both the ID and the prompt as conditions. By semantically aligning the UNET features on these two paths, the model learns how to embed the ID without impacting the behavior of the original model. The overview of our method is shown in Fig. 2. We choose to align the contrastive paths in their corresponding UNET cross-attention layers. Specifically, we denote the UNET features in the path without ID embedding as Qt, and the corresponding UNET features in the contrastive path with ID embedding as Qtid. For simplicity, we omit the specific layers and denoising steps here; in actuality, alignment is conducted across all layers and timesteps. Our alignment loss consists of two components: the semantic alignment loss and the layout alignment loss. We use the textual features K to query the UNET features Q. For each token in K, we calculate its correlation with Q and further aggregate Q based on the correlation matrix. Analogous to Eq. 2, the attention mechanism here can be expressed as Attention(K, Q, Q), which can be interpreted as the response of the UNET features to the prompt. The insight behind our semantic alignment loss is simple: if the embedding of the ID does not affect the original model\u2019s behavior, then the response of the UNET features to the prompt should be similar on both paths. Therefore, our semantic alignment loss Lalign-sem is formulated as follows: Lalign-sem = \u2225Softmax(KQtid^T/\u221ad)Qtid \u2212 Softmax(KQt^T/\u221ad)Qt\u2225\u00b2. (3) As illustrated in Fig. 3, the introduction of Lalign-sem significantly mitigates the issue of ID information contaminating the model\u2019s behavior. However, it cannot guarantee layout consistency, so we add a layout alignment loss Lalign-layout, defined as: Lalign-layout = \u2225Qtid \u2212 Qt\u2225\u00b2. (4) The full alignment loss is formulated as Lalign = \u03bbalign-semLalign-sem + \u03bbalign-layoutLalign-layout, (5) where \u03bbalign-sem and \u03bbalign-layout serve as hyperparameters that determine the relative importance of each loss item. In practice, we set \u03bbalign-layout to a relatively small value, as we found that a larger value compromises the ID fidelity.
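A per-layer PyTorch sketch of Eqs. (3)-(5); in the paper the losses are accumulated over all cross-attention layers and denoising steps, mean squared error is used here up to a normalization constant, and the helper names are ours (the \u03bb values follow Section 4.1):

```python
import torch
import torch.nn.functional as F

def alignment_losses(q_t, q_tid, k_txt, lam_sem=0.6, lam_layout=0.1):
    """Semantic (Eq. 3) and layout (Eq. 4) alignment losses for one layer.

    q_t:   (B, N, d) UNET features on the path without ID; treated as a
           constant target (it should be detached upstream)
    q_tid: (B, N, d) UNET features on the path with ID
    k_txt: (B, M, d) textual features K used as queries
    """
    d = q_t.shape[-1]

    def response(q):
        # Attention(K, Q, Q): how the UNET features respond to the prompt.
        attn = torch.softmax(k_txt @ q.transpose(-2, -1) / d ** 0.5, dim=-1)
        return attn @ q                                   # (B, M, d)

    l_sem = F.mse_loss(response(q_tid), response(q_t))    # Eq. 3
    l_layout = F.mse_loss(q_tid, q_t)                     # Eq. 4
    return lam_sem * l_sem + lam_layout * l_layout        # Eq. 5
```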
3.5 Optimizing ID Loss in a More Accurate Setting In ID customization tasks, ensuring a high degree of ID fidelity is essential, given our innate human sensitivity to facial features. To improve the ID fidelity, aside from enhancements to the ID encoder [42, 38, 44], another universal and parallel improvement is the introduction of an ID loss [4, 2, 25] during training. However, these methods directly predict x0 at the t-th timestep of the diffusion training process using only a single step. This produces a noisy and flawed predicted x0, subsequently leading to an inaccurate calculation of the ID loss. To ease this issue, recent work [25] proposes to apply the ID loss only at less noisy stages; however, the ID loss then affects only a portion of the timesteps, which may limit its full effectiveness. In this study, thanks to the introduced Lightning T2I branch, the above issue can be fundamentally resolved. Firstly, we can swiftly generate an accurate x0 conditioned on the ID from pure noise within 4 steps. Consequently, calculating the ID loss on this x0, which is very close to the real-world data distribution, is evidently more precise. Secondly, optimizing the ID loss in a setting that aligns with the testing phase is more direct and effective. Formally, the ID loss Lid is defined as: Lid = 1 \u2212 CosSim(\u03d5(Cid), \u03d5(L-T2I(xT, Cid, Ctxt))), (6) where xT denotes the pure noise, L-T2I represents the Lightning T2I branch, and \u03d5 denotes the face recognition backbone [4]. To generate photo-realistic faces, we fix the prompt Ctxt to \"portrait, color, cinematic\". 3.6 Full Objective The full learning objective is defined as: L = Ldiff + Lalign + \u03bbidLid. (7) During training, only the newly introduced MLPs and the learnable linear layers Kid and Vid in the cross-attention layers are optimized with this objective, with the rest remaining frozen. 4 Experiments 4.1 Implementation Details We build our PuLID model based on SDXL [26] and the 4-step SDXL-Lightning [21]. For the ID encoder, we use antelopev2 [4] as the face recognition model and EVA-CLIP [34] as the CLIP image encoder. Our training dataset comprises 1.5 million high-quality human images collected from the Internet, with captions automatically generated by BLIP-2 [18]. Our training process consists of three stages. In the first stage, we use the conventional diffusion loss Ldiff to train the model. In the second stage, we resume from the first-stage model and train with the ID loss Lid (we use arcface-50 [4] to calculate the ID loss) and the diffusion loss Ldiff. This model strives for maximum ID fidelity without considering the contamination of the original model. In the third stage, we add the alignment loss Lalign and use the full objective in Eq. 7 to fine-tune the model. We set \u03bbalign-sem to 0.6, \u03bbalign-layout to 0.1, and \u03bbid to 1.0. In the Lightning T2I training branch, we set the resolution of the generated image to 768 \u00d7 768 to conserve memory. Training is performed with PyTorch and diffusers on 8 NVIDIA A100 GPUs in an internal cluster. 4.2 Test Settings For consistency in comparison, unless otherwise specified, all the results in this paper are generated with the SDXL-Lightning [21] base model over 4 steps using the DPM++ 2M sampler [15]. The CFG scale is set to 1.2, as recommended by [21]. Moreover, for each comparison sample, all methods utilize the same seed. We find that the comparison methods, namely InstantID [38] and IPAdapter (more specifically, IPAdapter-FaceID [1]), are highly compatible with the SDXL-Lightning model. Compared to using SDXL-base [26] as the base model, employing SDXL-Lightning results in InstantID generating more natural and aesthetically pleasing images, and enables IPAdapter to achieve higher ID fidelity. We also provide a quantitative comparison with these methods on SDXL-base, with the conclusions remaining consistent with those on SDXL-Lightning. To more effectively evaluate these methods, we collected a diverse portrait test set from the internet. This set covers a variety of skin tones, ages, and genders, totaling 120 images, which we refer to as DivID-120. As a supplementary resource, we also use a recent open-source test set, Unsplash-50 [6], which comprises 50 portrait images uploaded to the Unsplash website between February and March 2024.
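Since the comparisons below report ID fidelity as a cosine similarity between face embeddings, a small evaluation sketch; embed_face stands in for the recognition backbone (the paper evaluates with CurricularFace [13]) and is a placeholder here:

```python
import torch
import torch.nn.functional as F

def id_similarity(embed_face, ref_image, generated_images):
    """Mean cosine similarity between the reference face embedding and
    the embeddings of generated images; embed_face maps an image tensor
    to an embedding vector and is a stand-in for the actual backbone.
    """
    ref = F.normalize(embed_face(ref_image), dim=-1)
    sims = [(ref * F.normalize(embed_face(g), dim=-1)).sum(dim=-1)
            for g in generated_images]
    return torch.stack(sims).mean()
```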
4.3 Qualitative Comparison As shown in Fig. 4, when compared to SOTA methods such as IPAdapter and InstantID, our PuLID tends to achieve higher ID fidelity while creating less disruption to the original model. From rows 1, 2, 5, 6, and 7, it is clear that our method can attain high ID similarity in realistic portrait scenes while delivering better aesthetics. Conversely, other methods either fall short in ID fidelity or show diminished aesthetics compared to the base model. Another distinct advantage of our approach is that, as the disruption to the model decreases, the results produced by PuLID accurately replicate the lighting (1st row), style (4th row), and even layout (5th row) of the original model. In contrast, although comparative methods can also perform stylization, notable style degradation can be noticed when compared to the original model. Finally, our model also possesses respectable prompt-editing capabilities, such as changing orientation (2nd row), altering attributes (6th row), and modifying accessories (7th row). Figure 4: Qualitative comparisons. T2I w/o ID represents the output generated by the original T2I model without inserting the ID, which reflects the behavior of the original model. Our PuLID achieves higher ID fidelity while causing less disruption to the original model. As the disruption to the model is reduced, results generated by PuLID accurately reproduce the lighting (1st row), style (4th row), and even layout (5th row) of the original model. This unique advantage broadens the scope for a more flexible application of PuLID. More qualitative comparisons can be found in Fig. 1. 4.4 Quantitative Comparison To quantitatively compare the methods, we adopt the ID cosine similarity to evaluate ID fidelity, with ID embeddings extracted using CurricularFace [13]. CurricularFace is different from the face recognition models we use in the ID encoder and for the ID loss calculation. Table 1 presents the quantitative results. Table 1: Quantitative comparisons of ID cosine similarity with SOTA methods across different base models and datasets. PuLID (maximum ID sim) represents the model from the second training stage. | Method | SDXL-Lightning DivID-120 | SDXL-Lightning Unsplash-50 | SDXL-base DivID-120 | SDXL-base Unsplash-50 | | PhotoMaker | \u2013 | \u2013 | 0.271 | 0.193 | | IPAdapter | 0.619 | 0.615 | 0.597 | 0.572 | | InstantID | 0.725 | 0.614 | 0.755 | 0.648 | | PuLID (maximum ID sim) | 0.761 | 0.708 | 0.773 | 0.711 | | PuLID (ours) | 0.733 | 0.659 | 0.734 | 0.666 | As seen in the table, our model that strives for maximum ID fidelity (the model from the second training stage) outperforms existing methods on all test sets and base models. Even after introducing the alignment loss and sacrificing some ID fidelity, our final model, PuLID, still outperforms the comparative methods in most scenarios, except when employing SDXL-base as the base model, where it is slightly inferior to InstantID on DivID-120. Moreover, Table 1 shows that InstantID and IPAdapter are well compatible with SDXL-Lightning. IPAdapter exhibits some metric improvements after utilizing SDXL-Lightning. Despite a minor decline in metrics when transitioning to SDXL-Lightning, InstantID sees substantial improvements in image quality and usability (refer to the supplementary material for more details). Furthermore, our method still outperforms InstantID when using SDXL-base as the base model.
We observed that PhotoMaker shows limited compatibility with SDXL-Lightning, suffering significant performance degradation as a result. Hence, we only compare its performance on SDXL-base in this table. 4.5 More Applications We provide more applications of our PuLID in Fig. 5, encompassing style alterations (1st row), IP fusion (2nd row), accessories modification (3rd row), recontextualization (4th row), attribute editing (5th row), transformation from non-photo-realistic domains to photo-realistic ones (6th row), and ID mixing (7th row). Figure 5: More applications, including style changes, IP fusion, accessory modification, recontextualization, attribute editing, transformation from the non-photo-realistic domain to the photo-realistic domain, and ID mixing. Note that all these high-quality images are generated in just 4 steps with the SDXL-Lightning model, without the need for an additional LoRA. 4.6 Ablation Alignment loss ablation. Fig. 6 displays a comparison between models trained with and without the alignment loss Lalign. As observed, without Lalign, the embedding of the ID severely disrupts the behavior of the original model. This disruption manifests as an inability for the prompt to precisely modify style (columns 2-3) and orientation (column 4). Also, the layout collapses to the extent that the face occupies the majority of the image area, resulting in diminished diversification of the layout. However, with the introduction of our alignment loss, this disruption can be significantly reduced. Figure 6: Alignment loss ablation. ID loss ablation. Table 2 illustrates the improvement in ID fidelity using the naive ID loss (directly predicting x0 from the current timestep) and the more accurate ID loss Lid introduced in this paper, in comparison to the baseline. Table 2: ID loss ablation. | Setting | DivID-120 | Unsplash-50 | | Baseline (Stage 1) | 0.561 | 0.514 | | w/ ID loss (naive) | 0.652 | 0.601 | | w/ ID loss (Stage 2) | 0.761 | 0.708 | As observed, Lid accomplishes a greater improvement compared to the naive ID loss. We attribute this to the more precise x0 provided by the Lightning T2I branch, which also better aligns with the testing setting, thereby making the optimization of the ID loss more direct and effective." + } ] +} \ No newline at end of file