Title: Effortlessly Improving Personalized Face with Human-Object Interaction Generation

URL Source: https://arxiv.org/html/2501.05823

Published Time: Mon, 13 Jan 2025 01:28:14 GMT

Markdown Content:

Xinting Hu* Haoran Wang* Jan Eric Lenssen Bernt Schiele

xhu@mpi-inf.mpg.de hawang@mpi-inf.mpg.de jlenssen@mpi-inf.mpg.de schiele@mpi-inf.mpg.de

Max Planck Institute for Informatics, Saarland Informatics Campus, Germany

###### Abstract
We introduce PersonaHOI, a training- and tuning-free framework that fuses a general StableDiffusion model with a personalized face diffusion (PFD) model to generate identity-consistent human-object interaction (HOI) images. While existing PFD models have advanced significantly, they often overemphasize facial features at the expense of full-body coherence. PersonaHOI therefore introduces an additional StableDiffusion (SD) branch guided by HOI-oriented text inputs. By incorporating cross-attention constraints in the PFD branch and spatial merging at both the latent and residual levels, PersonaHOI preserves personalized facial details while generating coherent, interaction-aware non-facial regions. Experiments, validated by a novel interaction alignment metric, demonstrate the superior realism and scalability of PersonaHOI, establishing a new standard for practical personalized face generation with HOI. Code is available [here](https://github.com/JoyHuYY1412/PersonaHOI).
![Image 1: [Uncaptioned image]](https://arxiv.org/html/2501.05823v1/x1.png)

Figure 1: Examples of Personalized Face with Human-Object Interaction (HOI) Generation. We present PersonaHOI, a training- and tuning-free framework built on existing diffusion models. Using a single reference image and diverse HOI prompts, PersonaHOI generates identity-consistent human-object interactions, in contrast to FastComposer[[34](https://arxiv.org/html/2501.05823v1#bib.bib34)]. PersonaHOI can further seamlessly integrate varied contexts, styles, accessories, and multi-person scenarios, ensuring scalability and practicality for real-world applications.

*Equal contribution.

![Image 2: Refer to caption](https://arxiv.org/html/2501.05823v1/x2.png)
Figure 2: (a) The spatial layout of StableDiffusion guides PersonaHOI to generate personalized content with coherent human-object interactions (HOI). (b) Analysis of identity injection timing in PFD models. We use FastComposer[[34](https://arxiv.org/html/2501.05823v1#bib.bib34)] for diffusion model generation. Injecting the face representation at the start of image generation preserves facial details but lacks coherent HOI, while delayed injection progressively deviates from the original identity, resulting in random human features and meaningless human-object interactions.
1 Introduction
--------------
Personalized face generation has seen increasing public interest and demand in user-specified digital content creation. Current learning-based personalized face diffusion (PFD) models[[34](https://arxiv.org/html/2501.05823v1#bib.bib34), [17](https://arxiv.org/html/2501.05823v1#bib.bib17), [20](https://arxiv.org/html/2501.05823v1#bib.bib20), [29](https://arxiv.org/html/2501.05823v1#bib.bib29)], trained on large-scale face-centric datasets, can incorporate a single user-provided reference image into a text-to-image generation process, enabling the rapid creation of images that depict specific subjects in diverse scenes, outfits, and styles within seconds.

Although these methods perform well in simpler scenarios, they often struggle to generate full-body depictions involving detailed human-object interactions (HOI). As illustrated in Figure[1](https://arxiv.org/html/2501.05823v1#S0.F1 "Figure 1 ‣ PersonaHOI: Effortlessly Improving Personalized Face with Human-Object Interaction Generation"), the images generated by FastComposer[[34](https://arxiv.org/html/2501.05823v1#bib.bib34)] show missing objects or body parts, resulting in portrait-focused outputs without HOI information. This limitation compromises the overall realism and practical utility of the generated content, especially in immersive or interactive applications.

To diagnose the limitations of PFD models, we examine their outputs while varying the timing of identity injection during the generation process. As seen in the first and last columns of Figure[2](https://arxiv.org/html/2501.05823v1#S0.F2 "Figure 2 ‣ PersonaHOI: Effortlessly Improving Personalized Face with Human-Object Interaction Generation")(b), early injection effectively preserves facial details from the reference image, but relying solely on text input (last column) results in incoherent HOI outputs. This analysis highlights that the core challenge lies not in maintaining identity fidelity but in generating coherent body movements and interactions driven by text prompts. Notably, pre-trained models used for PFD, such as StableDiffusion[[24](https://arxiv.org/html/2501.05823v1#bib.bib24)], can generate satisfactory HOI images due to their extensive and diverse training data. This observation implies that fine-tuning PFD models on face-centric datasets diminishes their ability to follow complex HOI text prompts. Therefore, to advance current PFD methods on HOI tasks, it is essential to reintroduce the capability to leverage text prompts for natural full-body interactions.

Motivated by this need, we propose a straightforward yet powerful approach: leveraging the generative capabilities of pre-trained diffusion models, such as StableDiffusion (SD), to augment existing PFD frameworks. Our method, PersonaHOI, integrates the strengths of SD without requiring additional training or fine-tuning, thereby restoring the ability to generate realistic, text-driven full-body HOI while preserving identity-specific details from the reference image.

Central to this integration, we incorporate an additional SD branch that aligns identity-specific details from PFD models with the HOI layouts generated by SD. A head segmentation mask derived from the SD output guides the merging of non-facial components from the SD branch with the facial details from the PFD branch. Specifically, in the PFD branch, we introduce a Cross-Attention Constraint to prevent the overemphasis of identity features across the entire image, ensuring these details are confined to the facial region. To integrate HOI representations from SD, we implement Latent Merge and Residual Merge, merging identity and contextual features both at the latent representation level and via skip connections within the U-Net architecture. This multi-level merging strategy ensures that the generated images maintain realistic human-object interactions while preserving personalized facial details, resulting in cohesive, interaction-rich outputs.

To evaluate the effectiveness of our method in HOI scenarios, we propose a novel “interaction alignment” metric. This metric leverages HOI detectors to measure alignment between generated images and text prompts, objectively assessing interaction realism. Our contributions can be summarized as follows:
*   We present PersonaHOI, a dual-path architecture that integrates general and personalized diffusion models. Our framework generates realistic human-object interactions with specific identities, without requiring additional training or test-time tuning.
*   Our method introduces key components, including cross-attention constraints and a combination of latent and residual merge strategies. These innovations enable effective feature integration, ensuring that generated images maintain both interaction coherence and identity fidelity.
*   We design a novel evaluation metric for human-object interaction. Our experiments show significant improvements over state-of-the-art face personalization techniques in terms of interaction realism across models, emphasizing the scalability and robustness of our approach.
2 Related Works
---------------
Personalized Face Generation Diffusion Models. Personalized face generation aims to create identity-consistent images of individuals from limited reference data[[43](https://arxiv.org/html/2501.05823v1#bib.bib43), [32](https://arxiv.org/html/2501.05823v1#bib.bib32), [36](https://arxiv.org/html/2501.05823v1#bib.bib36)]. Given pretrained general image generation models[[24](https://arxiv.org/html/2501.05823v1#bib.bib24)], traditional optimization-based approaches[[25](https://arxiv.org/html/2501.05823v1#bib.bib25), [6](https://arxiv.org/html/2501.05823v1#bib.bib6), [26](https://arxiv.org/html/2501.05823v1#bib.bib26)] refine model parameters during inference for each individual but are computationally expensive, making them unsuitable for real-time applications. In contrast, learning-based approaches leverage human-centric datasets[[18](https://arxiv.org/html/2501.05823v1#bib.bib18), [14](https://arxiv.org/html/2501.05823v1#bib.bib14), [19](https://arxiv.org/html/2501.05823v1#bib.bib19), [1](https://arxiv.org/html/2501.05823v1#bib.bib1)] to train robust models capable of handling varying identity inputs without requiring test-time fine-tuning. Facial representations, extracted by models like CLIP[[22](https://arxiv.org/html/2501.05823v1#bib.bib22)] or other face recognition models[[33](https://arxiv.org/html/2501.05823v1#bib.bib33), [4](https://arxiv.org/html/2501.05823v1#bib.bib4), [15](https://arxiv.org/html/2501.05823v1#bib.bib15)], are often fused with[[34](https://arxiv.org/html/2501.05823v1#bib.bib34), [20](https://arxiv.org/html/2501.05823v1#bib.bib20), [17](https://arxiv.org/html/2501.05823v1#bib.bib17)] or replace[[30](https://arxiv.org/html/2501.05823v1#bib.bib30)] the textual embeddings of specific words, such as “person”, to provide identity conditions for diffusion models and enhance contextual relevance[[29](https://arxiv.org/html/2501.05823v1#bib.bib29), [4](https://arxiv.org/html/2501.05823v1#bib.bib4)]. In IP-Adapter[[37](https://arxiv.org/html/2501.05823v1#bib.bib37)], facial features are separately processed through decoupled cross-attention modules. Incorporating human masks as priors, methods can clean the background during data preprocessing; alternatively, masks can be leveraged to construct[[3](https://arxiv.org/html/2501.05823v1#bib.bib3), [34](https://arxiv.org/html/2501.05823v1#bib.bib34), [17](https://arxiv.org/html/2501.05823v1#bib.bib17), [20](https://arxiv.org/html/2501.05823v1#bib.bib20)] and adjust[[11](https://arxiv.org/html/2501.05823v1#bib.bib11)] loss functions during training. In our work, we extend face personalization to complex human-object interactions (HOI), achieving a higher level of personalization and adaptability for diverse generative tasks.

Human-Object Interaction in Diffusion Models. HOI detection aims to locate human-object pairs and categorize their interactions as triplets, such as (person, playing, football)[[40](https://arxiv.org/html/2501.05823v1#bib.bib40), [5](https://arxiv.org/html/2501.05823v1#bib.bib5), [38](https://arxiv.org/html/2501.05823v1#bib.bib38), [39](https://arxiv.org/html/2501.05823v1#bib.bib39)], while HOI image synthesis remains relatively under-explored. InteractGAN[[7](https://arxiv.org/html/2501.05823v1#bib.bib7)] generates HOI images using human pose templates and reference images, but its reliance on pose-template pools and object references limits flexibility. Recent diffusion-based approaches tackle HOI generation by incorporating layout controls such as bounding boxes[[16](https://arxiv.org/html/2501.05823v1#bib.bib16), [9](https://arxiv.org/html/2501.05823v1#bib.bib9), [13](https://arxiv.org/html/2501.05823v1#bib.bib13)] or human poses[[42](https://arxiv.org/html/2501.05823v1#bib.bib42), [16](https://arxiv.org/html/2501.05823v1#bib.bib16)], or by leveraging in-context samples with similar interactions[[10](https://arxiv.org/html/2501.05823v1#bib.bib10), [44](https://arxiv.org/html/2501.05823v1#bib.bib44)]. Additionally, ReCorD[[13](https://arxiv.org/html/2501.05823v1#bib.bib13)] integrates Latent Diffusion Models with Visual Language Models, introducing modules for reasoning about and correcting interactions to enhance cross-modal alignment. In this work, we rely on the standard text-to-image diffusion model (i.e., StableDiffusion[[24](https://arxiv.org/html/2501.05823v1#bib.bib24)]) for HOI generation, and fuse the generation process with personalized face diffusion models to generate identity-preserving HOI images.
![Image 3: Refer to caption](https://arxiv.org/html/2501.05823v1/x3.png)

Figure 3: Overview of Our Proposed Framework, PersonaHOI. The architecture integrates a personalized face diffusion (PFD) model with an additional StableDiffusion (SD) branch. First, SD generates an image $I_{SD}$ from a text prompt and a noisy latent representation $z_T$, which is decoded and segmented to produce a head mask. Next, SD and PFD run in parallel from the same $z_T$. At every timestep $t$, the head mask guides the Cross-Attention Constraint in PFD and the merging modules (Latent Merge and Residual Merge) to merge interaction-relevant features from SD with identity-specific details from PFD. Iteratively, this process introduces HOI context to personalized face generation in a training- and tuning-free manner.
3 Preliminary
-------------
Understanding the fundamentals of StableDiffusion (SD) is essential for leveraging its generative capabilities within our framework. SD is a latent diffusion model designed to generate high-quality images based on text or image prompts. It operates in a latent space, where an input image is encoded into a latent representation $\mathbf{z}_{0}$ using a pre-trained variational autoencoder (VAE) encoder, reducing the image dimensionality for efficient processing.

During the generation phase, the SD model begins with an initial noisy latent representation $\mathbf{z}_{T}$ and progressively denoises it over $T$ timesteps to reconstruct $\mathbf{z}_{0}$. This denoised latent is then decoded by the VAE decoder to produce the final image. At each timestep $t$, the U-Net $\theta$ predicts the noise $\epsilon(\mathbf{z}_{t},t,\mathbf{C})$, where $\mathbf{C}$ represents the conditioning inputs (text or image embeddings). The iterative denoising process follows:
$$\mathbf{z}_{t-1}=\text{Denoise}(\mathbf{z}_{t},\epsilon(\mathbf{z}_{t},t,\mathbf{C});\theta) \tag{1}$$
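Eq. (1) is simply a loop that repeatedly predicts and removes noise. A schematic NumPy sketch is below; `unet` and `scheduler_step` are toy stand-ins for the actual noise predictor and sampler update (e.g., a DDIM step), not part of the paper:

```python
import numpy as np

def denoise_loop(z_T, cond, unet, scheduler_step, T=50):
    """Schematic denoising: z_T -> z_0 via T noise-prediction steps (Eq. 1)."""
    z = z_T
    for t in range(T, 0, -1):
        eps = unet(z, t, cond)          # predict noise eps(z_t, t, C)
        z = scheduler_step(z, eps, t)   # z_{t-1} = Denoise(z_t, eps; theta)
    return z

# Toy stand-ins: a "U-Net" that predicts a fraction of the latent as noise,
# and a step rule that subtracts the predicted noise.
toy_unet = lambda z, t, c: 0.1 * z
toy_step = lambda z, eps, t: z - eps

z0 = denoise_loop(np.ones((4, 8, 8)), cond=None, unet=toy_unet,
                  scheduler_step=toy_step, T=10)
```

In a real SD pipeline the same loop runs with a trained U-Net and a scheduler such as DDIM in place of the toy callables.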
The U-Net architecture comprises an encoder-decoder structure with skip connections that transfer features between the encoder and decoder. Within this architecture, SD employs a cross-attention mechanism to incorporate conditioning inputs effectively during denoising. This mechanism projects latent representations and conditioning inputs into query, key, and value spaces:
$$\begin{aligned}\mathbf{Q}&=\mathbf{W}_{q}\mathbf{z},\quad\mathbf{K}=\mathbf{W}_{k}\mathbf{C},\quad\mathbf{V}=\mathbf{W}_{v}\mathbf{C};\\ \mathbf{A}&=\text{Softmax}\left(\frac{\mathbf{Q}\mathbf{K}^{\top}}{\sqrt{d}}\right),\quad\mathbf{z}_{\text{attn}}=\mathbf{A}\mathbf{V}.\end{aligned} \tag{2}$$
where $\mathbf{A}$ is the attention map that ensures relevant conditioning tokens guide the denoising at each generation step. This cross-attention mechanism is critical for aligning the generated image with the input prompts, enabling consistency and fidelity in the final output.
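Eq. (2) can be written out in a few lines of NumPy; this is a minimal sketch (row-vector convention, so projections appear as right-multiplications), not the SD implementation:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(z, C, W_q, W_k, W_v):
    """Eq. (2): latent tokens z (HW x d) attend to conditioning tokens C (N x d)."""
    Q = z @ W_q                          # (HW, d)
    K = C @ W_k                          # (N, d)
    V = C @ W_v                          # (N, d)
    d = Q.shape[-1]
    A = softmax(Q @ K.T / np.sqrt(d))    # (HW, N) attention map
    return A @ V, A                      # z_attn = A V

rng = np.random.default_rng(0)
HW, N, d = 16, 5, 8
z_attn, A = cross_attention(rng.normal(size=(HW, d)), rng.normal(size=(N, d)),
                            rng.normal(size=(d, d)), rng.normal(size=(d, d)),
                            rng.normal(size=(d, d)))
```

Each row of `A` is a distribution over the $N$ conditioning tokens for one spatial location, which is exactly the quantity the Cross-Attention Constraint later masks.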
4 Method
--------
### 4.1 Overall Architecture
As illustrated in Figure[3](https://arxiv.org/html/2501.05823v1#S2.F3 "Figure 3 ‣ 2 Related Works ‣ PersonaHOI: Effortlessly Improving Personalized Face with Human-Object Interaction Generation"), PersonaHOI augments the PFD model by integrating an additional StableDiffusion (SD) branch, enhancing its capacity to generate personalized images that capture complex human-object interactions (HOI) in a training-free manner. Given an input comprising a user-provided reference image and a text prompt specifying the interaction (e.g., “a person kicking a football”), our framework produces a cohesive output image that retains the subject’s identity while accurately depicting the specified HOI.

To effectively merge identity features from PFD and interaction features from SD, we introduce three core strategies: Cross-Attention Constraint, which regulates attention to reference image features within the PFD model (detailed in Sec.[4.2](https://arxiv.org/html/2501.05823v1#S4.SS2 "4.2 Cross-Attention Constraint (CAC) ‣ 4 Method ‣ PersonaHOI: Effortlessly Improving Personalized Face with Human-Object Interaction Generation")); Latent Merge, which combines features in the latent space at each generation timestep (Sec.[4.3](https://arxiv.org/html/2501.05823v1#S4.SS3 "4.3 Latent Merge (LM) ‣ 4 Method ‣ PersonaHOI: Effortlessly Improving Personalized Face with Human-Object Interaction Generation")); and Residual Merge, which integrates identity details through skip connections in the U-Net architecture (Sec.[4.4](https://arxiv.org/html/2501.05823v1#S4.SS4 "4.4 Residual Merge (RM) ‣ 4 Method ‣ PersonaHOI: Effortlessly Improving Personalized Face with Human-Object Interaction Generation")). Together, these strategies enable PersonaHOI to align generated content with interaction-driven text prompts while maintaining identity fidelity. Furthermore, this design makes PersonaHOI adaptable to various PFD models without the need for model retraining or test-time tuning.
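The overall two-stage control flow can be outlined as follows. This is a schematic sketch with stubbed components (`sd`, `pfd`, `segment_head`, `decode` are placeholders standing in for the real branches), not the released code:

```python
def persona_hoi(prompt, z_T, sd, pfd, segment_head, decode):
    """Schematic PersonaHOI pipeline: one SD pass fixes the HOI layout,
    then SD and PFD run in parallel and are fused via the head mask."""
    # Stage 1: plain SD generation yields the HOI layout and a head mask.
    I_sd = decode(sd.denoise(z_T, prompt))
    mask = segment_head(I_sd)                          # 1 inside head, 0 elsewhere

    # Stage 2: run SD and PFD in parallel from the same z_T; at every step,
    # constrain PFD's identity attention to the mask and merge the latents.
    z = z_T
    for t in sd.timesteps():
        z_sd = sd.step(z, t, prompt)
        z_pfd = pfd.step(z, t, prompt, attn_mask=mask)  # Cross-Attention Constraint
        z = mask * z_pfd + (1 - mask) * z_sd            # Latent Merge (Eq. 4)
    return decode(z)

# Tiny stand-in branches to exercise the control flow (not real diffusion models).
import numpy as np

class ToySD:
    def denoise(self, z, prompt): return z
    def timesteps(self): return range(3, 0, -1)
    def step(self, z, t, prompt): return 0.5 * z

class ToyPFD:
    def step(self, z, t, prompt, attn_mask=None): return 2.0 * z

head_mask = np.array([1.0, 0.0])        # "head" pixel first, background second
out = persona_hoi("a person kicking a football", np.array([1.0, 1.0]),
                  ToySD(), ToyPFD(), segment_head=lambda img: head_mask,
                  decode=lambda z: z)
```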
### 4.2 Cross-Attention Constraint (CAC)
In the cross-attention layers of the PFD model, the conditioning inputs $\mathbf{C}\in\mathbb{R}^{N\times D}$ include both text and image embeddings to guide the generation process, where $N$ is the number of tokens and $D$ is the embedding dimension. Following established PFD approaches[[34](https://arxiv.org/html/2501.05823v1#bib.bib34), [20](https://arxiv.org/html/2501.05823v1#bib.bib20), [17](https://arxiv.org/html/2501.05823v1#bib.bib17)], the $n_{img}$-th token in $\mathbf{C}$ incorporates the image embedding, encapsulating identity-specific details essential for facial preservation, while the remaining $N-1$ tokens correspond to text embeddings. In standard PFD models, attention to the $n_{img}$-th token often spreads across the image, overemphasizing facial features and thus decreasing the spatial capacity for representing body movements and object interactions.
To address this, we introduce a Cross-Attention Constraint (CAC) within the cross-attention layers of the PFD model. CAC restricts identity features to specific facial regions, ensuring sufficient spatial capacity for body and object interactions. Different from the attention localization loss used in training[[34](https://arxiv.org/html/2501.05823v1#bib.bib34), [20](https://arxiv.org/html/2501.05823v1#bib.bib20)], our CAC utilizes a fine-grained head mask derived from the SD-generated image $I_{SD}$ during the image generation process, ensuring that the attention constraint in PFD is consistent with the HOI layout generated by SD. The head mask is segmented by an off-the-shelf head segmentor[[23](https://arxiv.org/html/2501.05823v1#bib.bib23)], then resized to match the spatial dimensions $(H\times W)$ of the latent representation $\mathbf{z}$, and denoted as $\mathbf{M}^{\text{head}}\in[0,1]^{H\times W}$ (1 for the head region and 0 for other regions). We apply $\mathbf{M}^{\text{head}}$ to the attention map $\mathbf{A}\in\mathbb{R}^{(H\times W)\times N}$ as follows:
$$\begin{aligned}\mathbf{M}^{CAC}_{i\in[0,1,\dots,N-1]}&=\begin{cases}\text{flatten}(\mathbf{M}^{head}),&\text{if }i=\textit{img},\\ 1,&\text{otherwise}\end{cases}\\ \mathbf{A}^{CAC}&=\mathbf{A}\odot(\mathbf{M}^{CAC})^{\intercal}.\end{aligned} \tag{3}$$
$\mathbf{M}^{CAC}$ specifically modifies the attention weights for the $n_{img}$ token by setting the weights to 0 in regions outside the head, preventing the influence of human identity on non-facial areas. Since the head mask is derived from SD-generated outputs, CAC also enhances facial layout coherence with SD. When further combined with Latent and Residual Merge, CAC helps blend facial features from PFD and non-facial interaction features from SD without mutual interference.
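Concretely, Eq. (3) zeroes one column of the attention map outside the head region. A minimal NumPy sketch (shapes and names are illustrative):

```python
import numpy as np

def cross_attention_constraint(A, head_mask, img_token):
    """Eq. (3): restrict attention to the identity (image) token to the head
    region; the text-token columns are left untouched.
    A:         (H*W, N) attention map
    head_mask: (H, W) binary head mask, 1 = head
    img_token: column index of the image-embedding token (n_img)
    """
    A = A.copy()
    A[:, img_token] *= head_mask.reshape(-1)  # zero identity attention off-head
    return A

H, W, N, img_token = 4, 4, 6, 5
A = np.full((H * W, N), 0.5)
mask = np.zeros((H, W))
mask[:2, :] = 1.0                             # head occupies the top half
A_cac = cross_attention_constraint(A, mask, img_token)
```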
### 4.3 Latent Merge (LM)
In this section, we introduce Latent Merge, which directly merges the latent representations within the U-Nets of the PFD and SD models. After applying the Cross-Attention Constraint (CAC) of Sec.[4.2](https://arxiv.org/html/2501.05823v1#S4.SS2 "4.2 Cross-Attention Constraint (CAC) ‣ 4 Method ‣ PersonaHOI: Effortlessly Improving Personalized Face with Human-Object Interaction Generation"), the facial region generated by the PFD model becomes spatially aligned with the corresponding region in the broader HOI layout produced by the SD model. This alignment facilitates a spatial merge, blending facial identity details from PFD with interaction contexts from SD in latent space.
At each diffusion timestep $t$ of PFD and SD, we implement the latent-space merging with $\mathbf{M}^{head}$ as follows:
$$\mathbf{z}_{t}=\mathbf{M}^{head}\odot\mathbf{z}_{t}^{\text{PFD}}+(1-\mathbf{M}^{head})\odot\mathbf{z}_{t}^{\text{SD}}. \tag{4}$$
Here, $\mathbf{z}_{t}^{\text{PFD}}$ and $\mathbf{z}_{t}^{\text{SD}}$ represent the latent features from PFD and SD at timestep $t$, respectively. The merged latent $\mathbf{z}_{t}$ then serves as input to both the SD and PFD models at the next denoising timestep $t-1$. This merging strategy allows identity-specific details to be retained within facial regions, while non-facial regions maintain the coherent interaction context generated by SD.
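Eq. (4) amounts to a masked blend of the two latents. A minimal NumPy sketch (tensor layout is an assumption; SD latents are typically channel-first):

```python
import numpy as np

def latent_merge(z_pfd, z_sd, head_mask):
    """Eq. (4): keep PFD latents inside the head region, SD latents elsewhere.
    z_pfd, z_sd: (C, H, W) latents; head_mask: (H, W) with 1 = head."""
    m = head_mask[None]                 # broadcast the mask over channels
    return m * z_pfd + (1.0 - m) * z_sd

C, H, W = 4, 8, 8
z_pfd = np.ones((C, H, W))              # stand-in identity latents
z_sd = np.zeros((C, H, W))              # stand-in interaction latents
mask = np.zeros((H, W))
mask[:3, :] = 1.0                       # head occupies the top rows
z = latent_merge(z_pfd, z_sd, mask)
```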
![Image 4: Refer to caption](https://arxiv.org/html/2501.05823v1/x4.png)

Figure 4: Illustration of Residual Merge. In each residual layer, Residual Merge operates within the U-Net skip connections, utilizing a head mask to guide the integration of high-frequency identity details from PFD residuals and low-frequency interaction layouts from SD residuals. The merged residuals are then concatenated with the corresponding bottleneck features from PFD.
Table 1: Comparison of Our Method with Baseline Approaches on HOI-Specific Personalized Face Generation. StableDiffusion serves as the text-only baseline without subject conditioning. PersonaHOI seamlessly incorporates existing personalized face diffusion models (FastComposer[[34](https://arxiv.org/html/2501.05823v1#bib.bib34)], IP-Adapter[[37](https://arxiv.org/html/2501.05823v1#bib.bib37)], PhotoMaker[[17](https://arxiv.org/html/2501.05823v1#bib.bib17)]) with their corresponding StableDiffusion architectures. We bold the higher number in each comparison pair.
### 4.4 Residual Merge (RM)
While Latent Merge enables interactions between the SD and PFD at the latent level, it lacks integration at the intermediate feature stage within the U-Net. To address this, we introduce Residual Merge, which enhances feature integration within the skip connections. In the U-Net, skip connections transmit residual features containing rich detail from the encoder to the decoder, significantly impacting the content and quality of the generated images[[31](https://arxiv.org/html/2501.05823v1#bib.bib31), [12](https://arxiv.org/html/2501.05823v1#bib.bib12)].
Our Residual Merge is applied at each skip-connection layer $l$ ($l=1,\dots,L$), as shown in Figure[4](https://arxiv.org/html/2501.05823v1#S4.F4 "Figure 4 ‣ 4.3 Latent Merge (LM) ‣ 4 Method ‣ PersonaHOI: Effortlessly Improving Personalized Face with Human-Object Interaction Generation"). First, residual features are extracted from both the PFD and SD U-Net backbones. Low-pass and high-pass filters are then applied to the SD and PFD residuals, respectively, balancing global coherence and local precision: low-pass filtering the SD residuals suppresses high-frequency noise to retain the broader scene layout, while high-pass filtering the PFD residuals emphasizes fine-grained identity-specific details. To achieve precise integration between facial and non-facial regions, we apply the head segmentation mask $\mathbf{M}^{\text{head}}$, resized to match the residual resolution at layer $l$ and denoted as $\mathbf{M}^{l}_{\text{R}}$. The fusion of residual features from both paths is then implemented for each layer $l$ as follows:
$$\mathbf{R}^{l}_{\text{merged}}=\mathbf{M}^{l}_{R}\odot\text{HP}(\mathbf{R}^{l}_{\text{PFD}})+(1-\mathbf{M}^{l}_{R})\odot\text{LP}(\mathbf{R}^{l}_{\text{SD}}), \tag{5}$$
where $\text{HP}(\cdot)$ and $\text{LP}(\cdot)$ denote the high-pass and low-pass filters, and $\mathbf{R}^{l}_{\text{PFD}}$ and $\mathbf{R}^{l}_{\text{SD}}$ denote the residual features from the PFD and SD paths, respectively. The merged residual $\mathbf{R}^{l}_{\text{merged}}$ is then concatenated with the bottom-up features in the PFD U-Net for further processing.
120
+
121
+ This Residual Merge strategy enables fine-grained feature integration within the U-Net. Through this mechanism, facial regions retain high-frequency identity details from PFD while non-facial regions incorporate interaction layouts from SD, ensuring cohesive and contextually aligned outputs.
122
+
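The masked filter-and-blend operation above can be sketched in a few lines. The following NumPy snippet is a minimal illustration, not the authors' implementation: it uses a simple box filter as the low-pass kernel (the supplement ablates Gaussian kernels) and assumes the head mask has already been resized to the residual resolution of the layer.

```python
import numpy as np

def low_pass(x: np.ndarray, k: int = 5) -> np.ndarray:
    """Box-filter low pass over the spatial dims of a (C, H, W) feature map."""
    pad = k // 2
    xp = np.pad(x, ((0, 0), (pad, pad), (pad, pad)), mode="edge")
    out = np.zeros_like(x)
    for dy in range(k):
        for dx in range(k):
            out += xp[:, dy:dy + x.shape[1], dx:dx + x.shape[2]]
    return out / (k * k)

def high_pass(x: np.ndarray, k: int = 5) -> np.ndarray:
    """High pass as the complement of the low-pass component."""
    return x - low_pass(x, k)

def residual_merge(r_pfd: np.ndarray, r_sd: np.ndarray,
                   mask: np.ndarray, k: int = 5) -> np.ndarray:
    """Masked blend of high-pass PFD and low-pass SD residuals (Eq. 5)."""
    return mask * high_pass(r_pfd, k) + (1.0 - mask) * low_pass(r_sd, k)
```

The mask broadcasts over the channel dimension, so a single-channel head mask selects PFD detail inside the face region and SD layout elsewhere.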
123
+ 5 Experiments
124
+ -------------
125
+
126
+ ![Image 5: Refer to caption](https://arxiv.org/html/2501.05823v1/x5.png)
127
+
128
+ Figure 5: Qualitative Examples of PersonaHOI and Baseline Models. Comparison of baseline models (FastComposer[[34](https://arxiv.org/html/2501.05823v1#bib.bib34)], IP-Adapter[[37](https://arxiv.org/html/2501.05823v1#bib.bib37)], PhotoMaker[[17](https://arxiv.org/html/2501.05823v1#bib.bib17)]) and their PersonaHOI-enhanced results for diverse human-object interaction prompts.
129
+
130
+ ### 5.1 Setup
131
+
132
+ Evaluation Data. We use two test sets for evaluation:
133
+
134
+ 1) General Personalized Face Generation: Following prior studies[[34](https://arxiv.org/html/2501.05823v1#bib.bib34), [20](https://arxiv.org/html/2501.05823v1#bib.bib20)], we compile a benchmark with 15 reference subjects, each paired with 40 prompts spanning diverse scenarios such as context, stylization, accessories, and actions. These prompts assess both identity retention and the model’s ability to adapt to various text-guided settings. Full prompt details are provided in the appendix.
135
+
136
+ 2) HOI-Focused Personalized Face Generation: To test complex human-object interactions (HOI), we use interaction labels from the widely-used V-COCO[[8](https://arxiv.org/html/2501.05823v1#bib.bib8)] dataset. V-COCO provides diverse HOI scenarios, enabling robust evaluation of interaction fidelity. We construct 30 prompts in the “subject + interaction + object” format (e.g., “woman carrying a handbag”, “man kicking a ball”) using the same 15 reference subjects as in 1).
137
+
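Prompt construction in this "subject + interaction + object" format is mechanical. The sketch below is illustrative only: the subject and interaction lists are hypothetical stand-ins for the paper's 15 reference subjects and the V-COCO interaction labels.

```python
# Hypothetical lists for illustration; the paper uses its own 15 reference
# subjects and interaction labels drawn from the V-COCO dataset.
SUBJECTS = ["woman", "man"]
INTERACTIONS = [
    ("carrying", "a handbag"),
    ("kicking", "a ball"),
    ("riding", "a bicycle"),
]

def build_hoi_prompts(subjects, interactions):
    """Compose 'subject + interaction + object' prompts, as in Sec. 5.1."""
    return [f"a {s} {verb} {obj}"
            for s in subjects
            for verb, obj in interactions]
```

Pairing every subject with every interaction yields a prompt set such as "a woman carrying a handbag" or "a man kicking a ball", mirroring the paper's examples.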
138
+ Metrics. We employ the following evaluation metrics:
139
+
140
+ 1) Identity Preservation: Measures how well the generated images retain the identity of the reference image. Face detection is performed using MTCNN[[41](https://arxiv.org/html/2501.05823v1#bib.bib41)], and identity similarity is calculated with FaceNet[[27](https://arxiv.org/html/2501.05823v1#bib.bib27)], following[[34](https://arxiv.org/html/2501.05823v1#bib.bib34), [20](https://arxiv.org/html/2501.05823v1#bib.bib20)].
141
+
142
+ 2) Prompt Consistency: Evaluates text-image alignment using CLIP-based[[22](https://arxiv.org/html/2501.05823v1#bib.bib22)] scores, where higher values indicate better prompt adherence.
143
+
144
+ 3) HOI Alignment (Ours): Assesses how well the generated images depict the specified human-object interactions. We use UPT[[40](https://arxiv.org/html/2501.05823v1#bib.bib40)] as the HOI detector and report its score for the specified “subject + interaction + object” triplet in each generated image.
145
+
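The identity-preservation metric reduces to a similarity between face embeddings. A minimal sketch follows, with embeddings assumed precomputed as plain vectors (in the paper they come from MTCNN-detected faces passed through FaceNet); cosine similarity is a common choice for such embedding comparisons, used here as an illustrative assumption.

```python
import numpy as np

def identity_similarity(emb_ref: np.ndarray, emb_gen: np.ndarray) -> float:
    """Cosine similarity between a reference and a generated face embedding.

    Both inputs are 1-D embedding vectors; the score is 1.0 for identical
    directions and 0.0 for orthogonal ones.
    """
    a = emb_ref / np.linalg.norm(emb_ref)
    b = emb_gen / np.linalg.norm(emb_gen)
    return float(np.dot(a, b))
```

Averaging this score over all generated images for a subject gives the per-subject identity-preservation number reported in the tables.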
146
+ Compared Methods and Implementation Details. We compare our method with state-of-the-art learning-based personalized face diffusion models, including FastComposer[[34](https://arxiv.org/html/2501.05823v1#bib.bib34)], IP-Adapter[[37](https://arxiv.org/html/2501.05823v1#bib.bib37)], and PhotoMaker[[17](https://arxiv.org/html/2501.05823v1#bib.bib17)]. PersonaHOI uses their pre-trained diffusion backbones as the additional branch, i.e., StableDiffusion v1.5 for FastComposer and StableDiffusion XL for PhotoMaker, with output resolutions of 512×512 and 1024×1024, respectively. Head segmentation for the SD-generated image is performed with DensePose[[23](https://arxiv.org/html/2501.05823v1#bib.bib23)] owing to its robust segmentation accuracy. No additional training or test-time tuning is applied to any of the compared methods.
147
+
148
+ Table 2: Comparison of Our Method with FastComposer[[34](https://arxiv.org/html/2501.05823v1#bib.bib34)] on General Personalized Face Generation. We compare across four categories of text prompts including Accessory, Style, Action, and Context, following[[34](https://arxiv.org/html/2501.05823v1#bib.bib34), [20](https://arxiv.org/html/2501.05823v1#bib.bib20)]. Results are formatted as “Identity Preservation (%) / Prompt Consistency (%)”, and we bold the higher number in each pair of comparisons.
149
+
150
+ ### 5.2 Personalized Face with HOI Generation
151
+
152
+ Qualitative Comparison. Figure[5](https://arxiv.org/html/2501.05823v1#S5.F5 "Figure 5 ‣ 5 Experiments ‣ PersonaHOI: Effortlessly Improving Personalized Face with Human-Object Interaction Generation") illustrates qualitative comparisons between baseline models (FastComposer[[34](https://arxiv.org/html/2501.05823v1#bib.bib34)], IP-Adapter[[37](https://arxiv.org/html/2501.05823v1#bib.bib37)], PhotoMaker[[17](https://arxiv.org/html/2501.05823v1#bib.bib17)]) and their PersonaHOI-enhanced counterparts for diverse human-object interaction prompts. The baseline methods often fail to produce meaningful interactions, leading to missing or poorly integrated objects and unnatural human-object dynamics. For instance, for the prompt “A man skateboarding with skateboard” (first row of Figure[5](https://arxiv.org/html/2501.05823v1#S5.F5 "Figure 5 ‣ 5 Experiments ‣ PersonaHOI: Effortlessly Improving Personalized Face with Human-Object Interaction Generation")), FastComposer[[34](https://arxiv.org/html/2501.05823v1#bib.bib34)] and PhotoMaker omit the skateboard entirely, while IP-Adapter generates the wrong interaction (the subject merely “holding” the skateboard), yielding outputs inconsistent with the text prompt. In contrast, the PersonaHOI-enhanced models generate realistic skateboard placement and natural human-object dynamics. Moreover, PersonaHOI also preserves, with high fidelity, the facial identity given in the reference images.
153
+
154
+ Quantitative Comparison. Table[4](https://arxiv.org/html/2501.05823v1#S8.T4 "Table 4 ‣ 8.1 More Quantitative Results ‣ 8 More Results on General PFG ‣ PersonaHOI: Effortlessly Improving Personalized Face with Human-Object Interaction Generation") compares the performance of PersonaHOI across various baseline models. For FastComposer[[34](https://arxiv.org/html/2501.05823v1#bib.bib34)] and PhotoMaker[[17](https://arxiv.org/html/2501.05823v1#bib.bib17)], PersonaHOI achieves consistent improvements across all metrics. Notably, Interaction Alignment improves by 20.69% for FastComposer[[34](https://arxiv.org/html/2501.05823v1#bib.bib34)] and 19.24% for PhotoMaker[[17](https://arxiv.org/html/2501.05823v1#bib.bib17)], demonstrating superior human-object interaction coherence. Gains in Identity Preservation and Prompt Consistency further validate PersonaHOI’s ability to retain identity while enhancing text-image alignment. For IP-Adapter[[37](https://arxiv.org/html/2501.05823v1#bib.bib37)], a decrease in Identity Preservation (from 62.86% to 55.74%) is observed. This drop is attributed to IP-Adapter’s inherent bias toward generating face-centric images, which inflates identity scores at the cost of interaction fidelity. PersonaHOI mitigates this trade-off by significantly improving Interaction Alignment (+18.47%) and Prompt Consistency (+1.47%), balancing identity preservation with realistic interaction synthesis. Despite the decrease, the identity preservation score of 55.74% remains competitive for personalized face generation.
155
+
156
+ Overall, PersonaHOI bridges the gap between personalized face generation and interaction realism. Its training- and tuning-free design enables seamless integration into different SD-based personalized face diffusion models, providing scalability and adaptability across diverse architectures and application scenarios.
157
+
158
+ ### 5.3 Personalized Multi-Subject HOI Generation
159
+
160
+ Generating images with multiple subjects engaged in human-object interactions (HOI) presents significant challenges in preserving distinct identities and accurately depicting interactions. As shown in Figure[6](https://arxiv.org/html/2501.05823v1#S5.F6 "Figure 6 ‣ 5.3 Personalized Multi-Subject HOI Generation ‣ 5 Experiments ‣ PersonaHOI: Effortlessly Improving Personalized Face with Human-Object Interaction Generation"), our method enhances FastComposer[[34](https://arxiv.org/html/2501.05823v1#bib.bib34)] using its pre-trained SD v1.5 model, maintaining identity fidelity while generating coherent interactions. For instance, in the challenging “cycling together” scenario (second row), PersonaHOI successfully preserves the unique identities of both subjects and generates two bicycles alongside a natural background. In contrast, FastComposer[[34](https://arxiv.org/html/2501.05823v1#bib.bib34)] produces two adjacent faces with minimal interaction details. These results highlight the potential of PersonaHOI for complex multi-subject HOI scenarios, ensuring both identity preservation and interaction realism.
161
+
162
+ ![Image 6: Refer to caption](https://arxiv.org/html/2501.05823v1/x6.png)
163
+
164
+ Figure 6: Qualitative results for multi-subject HOI generation. We compare generation outputs from SD v1.5[[24](https://arxiv.org/html/2501.05823v1#bib.bib24)], FastComposer[[34](https://arxiv.org/html/2501.05823v1#bib.bib34)], and our PersonaHOI based on FastComposer[[34](https://arxiv.org/html/2501.05823v1#bib.bib34)] with different multi-subject interaction prompts. PersonaHOI not only preserves distinct identities but also generates coherent human-object interactions that align with the HOI layout produced by SD v1.5.
165
+
166
+ ### 5.4 More Results
167
+
168
+ General Personalized Face Generation. We evaluate PersonaHOI on the task of general personalized face generation[[34](https://arxiv.org/html/2501.05823v1#bib.bib34), [20](https://arxiv.org/html/2501.05823v1#bib.bib20)] across four categories of text prompts: Accessory, Style, Action, and Context. As shown in Table[2](https://arxiv.org/html/2501.05823v1#S5.T2 "Table 2 ‣ 5.1 Setup ‣ 5 Experiments ‣ PersonaHOI: Effortlessly Improving Personalized Face with Human-Object Interaction Generation"), the scores across all categories for the PersonaHOI-enhanced method are consistently comparable to, or slightly better than, the baseline FastComposer[[34](https://arxiv.org/html/2501.05823v1#bib.bib34)]. These results demonstrate that, while our method is specifically designed for personalized HOI scenarios, it preserves the generative capabilities of the base models and does not degrade their performance on general text prompts, showcasing its adaptability and robustness.
169
+
170
+ Effectiveness of Individual Components. We conduct an ablation study on FastComposer[[34](https://arxiv.org/html/2501.05823v1#bib.bib34)] to evaluate the contributions of the Cross-Attention Constraint (CAC), Latent Merge (LM), and Residual Merge (RM) in PersonaHOI. As shown in Table[3](https://arxiv.org/html/2501.05823v1#S5.T3 "Table 3 ‣ 5.4 More Results ‣ 5 Experiments ‣ PersonaHOI: Effortlessly Improving Personalized Face with Human-Object Interaction Generation"), the full model, integrating all three components, achieves the best performance across all metrics, underscoring their complementary roles. Removing LM or RM results in significant declines in both Identity Preservation and Interaction Alignment, highlighting the critical role of these merging strategies. The absence of CAC has a relatively smaller effect on Interaction Alignment but substantially impacts Identity Preservation: without CAC, faces generated by PFD fail to align with the spatial layout provided by SD, disrupting coherence between the PFD and SD branches and diminishing alignment and identity fidelity in facial regions. In summary, the ablation confirms the distinct role of each module: CAC ensures precise spatial alignment of faces between PFD and SD, while LM and RM are indispensable for consistently merging non-facial, HOI-relevant features from SD with facial identity details from PFD.
171
+
172
+ Low-Pass and High-Pass Filter Design. We validate the design choice of applying a low-pass filter to the SD branch and a high-pass filter to the PFD branch in Residual Merge. To this end, we compare six settings: direct replacement of PFD residuals with SD (Replace), merging without filters (NoFilter), and combinations of low-pass and high-pass filters (i.e., Low-Low, High-High, High-Low, Low-High) for SD and PFD, respectively. As shown in Figure[7](https://arxiv.org/html/2501.05823v1#S5.F7 "Figure 7 ‣ 5.4 More Results ‣ 5 Experiments ‣ PersonaHOI: Effortlessly Improving Personalized Face with Human-Object Interaction Generation"), direct replacement and no-filter approaches yield suboptimal results, emphasizing the importance of a balanced merging strategy. Among the configurations, the Low-High design achieves the best overall performance across all metrics, confirming the effectiveness of our Residual Merge design.
173
+
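The six settings amount to different pairs of filters applied to the SD and PFD residuals before the masked blend of Eq. (5). A 1-D NumPy sketch, assuming a moving-average low pass in place of the paper's actual kernel:

```python
import numpy as np

def lp(x: np.ndarray, k: int = 5) -> np.ndarray:
    """Moving-average low pass over a 1-D signal (stand-in for the paper's kernel)."""
    xp = np.pad(x, k // 2, mode="edge")
    return np.convolve(xp, np.ones(k) / k, mode="valid")

def hp(x: np.ndarray, k: int = 5) -> np.ndarray:
    """High pass as the complement of the low-pass component."""
    return x - lp(x, k)

def merge(r_sd, r_pfd, mask, f_sd, f_pfd):
    """Masked blend of filtered residuals, following Eq. (5)."""
    return mask * f_pfd(r_pfd) + (1 - mask) * f_sd(r_sd)

ident = lambda x: x
# Filter pairs (SD branch, PFD branch) for the settings compared in Figure 7;
# "Replace" simply discards the PFD residual and keeps the SD one instead.
CONFIGS = {
    "NoFilter":  (ident, ident),
    "Low-Low":   (lp, lp),
    "High-High": (hp, hp),
    "High-Low":  (hp, lp),
    "Low-High":  (lp, hp),  # the paper's design: low-pass SD, high-pass PFD
}
```

Sweeping over `CONFIGS` with the same residuals and mask reproduces the comparison structure of Figure 7, with "Low-High" being the configuration adopted in the final model.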
174
+ Table 3: Effect of Individual Components. We evaluate the contributions of Cross-Attention Constraint (CAC), Latent Merge (LM), and Residual Merge (RM) in PersonaHOI by selectively removing each of them. Experiments are conducted with FastComposer on HOI-specific personalized face generation. Red numbers denote the performance lower than FastComposer[[34](https://arxiv.org/html/2501.05823v1#bib.bib34)] baseline.
175
+
176
+ ![Image 7: Refer to caption](https://arxiv.org/html/2501.05823v1/x7.png)
177
+
178
+ Figure 7: Effect of Low-Pass and High-Pass Filters in Residual Merge. We evaluate six configurations: direct replacement (Replace), merge without filter (NoFilter), and combinations of low-pass and high-pass filters applied to SD and PFD branches. Our Low-High configuration, which applies a low-pass filter to SD and a high-pass filter to PFD, achieves the best overall balance, demonstrating its effectiveness as the optimal merging strategy.
179
+
180
+ 6 Conclusions
181
+ -------------
182
+
183
+ In this work, we propose a training- and tuning-free framework that combines the strengths of Personalized Face Diffusion (PFD) models and StableDiffusion (SD) models to generate personalized images featuring realistic and complex human-object interactions (HOI). By introducing Cross-Attention Constraint, Latent Merge, and Residual Merge, our approach achieves seamless integration of identity-specific facial features from PFD with contextual interaction details from SD, ensuring both spatial alignment and contextual coherence. The flexibility of our framework allows adaptation to various PFD and SD architectures, demonstrating its robustness in handling diverse interaction scenarios. Without requiring additional fine-tuning or task-specific datasets, our method significantly improves identity retention and interaction realism compared to state-of-the-art approaches. This highlights the potential of leveraging pre-trained models to enable high-quality, personalized content generation across diverse applications.
184
+
185
+ References
186
+ ----------
187
+
188
+ * Cao et al. [2018] Qiong Cao, Li Shen, Weidi Xie, Omkar M Parkhi, and Andrew Zisserman. Vggface2: A dataset for recognising faces across pose and age. In _2018 13th IEEE international conference on automatic face & gesture recognition (FG 2018)_, pages 67–74. IEEE, 2018.
189
+ * Cao et al. [2019] Z. Cao, G. Hidalgo Martinez, T. Simon, S. Wei, and Y.A. Sheikh. OpenPose: Realtime Multi-Person 2D Pose Estimation using Part Affinity Fields. _IEEE Transactions on Pattern Analysis and Machine Intelligence_, 2019.
190
+ * Chen et al. [2023a] Li Chen, Mengyi Zhao, Yiheng Liu, Mingxu Ding, Yangyang Song, Shizun Wang, Xu Wang, Hao Yang, Jing Liu, Kang Du, et al. Photoverse: Tuning-free image customization with text-to-image diffusion models, 2023a.
191
+ * Chen et al. [2023b] Zhuowei Chen, Shancheng Fang, Wei Liu, Qian He, Mengqi Huang, Yongdong Zhang, and Zhendong Mao. Dreamidentity: Improved editability for efficient face-identity preserved image generation. _arXiv preprint arXiv:2307.00300_, 2023b.
192
+ * Frederic Z.Zhang and Gould [2021] Dylan Campbell Frederic Z.Zhang and Stephen Gould. Spatially Conditioned Graphs for Detecting Human–Object Interactions. In _Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)_, pages 13319–13327, 2021.
193
+ * Gal et al. [2022] Rinon Gal, Yuval Alaluf, Yuval Atzmon, Or Patashnik, Amit H Bermano, Gal Chechik, and Daniel Cohen-Or. An image is worth one word: Personalizing text-to-image generation using textual inversion. _arXiv preprint arXiv:2208.01618_, 2022.
194
+ * Gao et al. [2020] Chen Gao, Si Liu, Defa Zhu, Quan Liu, Jie Cao, Haoqian He, Ran He, and Shuicheng Yan. InteractGAN: Learning to Generate Human-Object Interaction. In _Association for Computing Machinery_, 2020.
195
+ * Gupta and Malik [2015] Saurabh Gupta and Jitendra Malik. Visual Semantic Role Labeling. _arXiv preprint arXiv:1505.04474_, 2015.
196
+ * Hoe et al. [2024] Jiun Tian Hoe, Xudong Jiang, Chee Seng Chan, Yap-Peng Tan, and Weipeng Hu. InteractDiffusion: Interaction Control in Text-to-Image Diffusion Models. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)_, 2024.
197
+ * Huang et al. [2023] Siteng Huang, Biao Gong, Yutong Feng, Xi Chen, Yuqian Fu, Yu Liu, and Donglin Wang. Learning disentangled identifiers for action-customized text-to-image generation. _arXiv preprint arXiv:2311.15841_, 2023.
198
+ * Hyung et al. [2023] Junha Hyung, Jaeyo Shin, and Jaegul Choo. Magicapture: High-resolution multi-concept portrait customization. _arXiv preprint arXiv:2309.06895_, 2023.
199
+ * Jiang et al. [2023] Zeyinzi Jiang, Chaojie Mao, Yulin Pan, Zhen Han, and Jingfeng Zhang. SCEdit: Efficient and Controllable Image Diffusion Generation via Skip Connection Editing. _arXiv preprint arXiv:2312.11392_, 2023.
200
+ * Jiang-Lin et al. [2024] Jian-Yu Jiang-Lin, Kang-Yang Huang, Ling Lo, Yi-Ning Huang, Terence Lin, Jhih-Ciang Wu, Hong-Han Shuai, and Wen-Huang Cheng. ReCorD: Reasoning and Correcting Diffusion for HOI Generation. In _Proceedings of the ACM International Conference on Multimedia (ACM MM)_, 2024.
201
+ * Karras et al. [2019] Tero Karras, Samuli Laine, and Timo Aila. A style-based generator architecture for generative adversarial networks. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)_, 2019.
202
+ * Li et al. [2023a] Xiaoming Li, Xinyu Hou, and Chen Change Loy. When stylegan meets stable diffusion: a w+ adapter for personalized image generation. _arXiv preprint arXiv:2311.17461_, 2023a.
203
+ * Li et al. [2023b] Yuheng Li, Haotian Liu, Qingyang Wu, Fangzhou Mu, Jianwei Yang, Jianfeng Gao, Chunyuan Li, and Yong Jae Lee. GLIGEN: Open-Set Grounded Text-to-Image Generation. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)_, 2023b.
204
+ * Li et al. [2024] Zhen Li, Mingdeng Cao, Xintao Wang, Zhongang Qi, Ming-Ming Cheng, and Ying Shan. PhotoMaker: Customizing Realistic Human Photos via Stacked ID Embedding. In _IEEE Conference on Computer Vision and Pattern Recognition (CVPR)_, 2024.
205
+ * Liu et al. [2015] Ziwei Liu, Ping Luo, Xiaogang Wang, and Xiaoou Tang. Deep Learning Face Attributes in the Wild. In _Proceedings of International Conference on Computer Vision (ICCV)_, 2015.
206
+ * Nagrani et al. [2017] A. Nagrani, J.S. Chung, and A. Zisserman. Voxceleb: a large-scale speaker identification dataset. In _INTERSPEECH_, 2017.
207
+ * Peng et al. [2023] Xu Peng, Junwei Zhu, Boyuan Jiang, Ying Tai, Donghao Luo, Jiangning Zhang, Wei Lin, Taisong Jin, Chengjie Wang, and Rongrong Ji. PortraitBooth: A Versatile Portrait Model for Fast Identity-preserved Personalization. _arXiv preprint arXiv:2312.06354_, 2023.
208
+ * Podell et al. [2024] Dustin Podell, Zion English, Kyle Lacey, Andreas Blattmann, Tim Dockhorn, Jonas Müller, Joe Penna, and Robin Rombach. SDXL: Improving Latent Diffusion Models for High-Resolution Image Synthesis. In _Proceedings of the International Conference on Learning Representations (ICLR)_, 2024.
209
+ * Radford et al. [2021] Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, and Ilya Sutskever. Learning Transferable Visual Models From Natural Language Supervision. In _Proceedings of the International Conference on Machine Learning (ICML)_, 2021.
210
+ * Riza Alp Güler [2018] Iasonas Kokkinos Riza Alp Güler, Natalia Neverova. DensePose: Dense Human Pose Estimation In The Wild. In _The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)_, 2018.
211
+ * Rombach et al. [2022] Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. High-Resolution Image Synthesis with Latent Diffusion Models. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)_, 2022.
212
+ * Ruiz et al. [2023a] Nataniel Ruiz, Yuanzhen Li, Varun Jampani, Yael Pritch, Michael Rubinstein, and Kfir Aberman. Dreambooth: Fine tuning text-to-image diffusion models for subject-driven generation. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition_, pages 22500–22510, 2023a.
213
+ * Ruiz et al. [2023b] Nataniel Ruiz, Yuanzhen Li, Varun Jampani, Wei Wei, Tingbo Hou, Yael Pritch, Neal Wadhwa, Michael Rubinstein, and Kfir Aberman. Hyperdreambooth: Hypernetworks for fast personalization of text-to-image models. _arXiv preprint arXiv:2307.06949_, 2023b.
214
+ * Schroff et al. [2015] Florian Schroff, Dmitry Kalenichenko, and James Philbin. FaceNet: A Unified Embedding for Face Recognition and Clustering. In _Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)_, 2015.
215
+ * Schuhmann et al. [2022] Christoph Schuhmann, Romain Beaumont, Richard Vencu, Cade Gordon, Ross Wightman, Mehdi Cherti, Theo Coombes, Aarush Katta, Clayton Mullis, Mitchell Wortsman, et al. Laion-5B: An open large-scale dataset for training next generation image-text models. _Advances in Neural Information Processing Systems_, 35:25278–25294, 2022.
216
+ * Shi et al. [2023] Jing Shi, Wei Xiong, Zhe Lin, and Hyun Joon Jung. Instantbooth: Personalized text-to-image generation without test-time finetuning. _arXiv preprint arXiv:2304.03411_, 2023.
217
+ * Shiohara and Yamasaki [2024] Kaede Shiohara and Toshihiko Yamasaki. Face2diffusion for fast and editable face personalization. _arXiv preprint arXiv:2403.05094_, 2024.
218
+ * Si et al. [2024] Chenyang Si, Ziqi Huang, Yuming Jiang, and Ziwei Liu. FreeU: Free Lunch in Diffusion U-Net. In _CVPR_, 2024.
219
+ * Su et al. [2023] Yu-Chuan Su, Kelvin CK Chan, Yandong Li, Yang Zhao, Han Zhang, Boqing Gong, Huisheng Wang, and Xuhui Jia. Identity encoder for personalized diffusion. _arXiv preprint arXiv:2304.07429_, 2023.
220
+ * Valevski et al. [2023] Dani Valevski, Danny Lumen, Yossi Matias, and Yaniv Leviathan. Face0: Instantaneously conditioning a text-to-image model on a face. In _SIGGRAPH Asia 2023 Conference Papers_, pages 1–10, 2023.
221
+ * Xiao et al. [2024] Guangxuan Xiao, Tianwei Yin, William T Freeman, Frédo Durand, and Song Han. FastComposer: Tuning-Free Multi-Subject Image Generation with Localized Attention. In _International Journal of Computer Vision_, 2024.
222
+ * Xu et al. [2023] Jiazheng Xu, Xiao Liu, Yuchen Wu, Yuxuan Tong, Qinkai Li, Ming Ding, Jie Tang, and Yuxiao Dong. ImageReward: learning and evaluating human preferences for text-to-image generation. In _Proceedings of the 37th International Conference on Neural Information Processing Systems_, 2023.
223
+ * Yan et al. [2023] Yuxuan Yan, Chi Zhang, Rui Wang, Yichao Zhou, Gege Zhang, Pei Cheng, Gang Yu, and Bin Fu. Facestudio: Put your face everywhere in seconds. _arXiv preprint arXiv:2312.02663_, 2023.
224
+ * Ye et al. [2023] Hu Ye, Jun Zhang, Sibo Liu, Xiao Han, and Wei Yang. Ip-Adapter: Text compatible image prompt adapter for text-to-image diffusion models. _arXiv preprint arXiv:2308.06721_, 2023.
225
+ * Yuan et al. [2022] Hangjie Yuan, Jianwen Jiang, Samuel Albanie, Tao Feng, Ziyuan Huang, Dong Ni, and Mingqian Tang. RLIP: Relational Language-Image Pre-training for Human-Object Interaction Detection. In _Advances in Neural Information Processing Systems (NeurIPS)_, 2022.
226
+ * Yuan et al. [2023] Hangjie Yuan, Shiwei Zhang, Xiang Wang, Samuel Albanie, Yining Pan, Tao Feng, Jianwen Jiang, Dong Ni, Yingya Zhang, and Deli Zhao. RLIPv2: Fast Scaling of Relational Language-Image Pre-training. In _Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)_, 2023.
227
+ * Zhang et al. [2022] Frederic Z. Zhang, Dylan Campbell, and Stephen Gould. Efficient Two-Stage Detection of Human-Object Interactions with a Novel Unary-Pairwise Transformer. In _Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)_, pages 20104–20112, 2022.
228
+ * Zhang et al. [2016] Kaipeng Zhang, Zhanpeng Zhang, Zhifeng Li, and Yu Qiao. Joint Face Detection and Alignment Using Multitask Cascaded Convolutional Networks. In _Proceedings of the IEEE International Conference on Computer Vision (ICCV)_, 2016.
229
+ * Zhang et al. [2023a] Lvmin Zhang, Anyi Rao, and Maneesh Agrawala. Adding Conditional Control to Text-to-Image Diffusion Models. In _IEEE International Conference on Computer Vision (ICCV)_, 2023a.
230
+ * Zhang et al. [2024] Xulu Zhang, Xiao-Yong Wei, Wengyu Zhang, Jinlin Wu, Zhaoxiang Zhang, Zhen Lei, and Qing Li. A Survey on Personalized Content Synthesis with Diffusion Models. _arXiv preprint arXiv:2405.05538_, 2024. [https://arxiv.org/abs/2405.05538](https://arxiv.org/abs/2405.05538).
231
+ * Zhang et al. [2023b] Yuxin Zhang, Fan Tang, Nisha Huang, Haibin Huang, Chongyang Ma, Weiming Dong, and Changsheng Xu. MotionCrafter: One-Shot Motion Customization of Diffusion Models. _arXiv preprint arXiv:2312.05288_, 2023b.
232
+
233
+ \thetitle
234
+
235
+ Supplementary Material
236
+
237
+ In this supplementary material, we provide additional details and results to complement the main paper. Specifically, we include: the integration of ControlNet into the PersonaHOI framework for enhanced pose control in HOI generation (Section[7](https://arxiv.org/html/2501.05823v1#S7 "7 Combine PersonaHOI with ControlNet ‣ PersonaHOI: Effortlessly Improving Personalized Face with Human-Object Interaction Generation")); extended results demonstrating the effectiveness of our method on General Personalized Face Generation tasks with diverse prompts (Section[8](https://arxiv.org/html/2501.05823v1#S8 "8 More Results on General PFG ‣ PersonaHOI: Effortlessly Improving Personalized Face with Human-Object Interaction Generation")); visualizations combining general personalization with HOI across different scenarios like Style, Context, and Accessory (Section[9](https://arxiv.org/html/2501.05823v1#S9 "9 Visualization on General PFG + HOI ‣ PersonaHOI: Effortlessly Improving Personalized Face with Human-Object Interaction Generation")); a detailed comparison of image quality metrics such as FID, ImageReward, and Aesthetic Score (Section[10](https://arxiv.org/html/2501.05823v1#S10 "10 Comparison of Image Quality ‣ PersonaHOI: Effortlessly Improving Personalized Face with Human-Object Interaction Generation")); comprehensive ablation studies exploring the impact of Gaussian kernel strategies, identity injection timesteps, and filter configurations (Section[11](https://arxiv.org/html/2501.05823v1#S11 "11 Additional Ablation Studies ‣ PersonaHOI: Effortlessly Improving Personalized Face with Human-Object Interaction Generation")); and implementation details outlining the models and prompts used in our experiments (Section[12](https://arxiv.org/html/2501.05823v1#S12 "12 Implementation Details ‣ PersonaHOI: Effortlessly Improving Personalized Face with Human-Object Interaction Generation")).
238
+
239
+ 7 Combine PersonaHOI with ControlNet
240
+ ------------------------------------
241
+
242
+ In this section, we incorporate ControlNet[[42](https://arxiv.org/html/2501.05823v1#bib.bib42)] into our framework to improve human-object interactions by enabling precise pose control. Using pose information as an additional input, ControlNet enables enhanced customization for HOI content generation, offering greater flexibility for handling complex scenarios.
243
+
244
+ Framework Modification. We integrate ControlNet[[42](https://arxiv.org/html/2501.05823v1#bib.bib42)] into our framework by replacing the StableDiffusion (SD)[[24](https://arxiv.org/html/2501.05823v1#bib.bib24)] branch with a ControlNet model. Human pose images from the V-COCO dataset[[8](https://arxiv.org/html/2501.05823v1#bib.bib8)] are used as inputs, providing explicit pose constraints for image generation. Leveraging our scalable architecture, we combine the personalized face generation model, FastComposer[[34](https://arxiv.org/html/2501.05823v1#bib.bib34)], with the ControlNet branch using the proposed Cross-Attention Constraint, Latent Merge, and Residual Merge strategies. Notably, this integration is training-free and requires no test-time tuning, ensuring efficient incorporation of pose-specific controls while preserving identity-specific facial features.
245
+
246
+ ![Image 8: Refer to caption](https://arxiv.org/html/2501.05823v1/x8.png)
247
+
248
+ Figure 8: Examples of integrating ControlNet[[42](https://arxiv.org/html/2501.05823v1#bib.bib42)] into the baseline FastComposer within our PersonaHOI framework.
249
+
250
+ Visualization. Figure[8](https://arxiv.org/html/2501.05823v1#S7.F8 "Figure 8 ‣ 7 Combine PersonaHOI with ControlNet ‣ PersonaHOI: Effortlessly Improving Personalized Face with Human-Object Interaction Generation") showcases examples of integrating ControlNet into our framework. By applying the same pose control to different subjects, our method effectively generates distinct identities with the specified pose, as well as faithfully depicts the human-object interaction described in the given text prompt. The generated human poses align closely with the provided poses, including aspects such as arm positioning, leg placement, and overall body orientation. Furthermore, the results demonstrate high fidelity in preserving facial identity, underscoring the effectiveness of our approach in achieving both pose accuracy and identity consistency across varied scenarios.
251
+
252
+ This experiment highlights the generalizability and flexibility of our framework. By incorporating ControlNet as an alternative branch, our method achieves fine-grained pose control in personalized face generation, making it adaptable to more complex and detailed HOI scenarios. This integration not only enhances the realism and coherence of the generated content but also broadens the applicability of our approach, particularly in domains like virtual reality, gaming, and digital content creation.
253
+
254
+ 8 More Results on General PFG
255
+ -----------------------------
256
+
257
+ As detailed in Section 5 of the main text, we evaluate our method on the General Personalized Face Generation (General PFG) task using 40 test prompts from FastComposer[[34](https://arxiv.org/html/2501.05823v1#bib.bib34)]. These prompts encompass a variety of scenarios, including Style, Accessory, Context, and Action, enabling a comprehensive assessment of our model’s adaptability across diverse conditions.
258
+
259
+ ### 8.1 More Quantitative Results
260
+
261
+ Table[4](https://arxiv.org/html/2501.05823v1#S8.T4 "Table 4 ‣ 8.1 More Quantitative Results ‣ 8 More Results on General PFG ‣ PersonaHOI: Effortlessly Improving Personalized Face with Human-Object Interaction Generation") presents a comparison of our PersonaHOI-enhanced methods with baseline approaches (FastComposer[[34](https://arxiv.org/html/2501.05823v1#bib.bib34)], IP-Adapter[[37](https://arxiv.org/html/2501.05823v1#bib.bib37)], PhotoMaker[[17](https://arxiv.org/html/2501.05823v1#bib.bib17)]) on the General PFG task. Our methods consistently deliver balanced performance across Identity Preservation and Prompt Consistency, unlike baseline models, which often favor one metric at the expense of the other. Notably, IP-Adapter achieves the highest Identity Preservation scores but struggles with Prompt Consistency, especially in the Style category, where its score drops to just 18.25%. On the other hand, PhotoMaker excels in Prompt Consistency; however, it suffers from the lowest Identity Preservation score among all baselines (45.31%). In contrast, PersonaHOI achieves a strong balance by consistently ranking among the top two in most metrics. This underscores our capability to preserve identity while adhering to diverse text prompts effectively. Furthermore, our efficient training-free design enhances its practicality, making it adaptable to a wide range of scenarios.
262
+
263
+ Table 4: Comparison of Our Method with FastComposer[[34](https://arxiv.org/html/2501.05823v1#bib.bib34)] on General Personalized Face Generation. We compare across four categories of text prompts, including Accessory, Style, Action, and Context, following[[34](https://arxiv.org/html/2501.05823v1#bib.bib34), [20](https://arxiv.org/html/2501.05823v1#bib.bib20)]. Results are formatted as “Identity Preservation (%) / Prompt Consistency (%)”. The best-performing results for each metric are highlighted in bold, while the second-best results are underlined.
264
+
265
+ ### 8.2 Visualization
266
+
267
+ ![Image 9: Refer to caption](https://arxiv.org/html/2501.05823v1/x9.png)
268
+
269
+ Figure 9: Visualization comparison of our method with baseline approaches (FastComposer[[34](https://arxiv.org/html/2501.05823v1#bib.bib34)], IP-Adapter[[37](https://arxiv.org/html/2501.05823v1#bib.bib37)], PhotoMaker[[17](https://arxiv.org/html/2501.05823v1#bib.bib17)]) across four categories of general personalized face generation: Style, Accessory, Context, and Action.
270
+
271
+ ![Image 10: Refer to caption](https://arxiv.org/html/2501.05823v1/x10.png)
272
+
273
+ Figure 10: Visualization of our PersonaHOI-enhanced PhotoMaker[[17](https://arxiv.org/html/2501.05823v1#bib.bib17)] compared to the baseline. From left to right, the prompts are: “a Japanese woodblock print of a woman”, “a woman wearing a Santa hat”, “a woman on top of a wooden floor”, “a woman walking a dog”, and “a woman cooking a meal”.
274
+
275
+ Figure[9](https://arxiv.org/html/2501.05823v1#S8.F9 "Figure 9 ‣ 8.2 Visualization ‣ 8 More Results on General PFG ‣ PersonaHOI: Effortlessly Improving Personalized Face with Human-Object Interaction Generation") illustrates challenging examples from four categories: Style, Accessory, Context, and Action, showcasing the comparison between our method and the baselines (FastComposer[[34](https://arxiv.org/html/2501.05823v1#bib.bib34)], IP-Adapter[[37](https://arxiv.org/html/2501.05823v1#bib.bib37)], PhotoMaker[[17](https://arxiv.org/html/2501.05823v1#bib.bib17)]). Our method shows significant improvements, achieving a strong balance between face personalization and prompt adherence. In the first row (Style), our approach accurately applies the specified stylization while maintaining the subject’s identity, delivering outputs that are coherent and identity-consistent, surpassing the baselines. In the second row (Accessory), featuring “a man wearing pink glasses”, our method faithfully generates the pink glasses specified in the prompt. By contrast, FastComposer and IP-Adapter misinterpret the prompt, producing outputs with pink clothing or backgrounds instead, illustrating the challenges of precise accessory generation. In the third row (Context), depicting “a woman on top of a purple rug in a forest”, our method effectively captures the purple rug and forest background while preserving facial details, whereas the baselines fail to maintain scene coherence or facial fidelity. In the fourth row (Action), with the prompt “a woman riding a horse”, our method captures both the riding action and the subject’s facial features, producing realistic and cohesive results. In contrast, the baseline methods struggle with achieving realistic actions or maintaining identity consistency.
276
+
277
+ ![Image 11: Refer to caption](https://arxiv.org/html/2501.05823v1/x11.png)
278
+
279
+ Figure 11: Examples of integrating general personalization with HOI across diverse scenarios. From top to bottom, the rows illustrate Style+HOI, Context+HOI, and Accessory+HOI.
280
+
281
+ Figure[10](https://arxiv.org/html/2501.05823v1#S8.F10 "Figure 10 ‣ 8.2 Visualization ‣ 8 More Results on General PFG ‣ PersonaHOI: Effortlessly Improving Personalized Face with Human-Object Interaction Generation") presents additional comparisons leveraging the high-quality SD-XL-based PhotoMaker[[17](https://arxiv.org/html/2501.05823v1#bib.bib17)]. By incorporating PersonaHOI, PhotoMaker demonstrates significant improvements in adhering to text prompts and preserving facial features. For instance, given the prompt “a woman on top of a wooden floor”, baseline results frequently display distorted facial features and unnatural human poses. In contrast, our method effectively preserves the subject’s identity and accurately adheres to the given prompt. These findings underscore the robustness of our approach in maintaining identity personalization while achieving prompt fidelity.
282
+
283
+ Overall, these results highlight the flexibility and effectiveness of PersonaHOI in handling diverse and complex personalized face-generation tasks. By enhancing existing personalized face generation models, our approach integrates text prompt alignment and identity preservation, offering a versatile solution for advancing general PFG capabilities.
284
+
285
+ 9 Visualization on General PFG + HOI
286
+ ------------------------------------
287
+
288
+ In this section, we provide additional examples of personalized face generation combining Human-Object Interaction (HOI) with general modifications, complementing Figure 1 from the main text. We focus on scenarios that combine Context+HOI, Style+HOI, and Accessory+HOI, as illustrated in Figure[11](https://arxiv.org/html/2501.05823v1#S8.F11 "Figure 11 ‣ 8.2 Visualization ‣ 8 More Results on General PFG ‣ PersonaHOI: Effortlessly Improving Personalized Face with Human-Object Interaction Generation").
289
+
290
+ The results highlight our method’s ability to integrate identity preservation with both HOI-specific and general prompt elements in personalized face generation. Unlike baseline models, which struggle to balance these tasks, our approach produces coherent and contextually accurate outputs. In the first row (Style+HOI), FastComposer[[34](https://arxiv.org/html/2501.05823v1#bib.bib34)] and PhotoMaker[[17](https://arxiv.org/html/2501.05823v1#bib.bib17)] fail to generate the bench properly, and IP-Adapter[[37](https://arxiv.org/html/2501.05823v1#bib.bib37)] neglects the stylization requirements, resulting in outputs that lack the desired artistic effect. In the second row (Context+HOI), all baseline methods struggle with the natural placement of the umbrella, creating awkward and unrealistic interactions. In the third row (Accessory+HOI), the baseline methods either omit the frisbee or generate it incompletely, while our approach captures both the accessory and the interaction.
291
+
292
+ These results highlight the robustness and adaptability of our method in addressing intricate prompts that combine general personalization with realistic human-object interactions. By excelling in both identity preservation and contextual fidelity, our approach offers a unified and effective solution for personalized face generation across diverse and complex scenarios.
293
+
294
+ 10 Comparison of Image Quality
295
+ ------------------------------
296
+
297
+ We evaluate the image quality of our method compared to baseline approaches on the task of Personalized Face with HOI Generation. The FID metric, calculated on the V-COCO[[8](https://arxiv.org/html/2501.05823v1#bib.bib8)] test set, quantifies the similarity between the distribution of generated images and that of realistic ones. To further assess image quality, we use ImageReward[[35](https://arxiv.org/html/2501.05823v1#bib.bib35)] and Aesthetic Score[[28](https://arxiv.org/html/2501.05823v1#bib.bib28)], which evaluate human preference alignment and visual appeal, respectively. As shown in Table[5](https://arxiv.org/html/2501.05823v1#S10.T5 "Table 5 ‣ 10 Comparison of Image Quality ‣ PersonaHOI: Effortlessly Improving Personalized Face with Human-Object Interaction Generation"), our method consistently outperforms baselines in both ImageReward and FID, highlighting its capacity to generate high-quality images that align closely with real-world distributions and human preferences. For Aesthetic Score, our approach significantly enhances the results for FastComposer[[34](https://arxiv.org/html/2501.05823v1#bib.bib34)] and IP-Adapter[[37](https://arxiv.org/html/2501.05823v1#bib.bib37)], emphasizing its effectiveness in improving visual quality. Although a slight decrease is observed for PhotoMaker[[17](https://arxiv.org/html/2501.05823v1#bib.bib17)], our method still maintains competitive performance. Overall, these results confirm the capability of our training-free framework to generate identity-preserving, interaction-rich images that balance realism, human preference, and aesthetic quality.
298
+
299
+ Table 5: Comparison of image quality on the task of Personalized Face with HOI Generation. Metrics include FID (lower is better), ImageReward (higher is better), and Aesthetic Score (higher is better). Performance improvements are marked in red and decreases in green.
300
+
301
+ 11 Additional Ablation Studies
302
+ ------------------------------
303
+
304
+ ### 11.1 Ablation on Gaussian Kernels
305
+
306
+ We investigate the effect of Gaussian kernel sizes, controlled by the scaling factor α, on Personalized Face with HOI Generation. The kernel size is determined from the head segmentation mask extracted from SD-generated images. Specifically, the area of the head mask is computed, and its square root is taken to derive a base size. This base size is multiplied by α: larger α values result in broader kernels that emphasize global context, while smaller α values produce more compact kernels that focus on fine-grained facial details.
307
+
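The mask-to-kernel computation described above can be sketched in a few lines. This is a minimal numpy sketch under stated assumptions: the paper specifies only area → square root → scaling by α, so the odd-size rounding and the σ-to-size ratio below are illustrative choices, not the authors' implementation.

```python
import numpy as np

def kernel_size_from_mask(head_mask: np.ndarray, alpha: float) -> int:
    """Derive a Gaussian kernel size from a binary head-segmentation mask.

    Base size = sqrt(mask area in pixels), scaled by alpha. Rounding to
    the nearest odd integer (so the kernel has a center) is an assumption.
    """
    area = float(head_mask.sum())          # number of head pixels
    base = np.sqrt(area)                   # base size from the mask area
    size = int(round(alpha * base))
    return size + 1 if size % 2 == 0 else size  # force an odd size

def gaussian_kernel(size: int) -> np.ndarray:
    """Normalized 2D Gaussian kernel; sigma tied to size (assumed ratio)."""
    sigma = size / 6.0                     # ~99.7% of the mass inside the kernel
    ax = np.arange(size) - (size - 1) / 2.0
    g = np.exp(-(ax ** 2) / (2 * sigma ** 2))
    k = np.outer(g, g)
    return k / k.sum()                     # normalize to unit mass
```

A 16×16 head mask, for example, yields a base size of 16, so α = 0.5 gives a compact 9-pixel kernel while α = 3.5 gives a much broader one.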
308
+ Table[6](https://arxiv.org/html/2501.05823v1#S11.T6 "Table 6 ‣ 11.1 Ablation on Gaussian Kernels ‣ 11 Additional Ablation Studies ‣ PersonaHOI: Effortlessly Improving Personalized Face with Human-Object Interaction Generation") illustrates that constant kernel sizes exhibit a trade-off between metrics. Larger kernels (e.g., α = 3.5) excel in Action Alignment (57.07%) by prioritizing interaction layouts but significantly compromise Identity Preservation (23.67%). Conversely, smaller kernels (e.g., α = 0.5) preserve identity better (51.58%) but perform worse in Action Alignment (54.46%). To address this, we implement dynamic kernel strategies that adapt over timesteps. The decremental kernel (2.5 → 0.5) achieves the best overall performance, delivering the highest Identity Preservation (55.28%) and competitive Action Alignment (56.65%). In contrast, the incremental kernel (0.5 → 2.5) underperforms across all metrics. These findings suggest that starting with larger kernels to capture global interaction layouts and progressively shrinking them to refine facial details is the most effective approach. Consequently, we adopt the decremental kernel in all experiments.
309
+
310
+ Table 6: Ablation Study on Gaussian Kernel Size. We evaluate the impact of varying Gaussian kernel sizes, controlled by α, on the task of Personalized Face with HOI Generation. The best-performing results for each metric are highlighted in bold, while the second-best results are underlined.
311
+
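A decremental kernel like the 2.5 → 0.5 schedule above can be sketched as a simple decay over denoising steps. The linear shape is an assumption; the paper reports only the two endpoints.

```python
def alpha_schedule(num_steps: int = 50, start: float = 2.5, end: float = 0.5):
    """Linearly decay the kernel scaling factor alpha over denoising steps.

    Early steps use large kernels (global interaction layout); late steps
    use small kernels (fine facial detail). Linear interpolation between
    the reported endpoints is an illustrative assumption.
    """
    if num_steps == 1:
        return [start]
    return [start + (end - start) * t / (num_steps - 1) for t in range(num_steps)]
```

Swapping `start` and `end` reproduces the incremental variant that the ablation found to underperform.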
312
+ ### 11.2 Ablation on Identity Injection Timestep
313
+
314
+ Table 7: Ablation Study on Identity Injection Timestep. We analyze the impact of injecting identity embeddings at different timesteps on the task of Personalized Face with HOI Generation. The experiments are conducted on FastComposer[[34](https://arxiv.org/html/2501.05823v1#bib.bib34)] with a total of 50 diffusion timesteps. The first row represents the baseline results from FastComposer without our method. The best-performing results for each metric are highlighted in bold, while the second-best results are underlined.
315
+
316
+ In the Introduction of the main paper, we discuss that existing methods[[34](https://arxiv.org/html/2501.05823v1#bib.bib34), [20](https://arxiv.org/html/2501.05823v1#bib.bib20), [17](https://arxiv.org/html/2501.05823v1#bib.bib17)] often adopt a delayed injection strategy, introducing identity embeddings at later diffusion timesteps to balance text alignment and identity preservation. This approach allows text embeddings to dominate early stages, enhancing prompt adherence before incorporating identity-specific details.
317
+
318
+ In contrast, our PersonaHOI framework integrates StableDiffusion (SD) from the beginning of the generation process, leveraging its robust text alignment capabilities. This enables immediate injection of identity embeddings at timestep 0, ensuring seamless integration of identity-specific details without compromising text alignment or interaction coherence. As shown in Table[7](https://arxiv.org/html/2501.05823v1#S11.T7 "Table 7 ‣ 11.2 Ablation on Identity Injection Timestep ‣ 11 Additional Ablation Studies ‣ PersonaHOI: Effortlessly Improving Personalized Face with Human-Object Interaction Generation"), our method achieves the highest Identity Preservation (55.28%) and Action Alignment (56.65%) while maintaining strong Prompt Consistency (23.16%). Delayed injection strategies, however, significantly diminish Identity Preservation (e.g., 6.28% at timestep 50) as the influence of identity information is reduced during denoising. These results confirm that PersonaHOI effectively combines identity preservation and text alignment, eliminating the limitations of delayed strategies and ensuring a balanced integration of identity and interaction realism throughout the generation process.
319
+
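The contrast between delayed and immediate injection can be illustrated with a toy scheduling function. This is a hypothetical sketch only: real identity embeddings are tensors consumed through cross-attention, and the list concatenation below merely stands in for identity-augmented conditioning.

```python
def conditioning_for_step(step: int, text_emb: list, identity_emb: list,
                          inject_step: int = 0) -> list:
    """Choose the conditioning used at one denoising step.

    inject_step=0 mirrors immediate identity injection (as in PersonaHOI);
    a larger inject_step emulates the delayed strategies ablated in
    Table 7, where identity is ignored until that step.
    """
    if step >= inject_step:
        return text_emb + identity_emb  # identity-augmented conditioning
    return text_emb                     # text-only conditioning
```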
320
+ ### 11.3 Impact of High-Pass/Low-Pass Filters
321
+
322
+ ![Image 12: Refer to caption](https://arxiv.org/html/2501.05823v1/x12.png)
323
+
324
+ Figure 12: Visual comparison of different filter configurations in Residual Fusion. The configurations include fusion without filters (NoFilter) and different combinations of low-pass and high-pass filters (Low-Low, High-High, High-Low, and Low-High) applied to PFD and SD. Experiments are conducted with FastComposer as the backbone. Please zoom in on the images for a clearer comparison.
325
+
326
+ In the main text (Section 5), we evaluated the impact of various high-pass and low-pass filter configurations for Residual Fusion in integrating personalized face diffusion models (PFD) with StableDiffusion (SD). To validate these observations, Figure[12](https://arxiv.org/html/2501.05823v1#S11.F12 "Figure 12 ‣ 11.3 Impact of High-Pass/Low-Pass Filters ‣ 11 Additional Ablation Studies ‣ PersonaHOI: Effortlessly Improving Personalized Face with Human-Object Interaction Generation") presents visualizations of five configurations: fusion without filters (NoFilter) and combinations of low-pass and high-pass filters (Low-Low, High-High, High-Low, and Low-High) applied to PFD and SD.
327
+
328
+ The NoFilter configuration demonstrates strong initial results due to the inherent robustness of our Residual Fusion, Latent Fusion, and Cross-Attention Constraint. However, certain challenges persist. In the first and second rows, interactions involving the “cup” and “bench” appear distorted, leading to unnatural object dynamics and contextual layouts. Introducing high-pass and low-pass filters effectively mitigates these issues. Among the configurations, Low-High proves to be the most effective. It resolves contextual inconsistencies observed with NoFilter, producing realistic object placement (e.g., natural positioning of the cup and bench in the first and second rows). Furthermore, as illustrated in the third and fourth rows, Low-High enhances accessory placement (e.g., snow glasses) and preserves detailed facial textures, delivering sharper visuals and well-balanced lighting. By contrast, other configurations (High-High, High-Low, Low-Low) show inferior performance, failing to achieve the same balance between global scene coherence and fine-grained details.
329
+
330
+ Overall, while NoFilter establishes a robust baseline, the addition of high-pass and low-pass filters, particularly in the Low-High configuration, significantly enhances the fusion process. This approach effectively addresses limitations, delivering the most balanced and realistic results for personalized human-object interaction generation.
331
+
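The Low-High configuration (low-pass on the PFD residual, high-pass on the SD residual, following the naming in Figure 12) can be sketched with ideal frequency-domain filters. This is a minimal numpy sketch, not the paper's implementation: the circular FFT masks, the cutoff radius, and the simple additive fusion are all assumptions.

```python
import numpy as np

def lowpass(x: np.ndarray, cutoff: float) -> np.ndarray:
    """Keep spatial frequencies inside `cutoff` (radius in the shifted FFT)."""
    f = np.fft.fftshift(np.fft.fft2(x))
    h, w = x.shape
    yy, xx = np.ogrid[:h, :w]
    dist = np.sqrt((yy - h / 2) ** 2 + (xx - w / 2) ** 2)
    return np.real(np.fft.ifft2(np.fft.ifftshift(f * (dist <= cutoff))))

def highpass(x: np.ndarray, cutoff: float) -> np.ndarray:
    """Complement of the low-pass: keep frequencies above the cutoff."""
    return x - lowpass(x, cutoff)

def fuse_low_high(pfd_residual: np.ndarray, sd_residual: np.ndarray,
                  cutoff: float = 8.0) -> np.ndarray:
    """'Low-High': low-pass the PFD residual, high-pass the SD residual."""
    return lowpass(pfd_residual, cutoff) + highpass(sd_residual, cutoff)
```

Because the two filters are exact complements, fusing a residual with itself reconstructs it; the other configurations (High-High, etc.) correspond to swapping which filter each branch receives.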
332
+ 12 Implementation Details
333
+ -------------------------
334
+
335
+ ### 12.1 Off-the-Shelf Models
336
+
337
+ We employ several off-the-shelf models in our implementation to ensure robust personalized generation and evaluation. For diffusion methods, we adopt the original configurations of the baselines: StableDiffusion v1.5 (SD v1.5)[[24](https://arxiv.org/html/2501.05823v1#bib.bib24)] for FastComposer[[34](https://arxiv.org/html/2501.05823v1#bib.bib34)], and the more advanced StableDiffusion XL (SD-XL)[[21](https://arxiv.org/html/2501.05823v1#bib.bib21)] for IP-Adapter[[37](https://arxiv.org/html/2501.05823v1#bib.bib37)] and PhotoMaker[[17](https://arxiv.org/html/2501.05823v1#bib.bib17)]. The corresponding SD models are incorporated into the PersonaHOI framework. For head mask segmentation, we use a pretrained DensePose[[23](https://arxiv.org/html/2501.05823v1#bib.bib23)] model (ResNet-50-FPN backbone), enabling precise extraction of head regions for fusion and attention constraints. To evaluate human-object interactions, we employ the pretrained UPT HOI detector[[40](https://arxiv.org/html/2501.05823v1#bib.bib40)] (ResNet-101-DC5 backbone). For the combination of PersonaHOI and ControlNet[[42](https://arxiv.org/html/2501.05823v1#bib.bib42)], we utilize a pretrained SD v1.5-based ControlNet conditioned on human pose. The pose conditions are extracted from the V-COCO[[8](https://arxiv.org/html/2501.05823v1#bib.bib8)] dataset with the OpenPose[[2](https://arxiv.org/html/2501.05823v1#bib.bib2)] pose estimator.
338
+
339
+ ### 12.2 Text Prompts for Image Generation
340
+
341
+ Prompts for General Personalized Face Generation. Following previous works[[34](https://arxiv.org/html/2501.05823v1#bib.bib34), [20](https://arxiv.org/html/2501.05823v1#bib.bib20)], we use 40 prompts across four types:
342
+
343
+ * Accessory:
344
+
345
+ “a man/woman wearing a red hat”,
346
+
347
+ “a man/woman wearing a Santa hat”,
348
+
349
+ “a man/woman wearing a rainbow scarf”,
350
+
351
+ “a man/woman wearing a black top hat and a monocle”,
352
+
353
+ “a man/woman in a chef outfit”,
354
+
355
+ “a man/woman in a firefighter outfit”,
356
+
357
+ “a man/woman in a police outfit”,
358
+
359
+ “a man/woman wearing pink glasses”,
360
+
361
+ “a man/woman wearing a yellow shirt”,
362
+
363
+ “a man/woman in a purple wizard outfit”.
364
+ * Style:
365
+
366
+ “a painting of a man/woman in the style of Banksy”,
367
+
368
+ “a painting of a man/woman in the style of Vincent Van Gogh”,
369
+
370
+ “a colorful graffiti painting of a man/woman”,
371
+
372
+ “a watercolor painting of a man/woman”,
373
+
374
+ “a Greek marble sculpture of a man/woman”,
375
+
376
+ “a street art mural of a man/woman”,
377
+
378
+ “a black and white photograph of a man/woman”,
379
+
380
+ “a pointillism painting of a man/woman”,
381
+
382
+ “a Japanese woodblock print of a man/woman”,
383
+
384
+ “a street art stencil of a man/woman”.
385
+ * Context:
386
+
387
+ “a man/woman in the jungle”,
388
+
389
+ “a man/woman in the snow”,
390
+
391
+ “a man/woman on the beach”,
392
+
393
+ “a man/woman on a cobblestone street”,
394
+
395
+ “a man/woman on top of pink fabric”,
396
+
397
+ “a man/woman on top of a wooden floor”,
398
+
399
+ “a man/woman with a city in the background”,
400
+
401
+ “a man/woman with a mountain in the background”,
402
+
403
+ “a man/woman with a blue house in the background”,
404
+
405
+ “a man/woman on top of a purple rug in a forest”.
406
+ * Action:
407
+
408
+ “a man/woman riding a horse”,
409
+
410
+ “a man/woman holding a glass of wine”,
411
+
412
+ “a man/woman holding a piece of cake”,
413
+
414
+ “a man/woman giving a lecture”,
415
+
416
+ “a man/woman reading a book”,
417
+
418
+ “a man/woman gardening in the backyard”,
419
+
420
+ “a man/woman cooking a meal”,
421
+
422
+ “a man/woman working out at the gym”,
423
+
424
+ “a man/woman walking the dog”,
425
+
426
+ “a man/woman baking cookies”.
427
+
428
+ Prompts for Personalized Face with HOI Generation. We select 30 human-object interactions from the V-COCO[[8](https://arxiv.org/html/2501.05823v1#bib.bib8)] dataset and format them as “a man/woman” + “[verb]-ing” + object name:
429
+
430
+ “a man/woman surfing with a surfboard”,
431
+
432
+ “a man/woman skateboarding with a skateboard”,
433
+
434
+ “a man/woman jumping with a skateboard”,
435
+
436
+ “a man/woman snowboarding with a snowboard”,
437
+
438
+ “a man/woman sitting on a chair”,
439
+
440
+ “a man/woman skiing with skis”,
441
+
442
+ “a man/woman working on a laptop”,
443
+
444
+ “a man/woman catching a frisbee”,
445
+
446
+ “a man/woman carrying a suitcase”,
447
+
448
+ “a man/woman talking on a cell phone”,
449
+
450
+ “a man/woman hitting a sports ball”,
451
+
452
+ “a man/woman cutting a cake”,
453
+
454
+ “a man/woman riding a motorcycle”,
455
+
456
+ “a man/woman riding a horse”,
457
+
458
+ “a man/woman sitting on a bench”,
459
+
460
+ “a man/woman eating pizza”,
461
+
462
+ “a man/woman reading a book”,
463
+
464
+ “a man/woman holding a cat”,
465
+
466
+ “a man/woman drinking with a cup”,
467
+
468
+ “a man/woman holding a toothbrush”,
469
+
470
+ “a man/woman holding a teddy bear”,
471
+
472
+ “a man/woman looking at a tv”,
473
+
474
+ “a man/woman holding an umbrella”,
475
+
476
+ “a man/woman laying on a bed”,
477
+
478
+ “a man/woman looking at a dog”,
479
+
480
+ “a man/woman carrying a book”,
481
+
482
+ “a man/woman kicking a sports ball”,
483
+
484
+ “a man/woman throwing a frisbee”,
485
+
486
+ “a man/woman cutting with scissors”,
487
+
488
+ “a man/woman riding a car”.
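The template above can be generated programmatically. A minimal sketch, using a hypothetical subset of the listed interactions (verbs already in their “-ing” form):

```python
def hoi_prompt(subject: str, verb_ing: str, obj: str) -> str:
    """Build a personalized HOI prompt: "a man/woman" + "[verb]-ing" + object."""
    return f"a {subject} {verb_ing} {obj}"

# Hypothetical subset of the 30 V-COCO interactions listed above.
INTERACTIONS = [
    ("riding", "a horse"),
    ("cutting", "a cake"),
    ("holding", "an umbrella"),
    ("surfing", "with a surfboard"),
]

# Expand each interaction for both subject variants.
prompts = [hoi_prompt(subject, verb, obj)
           for subject in ("man", "woman")
           for verb, obj in INTERACTIONS]
```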