Title: Low-Rank Head Avatar Personalization with Registers

URL Source: https://arxiv.org/html/2506.01935

Published Time: Tue, 03 Jun 2025 02:07:45 GMT

Markdown Content:

Sai Tanmay Reddy Chakkera
Department of Computer Science, Stony Brook University
schakkera@cs.stonybrook.edu

Aggelina Chatziagapi
Department of Computer Science, Stony Brook University
echatziagapi@cs.stonybrook.edu

Md Moniruzzaman
Atmanity Inc.
mman@atmanity.io

Chen-Ping Yu
Atmanity Inc.
cpyu@atmanity.io

Yi-Hsuan Tsai
Atmanity Inc.
yhtsai@atmanity.io

Dimitris Samaras
Department of Computer Science, Stony Brook University
samaras@cs.stonybrook.edu

###### Abstract

We introduce a novel method for low-rank personalization of a generic model for head avatar generation. Prior work proposes generic models that achieve high-quality face animation by leveraging large-scale datasets of multiple identities. However, such generic models usually fail to synthesize unique identity-specific details, since they learn a general domain prior. To adapt to specific subjects, we find that it is still challenging to capture high-frequency facial details via popular solutions like low-rank adaptation (LoRA). This motivates us to propose a specific architecture, a Register Module, that enhances the performance of LoRA, while requiring only a small number of parameters to adapt to an unseen identity. Our module is applied to intermediate features of a pre-trained model, storing and re-purposing information in a learnable 3D feature space. To demonstrate the efficacy of our personalization method, we collect a dataset of talking videos of individuals with distinctive facial details, such as wrinkles and tattoos. Our approach faithfully captures unseen faces, outperforming existing methods quantitatively and qualitatively. We will release the code, models, and dataset to the public. Project page: [https://starc52.github.io/publications/LoRAvatar/](https://starc52.github.io/publications/2025-05-28-LoRAvatar/).

1 Introduction
--------------

Synthesizing photo-realistic human faces has long been a challenge for both computer vision and graphics. It has broad applications, from AR/VR, virtual communication, and video games to the movie industry and healthcare. Earlier approaches rely on 3D morphable models (3DMMs) ([garrido2015vdub,](https://arxiv.org/html/2506.01935v1#bib.bib21); [garrido2014automatic,](https://arxiv.org/html/2506.01935v1#bib.bib20); [thies2016face2face,](https://arxiv.org/html/2506.01935v1#bib.bib49)), while subsequent methods turn to generative adversarial networks (GANs) ([kim2018deepvideo,](https://arxiv.org/html/2506.01935v1#bib.bib28); [pumarola2020ganimation,](https://arxiv.org/html/2506.01935v1#bib.bib41); [wav2lip,](https://arxiv.org/html/2506.01935v1#bib.bib40); [vougioukas2020realistic,](https://arxiv.org/html/2506.01935v1#bib.bib52)). More recent works learn 3D neural representations of the human face, relying on neural radiance fields (NeRFs) ([pumarola2021d,](https://arxiv.org/html/2506.01935v1#bib.bib42); [nerfies,](https://arxiv.org/html/2506.01935v1#bib.bib38); [nerface,](https://arxiv.org/html/2506.01935v1#bib.bib18); [park2021hypernerf,](https://arxiv.org/html/2506.01935v1#bib.bib39)) or 3D Gaussian Splatting (3DGS) ([kerbl20233d,](https://arxiv.org/html/2506.01935v1#bib.bib27); [cho2024gaussiantalker,](https://arxiv.org/html/2506.01935v1#bib.bib8); [qian2024gaussianavatars,](https://arxiv.org/html/2506.01935v1#bib.bib44); [xu2023gaussianheadavatar,](https://arxiv.org/html/2506.01935v1#bib.bib58)). While these approaches lead to high-quality results, they usually require identity-specific training and cannot generalize. Only a few recent methods propose generic models, e.g., GAGAvatar ([chu2024gagavatar,](https://arxiv.org/html/2506.01935v1#bib.bib9)), which preserve the high-quality rendering of 3DGS while being trained on a large-scale dataset of multiple identities, enabling generalization to unseen human faces.

However, such generic models usually fail to produce key identity-specific facial details, since they learn a general domain prior. To produce distinctive details, prior work proposes adapting a pre-trained model to a specific identity, e.g. through fine-tuning or meta-learning ([nitzan2022mystyle,](https://arxiv.org/html/2506.01935v1#bib.bib36); [zhang2023metaportrait,](https://arxiv.org/html/2506.01935v1#bib.bib60); [saunders2024talklora,](https://arxiv.org/html/2506.01935v1#bib.bib46)). Low-rank adaptation (LoRA) ([hu2022lora,](https://arxiv.org/html/2506.01935v1#bib.bib26)) was first proposed for large language models (LLMs). It injects trainable rank-decomposition matrices into each layer of a pre-trained model, leading to a significant decrease in the number of learnable parameters with on-par performance compared to fine-tuning the entire model.

In this work, we address the problem of adaptation, also called personalization, to a specific identity that is not seen during the initial training of a generic model for head avatar generation. Due to its efficiency and popularity in other fields, we start with LoRA, learning low-rank decomposition matrices for specific layers. We notice that LoRA alone is not sufficient to synthesize high-frequency facial characteristics (see Figure [1](https://arxiv.org/html/2506.01935v1#S1.F1 "Figure 1 ‣ 1 Introduction ‣ Low-Rank Head Avatar Personalization with Registers")). Inspired by [darcet2023vision](https://arxiv.org/html/2506.01935v1#bib.bib11), which learns additional tokens (registers) in order to store global information in a transformer network, we propose a specific module that extends the idea of registers to 3D registers for human faces. To the best of our knowledge, this is the first method to extend registers to 3D representations.

More specifically, we design a Register Module that learns a 3D feature space that stores and repurposes information about a human face during training. Similar to registers in ViTs ([darcet2023vision,](https://arxiv.org/html/2506.01935v1#bib.bib11)), which store global information about an image, our Register Module stores the distinctive details of an identity, given different views. We apply our Register Module to intermediate features extracted from a pre-trained DINOv2 model ([oquab2023dinov2,](https://arxiv.org/html/2506.01935v1#bib.bib37)). While our proposed module can be applied to any network that uses DINOv2 features, we focus our study on GAGAvatar ([chu2024gagavatar,](https://arxiv.org/html/2506.01935v1#bib.bib9)) as our generic pre-trained model. To evaluate the efficacy of our low-rank personalization, we collect a dataset of talking videos of individuals with rare high-frequency facial details, such as wrinkles and tattoos, that are not included in existing datasets. Our method outperforms state-of-the-art approaches, like meta-learning and vanilla LoRA, both quantitatively and qualitatively, while requiring only a small number of parameters to adapt.

In brief, our main contributions are as follows:

* We propose a novel method for low-rank personalization of a generic model for head avatar generation that captures identity-specific facial details.
* We design a Register Module that stores and repurposes information for an identity in a learnable 3D feature space, extending the idea of registers for ViTs to 3D human faces.
* We collect a dataset, namely RareFace-50, of talking videos of individuals with distinctive facial characteristics, e.g. wrinkles and tattoos, that are challenging to synthesize with generic models, thus demonstrating the need for our method.

![Image 1: Refer to caption](https://arxiv.org/html/2506.01935v1/extracted/6493591/imgs/teaser_2x5_revised_video_beaut.png)

Figure 1: Our method personalizes and adapts a generic head avatar model using LoRA, preserving high-frequency identity-specific facial details with our Register Module while retaining the original inference speed. Note that the small image in the bottom-right corner is the driving image.

2 Related Work
--------------

Human Portrait Synthesis. Earlier approaches for video synthesis of human faces are based on 3DMMs ([garrido2015vdub,](https://arxiv.org/html/2506.01935v1#bib.bib21); [garrido2014automatic,](https://arxiv.org/html/2506.01935v1#bib.bib20); [thies2016face2face,](https://arxiv.org/html/2506.01935v1#bib.bib49)). A 3DMM ([blanz1999morphable,](https://arxiv.org/html/2506.01935v1#bib.bib2)) is a parametric model that represents a face as a linear combination of the principal axes of shape, texture, and expression, learned by principal component analysis (PCA). Subsequent works propose GAN-based networks for video synthesis ([kim2018deepvideo,](https://arxiv.org/html/2506.01935v1#bib.bib28); [fomm,](https://arxiv.org/html/2506.01935v1#bib.bib47); [pumarola2020ganimation,](https://arxiv.org/html/2506.01935v1#bib.bib41)) and audio-driven talking faces ([wav2lip,](https://arxiv.org/html/2506.01935v1#bib.bib40); [pcavs,](https://arxiv.org/html/2506.01935v1#bib.bib63); [vougioukas2020realistic,](https://arxiv.org/html/2506.01935v1#bib.bib52); [xu2024emotion,](https://arxiv.org/html/2506.01935v1#bib.bib56)). GANs are usually trained on large datasets of 2D videos of multiple identities, but they cannot model the 3D face geometry. More recent works learn 3D neural representations of the human face, relying on neural radiance fields (NeRFs) ([mildenhall2020nerf,](https://arxiv.org/html/2506.01935v1#bib.bib34)) or 3D Gaussian Splatting (3DGS) ([kerbl20233d,](https://arxiv.org/html/2506.01935v1#bib.bib27)). Diffusion models have also become popular, but they only produce 2D videos ([xu2024vasa,](https://arxiv.org/html/2506.01935v1#bib.bib57)) or are identity-specific ([kirschstein2024diffusionavatars,](https://arxiv.org/html/2506.01935v1#bib.bib30)). In this paper, we explore personalization to capture identity-specific facial details by adapting a generic avatar model.

Animatable 3D Head Avatars. NeRFs were first proposed for novel-view synthesis of static scenes ([mildenhall2020nerf,](https://arxiv.org/html/2506.01935v1#bib.bib34)). They have been extended to dynamic scenes and human faces ([pumarola2021d,](https://arxiv.org/html/2506.01935v1#bib.bib42); [nerfies,](https://arxiv.org/html/2506.01935v1#bib.bib38); [nerface,](https://arxiv.org/html/2506.01935v1#bib.bib18); [park2021hypernerf,](https://arxiv.org/html/2506.01935v1#bib.bib39); [chakkera2024jean,](https://arxiv.org/html/2506.01935v1#bib.bib4)). They usually represent a human face by sampling 3D points in a canonical space, which can be conditioned on 3DMM expression parameters to enable animation. Although they produce high-quality reconstructions, they require expensive identity-specific training. Subsequent works ([zielonka2023insta,](https://arxiv.org/html/2506.01935v1#bib.bib65); [bakedavatar,](https://arxiv.org/html/2506.01935v1#bib.bib16)) propose techniques to reduce the training and inference time. 3DGS ([kerbl20233d,](https://arxiv.org/html/2506.01935v1#bib.bib27)) has become very popular, as it achieves real-time rendering with high visual quality by representing complex scenes with 3D Gaussians. It has recently been applied to dynamic human avatars ([cho2024gaussiantalker,](https://arxiv.org/html/2506.01935v1#bib.bib8); [qian2024gaussianavatars,](https://arxiv.org/html/2506.01935v1#bib.bib44); [xu2023gaussianheadavatar,](https://arxiv.org/html/2506.01935v1#bib.bib58); [dhamo2024headgas,](https://arxiv.org/html/2506.01935v1#bib.bib14); [wang2025gaussianhead,](https://arxiv.org/html/2506.01935v1#bib.bib53)). However, most approaches learn identity-specific models. Very few recent works propose generic models ([chu2024gagavatar,](https://arxiv.org/html/2506.01935v1#bib.bib9); [chu2024gpavatar,](https://arxiv.org/html/2506.01935v1#bib.bib10); [kirschstein2024gghead,](https://arxiv.org/html/2506.01935v1#bib.bib31)), which preserve the high-quality rendering of 3DGS while being trained on a large-scale dataset of multiple identities, enabling generalization to unseen human faces. However, generic models learn a general domain prior and usually fail to produce unique identity-specific facial details, such as wrinkles or tattoos, as studied in this paper.

Personalization. Numerous works have proposed ways to adapt pre-trained models to various downstream tasks ([houlsby2019parameter,](https://arxiv.org/html/2506.01935v1#bib.bib25); [zhang2023adding,](https://arxiv.org/html/2506.01935v1#bib.bib61)). Parameter-efficient fine-tuning (PEFT) techniques are proposed to fine-tune large models efficiently. LoRA ([hu2022lora,](https://arxiv.org/html/2506.01935v1#bib.bib26)) adds low-rank matrices to each layer of a pre-trained model, leading to a significant decrease in the number of learnable parameters with on-par performance compared to fine-tuning the entire model. In the context of face animation, fine-tuning part of the model has been utilized ([chatziagapi2024mi,](https://arxiv.org/html/2506.01935v1#bib.bib7); [li2025instag,](https://arxiv.org/html/2506.01935v1#bib.bib32)), as well as meta-learning. For instance, MetaPortrait ([zhang2023metaportrait,](https://arxiv.org/html/2506.01935v1#bib.bib60)) adopts a meta-learning approach to allow adaptation during inference, while [gao2020portrait](https://arxiv.org/html/2506.01935v1#bib.bib19) uses meta-learning to adapt a NeRF to a single image of an unseen subject. Moreover, MyStyle ([nitzan2022mystyle,](https://arxiv.org/html/2506.01935v1#bib.bib36)) personalizes a pre-trained StyleGAN by fine-tuning regions of its latent space, using a set of images from an individual. Similarly, One2Avatar ([yu2024one2avatar,](https://arxiv.org/html/2506.01935v1#bib.bib59)) adapts a generic NeRF to one or a few images of a person. TalkLoRA ([saunders2024talklora,](https://arxiv.org/html/2506.01935v1#bib.bib46)) applies LoRA to the task of 3D mesh animation, while My3DGen ([qi2025my3dgen,](https://arxiv.org/html/2506.01935v1#bib.bib43)) adapts LoRA to the convolutional layers of StyleGAN2 in an EG3D-based network ([chan2022efficient,](https://arxiv.org/html/2506.01935v1#bib.bib5)). Due to its popularity and efficiency, we study LoRA for our generic model for head avatar animation. However, we find that LoRA alone is not sufficient to capture the high-frequency facial details of a new identity. Thus, we propose to improve personalization by learning an additional Register Module, inspired by register tokens in ViTs.

Additional Tokens in Neural Networks. Memory augmentation in neural networks goes back to long short-term memory (LSTM) units ([hochreiter1997long,](https://arxiv.org/html/2506.01935v1#bib.bib24)), which store information through gates. Memory networks ([weston2014memory,](https://arxiv.org/html/2506.01935v1#bib.bib54); [sukhbaatar2015end,](https://arxiv.org/html/2506.01935v1#bib.bib48)) have access to external long-term memory. More recently, transformers have emerged as a powerful representation for various deep learning tasks, where the core element is self-attention ([vaswani2017attention,](https://arxiv.org/html/2506.01935v1#bib.bib50)). For language modeling, many works extend the input sequence of transformers with special tokens. Such additional tokens provide the network with new information, e.g. [SEP] in BERT ([devlin-etal-2019-bert,](https://arxiv.org/html/2506.01935v1#bib.bib13)), or gather information for later downstream tasks, e.g. [CLS] tokens ([dosovitskiy2021an,](https://arxiv.org/html/2506.01935v1#bib.bib15)) or [MASK] for generative modeling ([bao2021beit,](https://arxiv.org/html/2506.01935v1#bib.bib1)). Unlike these works, [darcet2023vision](https://arxiv.org/html/2506.01935v1#bib.bib11) presents additional tokens as registers for storing and repurposing global information. Inspired by this, we extend registers to a 3D feature space for human faces. We learn a Register Module that stores information about the distinctive high-frequency details of a human face.

3 Proposed Method
-----------------

![Image 2: Refer to caption](https://arxiv.org/html/2506.01935v1/extracted/6493591/imgs/full_diagram_simplified.png)

Figure 2: Illustration of our Register Module in a generic avatar animation model. During adaptation, we pass the source image’s DINOv2 features $f^{src}_{dense}$ and the driving image’s 3DMM parameters to our module. Our module teaches the model to attend to specific regions in the dense DINOv2 features, thus providing better learning signals for LoRA to capture identity-specific details. Note that the Register Module is not needed during inference but serves as the register during LoRA training.

Figure [2](https://arxiv.org/html/2506.01935v1#S3.F2 "Figure 2 ‣ 3 Proposed Method ‣ Low-Rank Head Avatar Personalization with Registers") illustrates an overview of our proposed framework, which adapts a generic avatar model to a particular identity. Following parameter-efficient fine-tuning (PEFT), we use LoRA ([hu2022lora,](https://arxiv.org/html/2506.01935v1#bib.bib26)) to adapt the weights of a generic avatar model to a particular identity. We initially find that adapting with LoRA alone does not sufficiently improve personalization (see Figure [1](https://arxiv.org/html/2506.01935v1#S1.F1 "Figure 1 ‣ 1 Introduction ‣ Low-Rank Head Avatar Personalization with Registers")). Motivated by [darcet2023vision](https://arxiv.org/html/2506.01935v1#bib.bib11), which introduces registers in ViTs to store and repurpose global information, we propose a Register Module that stores information about identity-specific details. Our Register Module essentially teaches the model to attend to specific regions in the dense DINOv2 features during adaptation. Importantly, it is only used during adaptation and is deactivated at inference time. With its guidance, the model learns to leverage DINOv2 features more effectively, enabling high-quality personalized head avatar generation at real-time speed from a single source image at inference.

Our proposed pipeline for personalizing a generic avatar animation model consists of two main components:

(1) We add LoRA weights to specific pre-trained layers of a generic avatar animation model (see Sec. [3.1](https://arxiv.org/html/2506.01935v1#S3.SS1 "3.1 Preliminaries: Generic Avatar Generation ‣ 3 Proposed Method ‣ Low-Rank Head Avatar Personalization with Registers")), to keep the adaptation parameters efficient and to avoid catastrophic forgetting (see Sec. [3.2](https://arxiv.org/html/2506.01935v1#S3.SS2 "3.2 LoRA for Fast Adaptation ‣ 3 Proposed Method ‣ Low-Rank Head Avatar Personalization with Registers")).

(2) We design a Register Module that learns a 3D feature space, facilitating the attention to specific regions of DINOv2 features while adapting to a face from multiple views (see Sec. [3.3](https://arxiv.org/html/2506.01935v1#S3.SS3 "3.3 Register Module ‣ 3 Proposed Method ‣ Low-Rank Head Avatar Personalization with Registers")).

We first describe a generic avatar animation model in Sec. [3.1](https://arxiv.org/html/2506.01935v1#S3.SS1 "3.1 Preliminaries: Generic Avatar Generation ‣ 3 Proposed Method ‣ Low-Rank Head Avatar Personalization with Registers"). Next, we describe the process of adding LoRA weights to pre-trained layers in Sec. [3.2](https://arxiv.org/html/2506.01935v1#S3.SS2 "3.2 LoRA for Fast Adaptation ‣ 3 Proposed Method ‣ Low-Rank Head Avatar Personalization with Registers"). Finally, we describe the architecture of our Register Module in Sec. [3.3](https://arxiv.org/html/2506.01935v1#S3.SS3 "3.3 Register Module ‣ 3 Proposed Method ‣ Low-Rank Head Avatar Personalization with Registers").

### 3.1 Preliminaries: Generic Avatar Generation

An avatar generation model consists of two branches: (a) a reconstruction branch and (b) an expression branch. The reconstruction branch generates an animatable head avatar from the source image. The expression branch extracts the expression and pose from the driving image, which are used to animate the generated head avatar. These branches are merged, and the output is rendered using a neural renderer. This process learns a model for generalized head avatar reconstruction (see Figure [2](https://arxiv.org/html/2506.01935v1#S3.F2 "Figure 2 ‣ 3 Proposed Method ‣ Low-Rank Head Avatar Personalization with Registers")).

In particular, we use GAGAvatar ([chu2024gagavatar,](https://arxiv.org/html/2506.01935v1#bib.bib9)) as our generic model: it is trained on a large-scale dataset and uses general DINOv2 features in its reconstruction branch, making it suitable to serve as our foundation model for fast adaptation. While DINOv2 features are robust for generic tasks, they may contain information that is irrelevant to avatar generation. Thus, it is necessary to adapt the layers of the generic avatar model to focus on the relevant information within the DINOv2 feature space. We propose our Register Module for this purpose in the following sections.

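At a high level, the two-branch design above reduces to a simple dataflow. The sketch below illustrates it with hypothetical stand-in callables (`recon_branch`, `expr_branch`, `renderer`); these names are placeholders for exposition, not the actual GAGAvatar API:

```python
def animate(source_img, driving_img, recon_branch, expr_branch, renderer):
    """One generation step of a two-branch avatar model:
    (a) reconstruct an animatable avatar from the source image,
    (b) extract expression and pose from the driving image,
    then merge both in a neural renderer to produce the output frame."""
    avatar = recon_branch(source_img)            # reconstruction branch
    expression, pose = expr_branch(driving_img)  # expression branch
    return renderer(avatar, expression, pose)    # merge and render

# Tiny stand-ins to exercise the dataflow.
recon = lambda s: ("avatar", s)
expr = lambda d: ("expr", "pose")
render = lambda a, e, p: (a, e, p)
frame = animate("src", "drv", recon, expr, render)
```

At inference, the source image is fixed per identity while the driving image changes per frame, which is why a generic model plus a lightweight per-identity adaptation suffices.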
### 3.2 LoRA for Fast Adaptation

To adapt to a particular identity, inspired by the NLP literature, we use LoRA ([hu2022lora,](https://arxiv.org/html/2506.01935v1#bib.bib26)) for adaptation in a parameter-efficient manner. For a pre-trained weight matrix $W \in \mathbb{R}^{m \times n}$, LoRA models the adapted weights $W_{adapt}$ as the sum of the pre-trained weights $W$ and an offset matrix $\Delta W$, where the latter admits a low-rank decomposition:

$$W_{adapt} = W + \Delta W = W + BA, \quad (1)$$

where $B \in \mathbb{R}^{m \times r}$, $A \in \mathbb{R}^{r \times n}$, and $r \ll \min(m, n)$. During adaptation, only the parameters in $A$ and $B$ receive gradients. For our purpose, we add LoRA weights $A$ and $B$ to each parameter matrix in the pre-trained avatar model, except the DINOv2 model. In our implementation, we use the same rank $r = 32$ for all comparisons, except for the ablations described in the supplementary material.

### 3.3 Register Module

![Image 3: Refer to caption](https://arxiv.org/html/2506.01935v1/extracted/6493591/imgs/main_diagram_latex_names_beaut.png)

Figure 3: Illustration of our Register Module. We propose a Register Module that learns features in the 3D space. We rig embeddings to vertices on a 3DMM mesh and use the camera pose $pose_{D}$ to project visible vertices and their embeddings onto a camera plane. Next, we interpolate features in the face mask region and fill in a background feature elsewhere. Finally, we add these features to the source image’s DINOv2 features $f^{src}_{dense}$ to improve the learning signals for LoRA.

Figure [3](https://arxiv.org/html/2506.01935v1#S3.F3 "Figure 3 ‣ 3.3 Register Module ‣ 3 Proposed Method ‣ Low-Rank Head Avatar Personalization with Registers") illustrates the design of our Register Module. We hypothesize that, in addition to adding LoRA weights, we need a mechanism to better extract the fine details of an identity, such as tattoos, wrinkles, muscular idiosyncrasies, and other personal features. To this end, we introduce a Register Module that improves the focus on identity-specific details.

Feature Learning Procedure. Specifically, we propose to highlight detailed information in the DINOv2 local features of the source image with the output of the Register Module. Let $M = (V, E, F)$ be a 3DMM mesh ([FLAME:SiggraphAsia2017,](https://arxiv.org/html/2506.01935v1#bib.bib33)), where $V$ is the set of vertices, $n(V)$ is the number of vertices, $E$ is the set of edges, and $F$ is the set of facets. In our Register Module, we rig embeddings $e \in \mathbb{R}^{n(V) \times D}$, where $D$ is the dimension of the embeddings, to the vertices $v \in V$ of mesh $M$. Given the driving image camera pose and position $pose_{D}$, we compute the set of visible vertices $U \subset V$ of the mesh $M$ from

$$U = \texttt{visible}(M, pose_{D}). \quad (2)$$

Next, we project these points $U$ to a feature space in the camera plane $S = \{(i, j) \in \mathbb{Z}^{2} \mid 1 \leq i \leq H, 1 \leq j \leq W\}$, and their corresponding embeddings to a dense feature $f_{S} \in \mathbb{R}^{H \times W \times D}$, where $H, W$ are the image dimensions, using a perspective projection:

$$U_{S} = \texttt{perspective\_project}(U, pose_{D}, K(S)), \quad (3)$$

where $K(S)$ is the intrinsic camera matrix for a camera with $S$ as the camera plane. The projected points are rounded to the nearest integer. At each point of the feature plane to which a visible vertex $v \in U$ is projected, we assign the corresponding vertex’s embedding from $e$. Hence, the operation becomes:

$$f_{S}[u^{i}_{S}] := e[u^{i}], \quad \text{where } u^{i} \in U \text{ and } u^{i}_{S} \text{ is the projection of } u^{i}. \quad (4)$$

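As a concrete illustration of Eqs. (3)-(4), the sketch below projects camera-frame 3D vertices through pinhole intrinsics, rounds to the pixel grid, and scatters each vertex's embedding into an $H \times W$ feature map. It is a simplified stand-in (the pose transform is assumed already applied, and interior interpolation comes later), not the paper's implementation:

```python
def perspective_project(points3d, K):
    # Pinhole projection with intrinsics K = (fx, fy, cx, cy):
    # u = fx * X / Z + cx, v = fy * Y / Z + cy, rounded to the nearest pixel (Eq. 3).
    fx, fy, cx, cy = K
    return [(round(fx * x / z + cx), round(fy * y / z + cy)) for x, y, z in points3d]

def scatter_embeddings(H, W, projections, embeddings, background):
    # Eq. (4): write each visible vertex's embedding at its projected pixel;
    # every other location starts from the background feature e_b.
    f = [[list(background) for _ in range(W)] for _ in range(H)]
    for (u, v), emb in zip(projections, embeddings):
        if 0 <= v < H and 0 <= u < W:
            f[v][u] = list(emb)
    return f

K = (100.0, 100.0, 2.0, 2.0)                        # toy intrinsics for a 5x5 plane
uv = perspective_project([(0.0, 0.0, 1.0)], K)      # the point on the optical axis
f_S = scatter_embeddings(5, 5, uv, [[7.0]], [0.0])  # D = 1 embedding scattered at uv
```

A point on the optical axis lands at the principal point `(cx, cy)`, and its embedding overwrites the background feature there.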
+ Given the set of points U s subscript π‘ˆ 𝑠 U_{s}italic_U start_POSTSUBSCRIPT italic_s end_POSTSUBSCRIPT on the camera plane, we compute an alpha shape ([alpha_shape,](https://arxiv.org/html/2506.01935v1#bib.bib17)) to find the simple contour polygon P U S subscript 𝑃 subscript π‘ˆ 𝑆 P_{U_{S}}italic_P start_POSTSUBSCRIPT italic_U start_POSTSUBSCRIPT italic_S end_POSTSUBSCRIPT end_POSTSUBSCRIPT of the vertex projections. Let interior⁒(P)interior 𝑃\texttt{interior}(P)interior ( italic_P ) represent all the points inside a simple polygon P 𝑃 P italic_P. For each point p∈interior⁒(P U S)⁒and⁒pβˆ‰U S 𝑝 interior subscript 𝑃 subscript π‘ˆ 𝑆 and 𝑝 subscript π‘ˆ 𝑆 p\in\texttt{interior}(P_{U_{S}})\text{ and }p\notin U_{S}italic_p ∈ interior ( italic_P start_POSTSUBSCRIPT italic_U start_POSTSUBSCRIPT italic_S end_POSTSUBSCRIPT end_POSTSUBSCRIPT ) and italic_p βˆ‰ italic_U start_POSTSUBSCRIPT italic_S end_POSTSUBSCRIPT, we compute k π‘˜ k italic_k nearest points in U S subscript π‘ˆ 𝑆 U_{S}italic_U start_POSTSUBSCRIPT italic_S end_POSTSUBSCRIPT, and do inverse distance weighted interpolation for point p. Mathematically, interpolated feature e p subscript 𝑒 𝑝 e_{p}italic_e start_POSTSUBSCRIPT italic_p end_POSTSUBSCRIPT for point p∈interior⁒(P U S)⁒and⁒pβˆ‰U S 𝑝 interior subscript 𝑃 subscript π‘ˆ 𝑆 and 𝑝 subscript π‘ˆ 𝑆 p\in\texttt{interior}(P_{U_{S}})\text{ and }p\notin U_{S}italic_p ∈ interior ( italic_P start_POSTSUBSCRIPT italic_U start_POSTSUBSCRIPT italic_S end_POSTSUBSCRIPT end_POSTSUBSCRIPT ) and italic_p βˆ‰ italic_U start_POSTSUBSCRIPT italic_S end_POSTSUBSCRIPT is defined as
138
+
139
$$f_S[p] := e_p = \frac{\sum_{i=1}^{k} \frac{1}{d_i}\, e_{v_i}}{\sum_{i=1}^{k} \frac{1}{d_i}}, \qquad (5)$$
where $\{v_i : i \in \{1, \dots, k\},\ v_i \in U_S\} \subset U_S$ is the set of the $k$ nearest projected vertices and $d_i = \lVert p - v_i \rVert_2$. For points $p \in S$ with $p \notin \texttt{interior}(P_{U_S})$, we assign a feature $e_b$. This yields a dense feature map $f_S \in \mathbb{R}^{H \times W \times D}$ from the features assigned at each point in $S$. We further process these features using a CNN-based encoder $E_{proc\text{-}1}$:
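As a concrete illustration, the inverse-distance-weighted rasterization of Eq. (5) can be sketched in a few lines of NumPy. The function below is a simplified stand-in for the paper's implementation (the name `idw_interpolate` and the `eps` guard against division by zero are our own): it takes projected vertex positions with their features and fills each query point from its $k$ nearest vertices.

```python
import numpy as np

def idw_interpolate(vertices, features, queries, k=11, eps=1e-8):
    """Inverse-distance-weighted interpolation (sketch of Eq. 5).

    vertices: (N, 2) projected vertex positions on the camera plane.
    features: (N, D) per-vertex features e_{v_i}.
    queries:  (M, 2) interior points p to fill.
    Returns (M, D) interpolated features e_p.
    """
    # Pairwise distances d_i = ||p - v_i||_2 between queries and vertices.
    d = np.linalg.norm(queries[:, None, :] - vertices[None, :, :], axis=-1)
    # Indices of the k nearest vertices for each query point.
    kth = min(k, d.shape[1]) - 1
    knn = np.argpartition(d, kth=kth, axis=1)[:, :k]
    d_knn = np.take_along_axis(d, knn, axis=1)        # (M, k)
    w = 1.0 / (d_knn + eps)                           # weights 1/d_i
    w /= w.sum(axis=1, keepdims=True)                 # normalize per query
    return np.einsum('mk,mkd->md', w, features[knn])  # weighted feature sum
```

A query point that coincides with a vertex receives (up to `eps`) that vertex's feature, since its inverse-distance weight dominates the normalization.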
$$f_{proc\text{-}1} = E_{proc\text{-}1}(f_S), \qquad (6)$$
where $f_{proc\text{-}1} \in \mathbb{R}^{H \times W \times D_{out}}$.
We add these features to the source image’s dense DINOv2 features $f^{src}_{dense} \in \mathbb{R}^{H \times W \times D_{out}}$ and process the result with another CNN-based encoder $E_{proc\text{-}2}$. Mathematically, the output $f_{reg}$ of our Register Module is
$$f_{reg} = E_{proc\text{-}2}(f^{src}_{dense} + f_{proc\text{-}1}). \qquad (7)$$
In our implementation, we use $H = W = 296$ and $D_{out} = 256$, as in [chu2024gagavatar](https://arxiv.org/html/2506.01935v1#bib.bib9). We set $k = 11$ and $D = 512$ for all our comparisons.
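The data flow of Eqs. (6)–(7) reduces to a few lines once the encoders are abstracted away. The sketch below is our own illustration: `register_module` and the callables passed in are hypothetical names, and in the paper $E_{proc\text{-}1}$ and $E_{proc\text{-}2}$ are CNN-based encoders rather than arbitrary functions.

```python
import numpy as np

def register_module(f_dense_src, f_S, E_proc1, E_proc2):
    """Register Module forward pass (sketch of Eqs. 6-7).

    f_dense_src: (H, W, D_out) dense DINOv2 features of the source image.
    f_S:         (H, W, D) dense feature map rasterized from the mesh.
    E_proc1, E_proc2: stand-ins for the CNN-based encoders; any callables
    mapping a feature map to an (H, W, D_out) feature map.
    """
    f_proc1 = E_proc1(f_S)                  # Eq. (6)
    f_reg = E_proc2(f_dense_src + f_proc1)  # Eq. (7): additive fusion
    return f_reg
```

With identity encoders (which only exercises the data flow and assumes $D = D_{out}$), the output is simply the sum of the two feature maps.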
Objective Functions. To ensure that the Register Module learns meaningful features, we constrain training with two losses. First, we use an MSE loss between the driving image’s DINOv2 features $f^{dri}_{dense}$ and the output of the Register Module $f_{reg}$:
$$L_{feat} = \lVert f^{dri}_{dense} - f_{reg} \rVert^2_2. \qquad (8)$$
Next, to encourage the features learned in the Register Module to be similar to each other, we regularize the embeddings $e$. This is enforced by
$$L_{reg} = \frac{\texttt{pcos}(e)}{n(V)\,(n(V) - 1)}, \qquad (9)$$
where $\texttt{pcos}(X) = \sum_i \sum_j \frac{X_i \cdot X_j}{\lVert X_i \rVert\, \lVert X_j \rVert} - n(V)$ is the sum of the off-diagonal elements of the self-pairwise cosine-similarity matrix. We use a weighted combination of $L_{feat}$ and $L_{reg}$ with weights $\lambda_{feat}$ and $\lambda_{reg}$; together, they form $L_{register} = \lambda_{feat} L_{feat} + \lambda_{reg} L_{reg}$. In our implementation, we set $\lambda_{reg} = 20$ and $\lambda_{feat} = 2$.
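The two losses above can be sketched directly in NumPy. This is an illustration under stated assumptions, not the paper's code: we read Eq. (8) as the squared L2 norm over all feature entries, take the rows of `e` as the $n(V)$ vertex embeddings, and keep the paper's default weights $\lambda_{feat} = 2$, $\lambda_{reg} = 20$.

```python
import numpy as np

def pcos(X):
    """Sum of the off-diagonal entries of the self-pairwise
    cosine-similarity matrix of the row vectors of X (n, D)."""
    Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
    sim = Xn @ Xn.T            # all pairwise cosine similarities
    return sim.sum() - X.shape[0]  # subtract the n diagonal ones

def register_losses(f_dense_dri, f_reg, e, lam_feat=2.0, lam_reg=20.0):
    """L_register = lam_feat * L_feat + lam_reg * L_reg (sketch of Eqs. 8-9)."""
    L_feat = np.sum((f_dense_dri - f_reg) ** 2)  # Eq. (8): squared L2 norm
    n = e.shape[0]                               # n(V): number of embeddings
    L_reg = pcos(e) / (n * (n - 1))              # Eq. (9)
    return lam_feat * L_feat + lam_reg * L_reg
```

Note that $L_{reg}$ is minimized when the embeddings point in dissimilar directions; two identical embeddings give the maximal per-pair penalty of 1.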
Avatar Adaptation and Generation. To adapt to a particular identity, we pick the first frame of a video as the source image and select a random frame as the driving image. We predict the Register Module’s output $f_{reg}$ from the source image’s DINOv2 features $f^{src}_{dense}$ and the driving image’s 3DMM parameters. We then pass the driving image’s 3DMM parameters to the expression branch and $f_{reg}$ to the reconstruction branch. The outputs of the two branches are merged to produce a coarse image, and a neural renderer then produces the fine image. During inference, we skip the Register Module and directly pass the source image’s DINOv2 features $f^{src}_{dense}$ to the reconstruction branch; the rest of the process is the same as in the training stage.
4 Experiments
-------------
### 4.1 Dataset Collection
We propose a new dataset, namely RareFace-50. Prior work uses datasets with a large number of identities, e.g., VFHQ ([xie2022vfhq](https://arxiv.org/html/2506.01935v1#bib.bib55)). However, these datasets mostly include videos of celebrities and well-known faces from television. Thus, they may lack diversity in terms of age and high-frequency facial details, such as wrinkles or unique tattoos (see Figure [4](https://arxiv.org/html/2506.01935v1#S4.F4 "Figure 4 ‣ 4.2 Learned Features ‣ 4 Experiments ‣ Low-Rank Head Avatar Personalization with Registers")). These underrepresented human faces are difficult to faithfully generate with generic networks, such as GAGAvatar ([chu2024gagavatar](https://arxiv.org/html/2506.01935v1#bib.bib9)). Having identified this issue in existing datasets, we collect a video dataset of 50 identities with unique facial details from YouTube. The dataset consists of high-resolution close-up videos shot in 1080p, 2K, and 4K formats. We detect faces, then crop and resize the face images to $512 \times 512$ resolution. The average video duration is around 15 seconds, with 2 videos per identity, for a total of 100 videos. We intend to publish the dataset for research purposes. In addition to RareFace-50, we also use the VFHQ test set to evaluate our method. VFHQ Test consists of 50 high-quality videos of 50 different identities, cropped and resized to $512 \times 512$ resolution. Each video is around 4 to 10 seconds long, in diverse poses and settings.
We pre-process input videos using the tracking pipeline from [chu2024gagavatar](https://arxiv.org/html/2506.01935v1#bib.bib9). This step provides background-matted input frames along with their tracked 3DMM parameters (view pose, eye pose, jaw pose, and FLAME shape and expression parameters). We also pre-compute the visible vertices of the 3DMM mesh fitted to each frame. We then compute the alpha-shape polygon of the projection of the visible vertices, and the set of points that lie within this polygon given the scale of the projection screen. See the supplementary material for more details.
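The last preprocessing step, collecting the pixels that lie inside the alpha-shape polygon, can be approximated with the standard ray-casting point-in-polygon test. This is a generic sketch of that test (the paper's exact implementation may differ), applied independently to each candidate pixel:

```python
def point_in_polygon(p, poly):
    """Ray-casting test: is point p = (x, y) inside the simple polygon
    given as an ordered list of (x, y) vertices?"""
    x, y = p
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        # Does the horizontal ray going right from p cross this edge?
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x_cross > x:
                inside = not inside  # odd number of crossings => inside
    return inside
```

Iterating this test over all pixels of the projection screen yields the interior point set used for the inverse-distance-weighted interpolation of Eq. (5).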
### 4.2 Learned Features
![Figure 4](https://arxiv.org/html/2506.01935v1/extracted/6493591/imgs/ablation_visualization_2x5_latex_names.png)
Figure 4: Visualization of the features learned by the Register Module on our RareFace-50 dataset. We visualize 1) the source image’s DINOv2 features $f^{src}_{dense}$, 2) $f_{proc\text{-}1}$, the output of $E_{proc\text{-}1}$, and 3) $f_{reg}$, the output of the Register Module. We compute the 2nd channel-wise PCA component and standardize the values. We observe that the Register Module improves the learning signal by highlighting face regions and dampening the background.
In Figure [4](https://arxiv.org/html/2506.01935v1#S4.F4 "Figure 4 ‣ 4.2 Learned Features ‣ 4 Experiments ‣ Low-Rank Head Avatar Personalization with Registers"), we visualize the learned features of our Register Module. Specifically, we visualize 1) $f_{proc\text{-}1}$ from $E_{proc\text{-}1}$, and 2) $f_{reg}$ from $E_{proc\text{-}2}$. We compute the 2nd channel-wise PCA component, standardize it, and visualize it with a colormap over the range $[-3\sigma, 3\sigma]$ for the DINOv2 features and $[-\sigma, \sigma]$ for $f_{proc\text{-}1}$ and $f_{reg}$. Since DINOv2 is trained in a self-supervised manner on a diverse dataset to make the features of different augmented views of an input image similar, its predicted features contain information irrelevant to the task at hand, i.e., representing human faces. Moreover, adding $f_{proc\text{-}1}$ changes the characteristics of the DINOv2 features (compare $f^{src}_{dense}$ and $f_{reg}$ in Figure [4](https://arxiv.org/html/2506.01935v1#S4.F4 "Figure 4 ‣ 4.2 Learned Features ‣ 4 Experiments ‣ Low-Rank Head Avatar Personalization with Registers")), altering the distribution in the irrelevant regions. In the supplementary material, we show additional visualization results, other PCA components, and an analysis of the norms of the DINOv2 and Register Module features indicating that they improve meaningfully.
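The channel-wise PCA projection behind these visualizations can be reproduced roughly as follows. This is a sketch of the standard recipe (the function name is ours; the figure additionally clips to a $\pm\sigma$ or $\pm3\sigma$ range before applying the colormap):

```python
import numpy as np

def channel_pca_component(feat, component=1):
    """Project an (H, W, D) feature map onto one channel-wise PCA
    component and standardize to zero mean / unit std.
    component=1 selects the 2nd principal component, as in Figure 4."""
    H, W, D = feat.shape
    X = feat.reshape(-1, D)
    X = X - X.mean(axis=0, keepdims=True)      # center channels
    # PCA directions = right singular vectors of the centered data.
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    proj = X @ Vt[component]                   # scalar value per pixel
    proj = (proj - proj.mean()) / (proj.std() + 1e-8)
    return proj.reshape(H, W)
```

The resulting map can then be rendered with any diverging colormap to compare $f^{src}_{dense}$, $f_{proc\text{-}1}$, and $f_{reg}$ side by side.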
### 4.3 Ablation Study
![Figure 5](https://arxiv.org/html/2506.01935v1/extracted/6493591/imgs/ablation_figure_2x5_labelled_zoomed.png)
Figure 5: Ablation results on the VFHQ Test dataset. Our method better preserves fine details such as wrinkles and blemishes.
Table 1: Ablation study on our proposed Register Module. In (a), we add noise to the DINOv2 features during adaptation. In (b), we add learnable embeddings to the DINOv2 features during adaptation.
We conduct an ablation study comparing our Register Module with two variants. This study shows the contribution of our proposed 3D feature space, which extends the idea of registers to 3D human faces. In variant (a) Gaussian Noise, we sample Gaussian noise $G = (G_{h,w,d}) \in \mathbb{R}^{H \times W \times D_{out}}$, $G_{h,w,d} \overset{\mathrm{iid}}{\sim} \mathcal{N}(0, 1)$, and add it to the features $f^{src}_{dense}$ during the adaptation stage. During inference, we directly use $f^{src}_{dense}$. In variant (b) Learnable Embeddings, we add a learnable embedding dictionary $e_{learn} \in \mathbb{R}^{H \times W \times D_{out}}$ to $f^{src}_{dense}$ and make it trainable during adaptation. Again, during inference we directly use $f^{src}_{dense}$.
Figure [5](https://arxiv.org/html/2506.01935v1#S4.F5 "Figure 5 ‣ 4.3 Ablation Study ‣ 4 Experiments ‣ Low-Rank Head Avatar Personalization with Registers") shows visual results for these variants. Adding Gaussian Noise causes washed-out colors and overly smoothed details. Adding Learnable Embeddings slightly improves the preservation of details and colors; this variant is the immediate extension of registers from [darcet2023vision](https://arxiv.org/html/2506.01935v1#bib.bib11) to our setting. However, our proposed Register Module best preserves high-frequency details, such as the roughness of the face and fine wrinkles, by learning an appropriate 3D feature space for human faces. Table [1](https://arxiv.org/html/2506.01935v1#S4.T1 "Table 1 ‣ 4.3 Ablation Study ‣ 4 Experiments ‣ Low-Rank Head Avatar Personalization with Registers") shows the corresponding quantitative results, demonstrating the efficacy of our module in enhancing identity-specific details: our method achieves better perceptual similarity to the input image and better preserves identity. We encourage readers to watch our supplementary video for additional results demonstrating the efficacy of our Register Module.
### 4.4 Evaluation
Baselines. We compare our method with the generic avatar generation model GAGAvatar ([chu2024gagavatar](https://arxiv.org/html/2506.01935v1#bib.bib9)) as a baseline, and with state-of-the-art adaptation approaches, namely LoRA ([saunders2024talklora](https://arxiv.org/html/2506.01935v1#bib.bib46)) and MetaPortrait ([zhang2023metaportrait](https://arxiv.org/html/2506.01935v1#bib.bib60)), in the cross-reconstruction setting. We use the same rank $r = 32$ for all comparisons. Note that, for a fair comparison, we implement the meta-learning algorithm from MetaPortrait ([zhang2023metaportrait](https://arxiv.org/html/2506.01935v1#bib.bib60)) on LoRA weights.
Evaluation Metrics. To measure visual quality, we select challenging patches with high-frequency details from the predicted frames and compare them against source image patches using Learned Perceptual Image Patch Similarity (LPIPS) ([lpips](https://arxiv.org/html/2506.01935v1#bib.bib62)) in the cross-reconstruction setting. Furthermore, we estimate identity preservation using the Average Content Distance (ACD) metric ([sda](https://arxiv.org/html/2506.01935v1#bib.bib51)), calculated as the cosine distance between ArcFace ([deng2019arcface](https://arxiv.org/html/2506.01935v1#bib.bib12)) face recognition embeddings of the synthesized and source images. The smaller the distance between those embeddings, the closer the synthesized images are to the input source images in terms of identity.
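The ACD metric reduces to an average cosine distance over embedding pairs. The sketch below assumes the ArcFace embeddings have already been extracted, one row per frame (the function name `acd` is our own shorthand):

```python
import numpy as np

def acd(emb_synth, emb_src):
    """Average Content Distance: mean cosine distance between face
    recognition embeddings of synthesized and source images.

    emb_synth, emb_src: (N, D) arrays of paired embeddings.
    Returns a scalar in [0, 2]; lower means better identity preservation.
    """
    a = emb_synth / np.linalg.norm(emb_synth, axis=1, keepdims=True)
    b = emb_src / np.linalg.norm(emb_src, axis=1, keepdims=True)
    # Cosine distance = 1 - cosine similarity, averaged over pairs.
    return float(np.mean(1.0 - np.sum(a * b, axis=1)))
```

Identical embeddings give an ACD of 0, while orthogonal embeddings give 1.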
Table 2: Quantitative comparison of our approach with the baseline and other state-of-the-art adaptation methods. Results are highlighted as follows: Best and Second Best.
Quantitative Evaluation. Table [2](https://arxiv.org/html/2506.01935v1#S4.T2 "Table 2 ‣ 4.4 Evaluation ‣ 4 Experiments ‣ Low-Rank Head Avatar Personalization with Registers") shows our quantitative results. Our method significantly outperforms the state-of-the-art low-rank adaptation methods (LoRA and Meta Learning on LoRA) in terms of both visual quality (LPIPS) and identity preservation (ACD).

Qualitative Evaluation.
![Figure 6](https://arxiv.org/html/2506.01935v1/extracted/6493591/imgs/main_figure_5x6_labelled_zoomed_rearranged_highlighted_highres.png)
Figure 6: Personalized head avatar generation on VFHQ Test (rows 1, 2, and 3) and RareFace-50 (rows 4 and 5). We compare with state-of-the-art adaptation methods (LoRA ([hu2022lora](https://arxiv.org/html/2506.01935v1#bib.bib26)) and Meta-Learning ([zhang2023metaportrait](https://arxiv.org/html/2506.01935v1#bib.bib60))). Our method preserves fine identity-specific details and produces higher-quality results than the other methods.
Figure [6](https://arxiv.org/html/2506.01935v1#S4.F6 "Figure 6 ‣ 4.4 Evaluation ‣ 4 Experiments ‣ Low-Rank Head Avatar Personalization with Registers") shows our qualitative results. Notice how faithfully our method reconstructs fine details and identity-specific features compared to the other methods. GAGAvatar frequently produces washed-out colors (rows 4 and 5) and muted details (rows 1, 2, and 3). While LoRA performs better than GAGAvatar, it still misses fine wrinkles and veins (rows 1, 2, and 3), the contrast between tattoos and skin (rows 4 and 5), and bumpy skin (row 3). Meta Learning on LoRA weights produces artifacts on the face (row 3) and in the eyes (rows 1, 2, and 5), generates wrong expressions compared to the driving image (all rows), misses tattoos on the skin (row 4), and generates wrongly colored lips. In general, our method preserves the high-frequency details of the source identity and produces higher-quality results while using the same number of parameters as the other methods.
5 Conclusion
------------
In conclusion, we introduce a novel method for personalized head avatar generation. State-of-the-art adaptation approaches, such as vanilla LoRA and meta-learning, fail to preserve high-frequency details and identity-specific features. We propose a novel Register Module that enhances the performance of LoRA by teaching the layers to attend to specific regions in the intermediate features of a pre-trained model. To demonstrate the effectiveness of our method, we collect a dataset of talking individuals with distinctive facial features, such as wrinkles and tattoos. Our method outperforms existing methods qualitatively and quantitatively, faithfully capturing unseen identities.
Limitations and Future Work. Although our Register Module successfully captures distinctive facial details, it may produce suboptimal results for extreme side or back views that are rarely, or never, seen in a video. In the future, we plan to extend our work to faithfully animate avatars from such rare views of individuals.
References
----------
* [1] Hangbo Bao, Li Dong, Songhao Piao, and Furu Wei. Beit: Bert pre-training of image transformers. arXiv preprint arXiv:2106.08254, 2021.
* [2] Volker Blanz and Thomas Vetter. A morphable model for the synthesis of 3d faces. In Proceedings of the 26th annual conference on Computer graphics and interactive techniques, pages 187–194, 1999.
* [3] Zhixi Cai, Shreya Ghosh, Kalin Stefanov, Abhinav Dhall, Jianfei Cai, Hamid Rezatofighi, Reza Haffari, and Munawar Hayat. Marlin: Masked autoencoder for facial video representation learning. In CVPR, 2023.
* [4] Sai Tanmay Reddy Chakkera, Aggelina Chatziagapi, and Dimitris Samaras. Jean: Joint expression and audio-guided nerf-based talking face generation. arXiv preprint arXiv:2409.12156, 2024.
* [5] Eric R Chan, Connor Z Lin, Matthew A Chan, Koki Nagano, Boxiao Pan, Shalini De Mello, Orazio Gallo, Leonidas J Guibas, Jonathan Tremblay, Sameh Khamis, et al. Efficient geometry-aware 3d generative adversarial networks. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 16123–16133, 2022.
* [6] Jonathan Chang and James Kelly. minlora: a minimal pytorch library that allows you to apply lora to any pytorch model. [https://github.com/cccntu/minLoRA](https://github.com/cccntu/minLoRA). Accessed: 2025-05-17.
* [7] Aggelina Chatziagapi, Grigorios G Chrysos, and Dimitris Samaras. Mi-nerf: Learning a single face nerf from multiple identities. arXiv preprint arXiv:2403.19920, 2024.
* [8] Kyusun Cho, Joungbin Lee, Heeji Yoon, Yeobin Hong, Jaehoon Ko, Sangjun Ahn, and Seungryong Kim. Gaussiantalker: Real-time talking head synthesis with 3d gaussian splatting. In Proceedings of the 32nd ACM International Conference on Multimedia, pages 10985–10994, 2024.
* [9] Xuangeng Chu and Tatsuya Harada. Generalizable and animatable gaussian head avatar. In The Thirty-eighth Annual Conference on Neural Information Processing Systems, 2024.
* [10] Xuangeng Chu, Yu Li, Ailing Zeng, Tianyu Yang, Lijian Lin, Yunfei Liu, and Tatsuya Harada. Gpavatar: Generalizable and precise head avatar from image(s). arXiv preprint arXiv:2401.10215, 2024.
* [11] Timothée Darcet, Maxime Oquab, Julien Mairal, and Piotr Bojanowski. Vision transformers need registers. arXiv preprint arXiv:2309.16588, 2023.
* [12] Jiankang Deng, Jia Guo, Niannan Xue, and Stefanos Zafeiriou. Arcface: Additive angular margin loss for deep face recognition. In CVPR, 2019.
* [13] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding. In Jill Burstein, Christy Doran, and Thamar Solorio, editors, Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota, June 2019. Association for Computational Linguistics.
* [14] Helisa Dhamo, Yinyu Nie, Arthur Moreau, Jifei Song, Richard Shaw, Yiren Zhou, and Eduardo Pérez-Pellitero. Headgas: Real-time animatable head avatars via 3d gaussian splatting. In European Conference on Computer Vision, pages 459–476. Springer, 2024.
* [15] Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, and Neil Houlsby. An image is worth 16x16 words: Transformers for image recognition at scale. In International Conference on Learning Representations, 2021.
* [16] Hao-Bin Duan, Miao Wang, Jin-Chuan Shi, Xu-Chuan Chen, and Yan-Pei Cao. Bakedavatar: Baking neural fields for real-time head avatar synthesis. ACM Trans. Graph., 42(6), September 2023.
* [17] H. Edelsbrunner, D. Kirkpatrick, and R. Seidel. On the shape of a set of points in the plane. IEEE Transactions on Information Theory, 29(4):551–559, 1983.
* [18] Guy Gafni, Justus Thies, Michael Zollhöfer, and Matthias Nießner. Dynamic neural radiance fields for monocular 4d facial avatar reconstruction, 2020.
* [19] Chen Gao, Yichang Shih, Wei-Sheng Lai, Chia-Kai Liang, and Jia-Bin Huang. Portrait neural radiance fields from a single image. arXiv preprint arXiv:2012.05903, 2020.
* [20] Pablo Garrido, Levi Valgaerts, Ole Rehmsen, Thorsten Thormahlen, Patrick Perez, and Christian Theobalt. Automatic face reenactment. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 4217–4224, 2014.
* [21] Pablo Garrido, Levi Valgaerts, Hamid Sarmadi, Ingmar Steiner, Kiran Varanasi, Patrick Perez, and Christian Theobalt. Vdub: Modifying face video of actors for plausible visual alignment to a dubbed audio track. In Computer graphics forum, volume 34, pages 193–204. Wiley Online Library, 2015.
* [22] Xavier Glorot and Yoshua Bengio. Understanding the difficulty of training deep feedforward neural networks. In Yee Whye Teh and Mike Titterington, editors, Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, volume 9 of Proceedings of Machine Learning Research, pages 249–256, Chia Laguna Resort, Sardinia, Italy, 13–15 May 2010. PMLR.
* [23] Jia Guo, Jiankang Deng, Xiang An, Jack Yu, and Baris Gecer. Insightface repository model zoo.
* [24] Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural computation, 9(8):1735–1780, 1997.
* [25] Neil Houlsby, Andrei Giurgiu, Stanislaw Jastrzebski, Bruna Morrone, Quentin De Laroussilhe, Andrea Gesmundo, Mona Attariyan, and Sylvain Gelly. Parameter-efficient transfer learning for nlp. In International conference on machine learning, pages 2790–2799. PMLR, 2019.
* [26] Edward J Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. LoRA: Low-rank adaptation of large language models. In International Conference on Learning Representations, 2022.
* [27] Bernhard Kerbl, Georgios Kopanas, Thomas Leimkühler, and George Drettakis. 3d gaussian splatting for real-time radiance field rendering. ACM Transactions on Graphics, 42(4), 2023.
* [28] Hyeongwoo Kim, Pablo Garrido, Ayush Tewari, Weipeng Xu, Justus Thies, Matthias Niessner, Patrick Pérez, Christian Richardt, Michael Zollhöfer, and Christian Theobalt. Deep video portraits. ACM Transactions on Graphics (TOG), 37(4):1–14, 2018.
* [29] Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization, 2017.
* [30] Tobias Kirschstein, Simon Giebenhain, and Matthias Nießner. Diffusionavatars: Deferred diffusion for high-fidelity 3d head avatars. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 5481–5492, 2024.
* [31] Tobias Kirschstein, Simon Giebenhain, Jiapeng Tang, Markos Georgopoulos, and Matthias Nießner. Gghead: Fast and generalizable 3d gaussian heads. In SIGGRAPH Asia 2024 Conference Papers, pages 1–11, 2024.
* [32] Jiahe Li, Jiawei Zhang, Xiao Bai, Jin Zheng, Jun Zhou, and Lin Gu. Instag: Learning personalized 3d talking head from few-second video. arXiv preprint arXiv:2502.20387, 2025.
* [33] Tianye Li, Timo Bolkart, Michael J. Black, Hao Li, and Javier Romero. Learning a model of facial shape and expression from 4D scans. ACM Transactions on Graphics (Proc. SIGGRAPH Asia), 36(6):194:1–194:17, 2017.
* [34] Ben Mildenhall, Pratul P. Srinivasan, Matthew Tancik, Jonathan T. Barron, Ravi Ramamoorthi, and Ren Ng. Nerf: Representing scenes as neural radiance fields for view synthesis. In ECCV, 2020.
* [35] Alex Nichol and John Schulman. Reptile: a scalable metalearning algorithm. arXiv preprint arXiv:1803.02999, 2(3):4, 2018.
* [36] Yotam Nitzan, Kfir Aberman, Qiurui He, Orly Liba, Michal Yarom, Yossi Gandelsman, Inbar Mosseri, Yael Pritch, and Daniel Cohen-Or. Mystyle: A personalized generative prior. arXiv preprint arXiv:2203.17272, 2022.
* [37] Maxime Oquab, Timothée Darcet, Théo Moutakanni, Huy Vo, Marc Szafraniec, Vasil Khalidov, Pierre Fernandez, Daniel Haziza, Francisco Massa, Alaaeldin El-Nouby, et al. Dinov2: Learning robust visual features without supervision. arXiv preprint arXiv:2304.07193, 2023.
* [38] Keunhong Park, Utkarsh Sinha, Jonathan T. Barron, Sofien Bouaziz, Dan B Goldman, Steven M. Seitz, and Ricardo Martin-Brualla. Nerfies: Deformable neural radiance fields. ICCV, 2021.
258
+ * [39] Keunhong Park, Utkarsh Sinha, Peter Hedman, Jonathan T Barron, Sofien Bouaziz, Dan B Goldman, Ricardo Martin-Brualla, and Steven M Seitz. Hypernerf: A higher-dimensional representation for topologically varying neural radiance fields. arXiv preprint arXiv:2106.13228, 2021.
259
+ * [40] K R Prajwal, Rudrabha Mukhopadhyay, Vinay P. Namboodiri, and C.V. Jawahar. A lip sync expert is all you need for speech to lip generation in the wild. In Proceedings of the 28th ACM International Conference on Multimedia, page 484–492, 2020.
260
+ * [41] Albert Pumarola, Antonio Agudo, Aleix M Martinez, Alberto Sanfeliu, and Francesc Moreno-Noguer. Ganimation: One-shot anatomically consistent facial animation. International Journal of Computer Vision, 128(3):698–713, 2020.
261
+ * [42] Albert Pumarola, Enric Corona, Gerard Pons-Moll, and Francesc Moreno-Noguer. D-nerf: Neural radiance fields for dynamic scenes. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 10318–10327, 2021.
262
+ * [43] Luchao Qi, Jiaye Wu, Annie N Wang, Shengze Wang, and Roni Sengupta. My3dgen: A scalable personalized 3d generative model. In 2025 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), pages 961–972. IEEE, 2025.
263
+ * [44] Shenhan Qian, Tobias Kirschstein, Liam Schoneveld, Davide Davoli, Simon Giebenhain, and Matthias Nießner. Gaussianavatars: Photorealistic head avatars with rigged 3d gaussians. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 20299–20309, 2024.
264
+ * [45] Tal Reiss, Bar Cavia, and Yedid Hoshen. Detecting deepfakes without seeing any. arXiv preprint arXiv:2311.01458, 2023.
265
+ * [46] Jack Saunders and Vinay Namboodiri. Talklora: Low-rank adaptation for speech-driven animation. arXiv preprint arXiv:2408.13714, 2024.
266
+ * [47] Aliaksandr Siarohin, Stéphane Lathuilière, Sergey Tulyakov, Elisa Ricci, and Nicu Sebe. First order motion model for image animation. In Conference on Neural Information Processing Systems (NeurIPS), December 2019.
267
+ * [48] Sainbayar Sukhbaatar, Jason Weston, Rob Fergus, et al. End-to-end memory networks. Advances in neural information processing systems, 28, 2015.
268
+ * [49] Justus Thies, Michael Zollhofer, Marc Stamminger, Christian Theobalt, and Matthias Nießner. Face2face: Real-time face capture and reenactment of rgb videos. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 2387–2395, 2016.
269
+ * [50] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. Advances in neural information processing systems, 30, 2017.
270
+ * [51] Konstantinos Vougioukas, Stavros Petridis, and Maja Pantic. End-to-end speech-driven realistic facial animation with temporal gans. In CVPRW, 2019.
271
+ * [52] Konstantinos Vougioukas, Stavros Petridis, and Maja Pantic. Realistic speech-driven facial animation with gans. International Journal of Computer Vision, 128(5):1398–1413, 2020.
272
+ * [53] Jie Wang, Jiu-Cheng Xie, Xianyan Li, Feng Xu, Chi-Man Pun, and Hao Gao. Gaussianhead: High-fidelity head avatars with learnable gaussian derivation. IEEE Transactions on Visualization and Computer Graphics, 2025.
273
+ * [54] Jason Weston, Sumit Chopra, and Antoine Bordes. Memory networks. arXiv preprint arXiv:1410.3916, 2014.
274
+ * [55] Liangbin Xie, Xintao Wang, Honglun Zhang, Chao Dong, and Ying Shan. Vfhq: A high-quality dataset and benchmark for video face super-resolution. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 657–666, 2022.
275
+ * [56] Jingyi Xu, Hieu Le, Zhixin Shu, Yang Wang, Yi-Hsuan Tsai, and Dimitris Samaras. Learning frame-wise emotion intensity for audio-driven talking-head generation. arXiv preprint arXiv:2409.19501, 2024.
276
+ * [57] Sicheng Xu, Guojun Chen, Yu-Xiao Guo, Jiaolong Yang, Chong Li, Zhenyu Zang, Yizhong Zhang, Xin Tong, and Baining Guo. Vasa-1: Lifelike audio-driven talking faces generated in real time. arXiv preprint arXiv:2404.10667, 2024.
277
+ * [58] Yuelang Xu, Benwang Chen, Zhe Li, Hongwen Zhang, Lizhen Wang, Zerong Zheng, and Yebin Liu. Gaussian head avatar: Ultra high-fidelity head avatar via dynamic gaussians. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2024.
278
+ * [59] Zhixuan Yu, Ziqian Bai, Abhimitra Meka, Feitong Tan, Qiangeng Xu, Rohit Pandey, Sean Fanello, Hyun Soo Park, and Yinda Zhang. One2avatar: Generative implicit head avatar for few-shot user adaptation. arXiv preprint arXiv:2402.11909, 2024.
279
+ * [60] Bowen Zhang, Chenyang Qi, Pan Zhang, Bo Zhang, HsiangTao Wu, Dong Chen, Qifeng Chen, Yong Wang, and Fang Wen. Metaportrait: Identity-preserving talking head generation with fast personalized adaptation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 22096–22105, 2023.
280
+ * [61] Lvmin Zhang, Anyi Rao, and Maneesh Agrawala. Adding conditional control to text-to-image diffusion models. In Proceedings of the IEEE/CVF international conference on computer vision, pages 3836–3847, 2023.
281
+ * [62] Richard Zhang, Phillip Isola, Alexei A Efros, Eli Shechtman, and Oliver Wang. The unreasonable effectiveness of deep features as a perceptual metric. In CVPR, 2018.
282
+ * [63] Hang Zhou, Yasheng Sun, Wayne Wu, Chen Change Loy, Xiaogang Wang, and Ziwei Liu. Pose-controllable talking face generation by implicitly modularized audio-visual representation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2021.
283
+ * [64] Zheng Zhu, Guan Huang, Jiankang Deng, Yun Ye, Junjie Huang, Xinze Chen, Jiagang Zhu, Tian Yang, Jiwen Lu, Dalong Du, and Jie Zhou. Webface260m: A benchmark unveiling the power of million-scale deep face recognition. In CVPR, 2021.
284
+ * [65] Wojciech Zielonka, Timo Bolkart, and Justus Thies. Instant volumetric head avatars. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 4574–4584, 2023.
285
Appendix
--------

The appendix is organized as follows:

1. Additional Ablation Study in Sec. [A](https://arxiv.org/html/2506.01935v1#A1)
2. Implementation Details in Sec. [B](https://arxiv.org/html/2506.01935v1#A2)
3. Dataset Collection Details in Sec. [C](https://arxiv.org/html/2506.01935v1#A3)
4. Additional Results in Sec. [D](https://arxiv.org/html/2506.01935v1#A4)
5. User Study Details in Sec. [E](https://arxiv.org/html/2506.01935v1#A5)
6. Discussion: Broader Impact, Limitations, and Ethical Considerations in Sec. [F](https://arxiv.org/html/2506.01935v1#A6)

We strongly encourage readers to watch our supplementary video.
Appendix A Additional Ablation Study
------------------------------------

Table 3: Ablation study on the losses used during adaptation with our method. In (a), we remove $L_{feat}$ during adaptation. In (b), we remove $L_{reg}$ during adaptation.

Table 4: Ablation study on the length of the video used to adapt the head avatar. We set the adaptation video length to 4, 2, and 1 seconds.

We conduct additional ablation studies on our proposed method. Specifically, we ablate the proposed losses. In Table [3](https://arxiv.org/html/2506.01935v1#A1.T3)(a), we remove $L_{feat}$, the loss that supervises the output of our Register Module during adaptation, and in (b) we remove $L_{reg}$, the loss that encourages the learned embeddings in our Register Module to differ from each other. Removing either loss causes a drop in performance compared to using both. Furthermore, in Table [4](https://arxiv.org/html/2506.01935v1#A1.T4) we ablate the length of the videos used to adapt our head avatars. Reducing the length of the adaptation video degrades performance. Note that we trim the original videos to the first 4, 2, and 1 seconds for these experiments.
Appendix B Implementation Details
---------------------------------

### B.1 Addition of LoRA to layers

We use the minLoRA([minLoRA,](https://arxiv.org/html/2506.01935v1#bib.bib6)) library to add LoRA([hu2022lora,](https://arxiv.org/html/2506.01935v1#bib.bib26)) parameters to all layers of a pretrained PyTorch model. During training, LoRA is instantiated as separate parameters $B\in\mathbb{R}^{m\times r}$ and $A\in\mathbb{R}^{r\times n}$ with $r\ll\min(m,n)$, kept apart from the pretrained parameters $W_{pre}\in\mathbb{R}^{m\times n}$ of the module, so that only these low-rank parameters are trained. During inference, the LoRA parameters are merged with the pretrained parameters and assigned as the new pretrained parameters according to:

$$W_{adapt} := W_{pre} + \Delta W = W_{pre} + BA. \qquad (10)$$

This ensures that the LoRA layers add no overhead to the pipeline during inference. Given a pretrained GAGAvatar model, we add LoRA weights to all layers except the DINOv2 feature extractor.
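The merge in Eq. (10) can be sketched in a few lines. This is a toy illustration of the shapes and the one-time merge, not a reproduction of the minLoRA API; the helper names (`matmul`, `merge_lora`) and the tiny dimensions are ours.

```python
def matmul(X, Y):
    """Plain-Python matrix product, to keep the sketch dependency-free."""
    rows, inner, cols = len(X), len(Y), len(Y[0])
    return [[sum(X[i][k] * Y[k][j] for k in range(inner)) for j in range(cols)]
            for i in range(rows)]

def merge_lora(W_pre, B, A):
    """W_adapt := W_pre + B @ A (Eq. 10); done once, so inference cost is unchanged."""
    delta = matmul(B, A)  # rank of the update is at most r
    return [[W_pre[i][j] + delta[i][j] for j in range(len(W_pre[0]))]
            for i in range(len(W_pre))]

m, n, r = 4, 3, 1  # in practice r << min(m, n)
W_pre = [[1.0] * n for _ in range(m)]   # frozen pretrained weight
B = [[0.0] for _ in range(m)]           # common LoRA init: B = 0, so Delta W = 0 at start
A = [[0.5, -0.5, 0.0]]
W_adapt = merge_lora(W_pre, B, A)
assert W_adapt == W_pre  # with B = 0, the merged weight equals the pretrained one
```

Note that the trainable-parameter count is $mr + rn$ instead of $mn$, which is where the savings of low-rank adaptation come from.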
### B.2 Dataset Preprocessing

Given the expression code, shape code, and camera pose predicted during 3DMM fitting, we predict a 3DMM mesh. We compute the visible vertices of the 3DMM mesh from the estimated camera pose of a particular frame using trimesh's RayMeshIntersector. Specifically, we cast rays from the camera origin to each vertex of the mesh and check whether any ray intersects the mesh before reaching the vertex (if such an intersection exists, the vertex is not visible). Given these visible points in 3D space, we compute a screen-space perspective projection onto a plane of the same size as the DINOv2 feature space ($H=W=296$). We then compute an alpha shape of the projected points with $\alpha=0.065$, and find all pixels inside the alpha-shape polygon using a parallelized point-in-polygon test. For each pixel in the polygon that is not itself a projected point, we also compute the $k$ nearest projected points and the distances to them, with $k=11$.
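The per-pixel k-nearest-neighbor lookup above feeds a weighted interpolation (Sec. B.3). As a hedged sketch of that step: the paper specifies a weighted sum over the $k$ nearest projected points but not the exact weights, so this toy version uses inverse-distance weighting as one plausible choice.

```python
import math

def knn_interpolate(query, points, feats, k=11, eps=1e-8):
    """Interpolate a feature vector at `query` from (point, feature) pairs,
    using an inverse-distance-weighted sum over the k nearest points."""
    dists = [math.dist(query, p) for p in points]
    order = sorted(range(len(points)), key=lambda i: dists[i])[:k]
    weights = [1.0 / (dists[i] + eps) for i in order]
    total = sum(weights)
    dim = len(feats[0])
    return [sum(w * feats[i][d] for w, i in zip(weights, order)) / total
            for d in range(dim)]

# Tiny example: a pixel midway between two projected vertices receives the
# average of their features; the distant third vertex is outside the k-set.
points = [(0.0, 0.0), (2.0, 0.0), (10.0, 10.0)]
feats = [[1.0], [3.0], [100.0]]
val = knn_interpolate((1.0, 0.0), points, feats, k=2)
assert abs(val[0] - 2.0) < 1e-6
```

Restricting `points` to visible vertices, as the paper does, keeps far-away projections (e.g., from the back of the head) out of the k-neighbor sets.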
### B.3 Register Module Details

The embeddings rigged to vertices of the 3DMM mesh([FLAME:SiggraphAsia2017,](https://arxiv.org/html/2506.01935v1#bib.bib33)) live in a 3D space in our Register Module. These points are projected onto a 2D plane using a camera projection, after which we interpolate the features using a weighted sum over the $k$ nearest neighbors to fill the face region of the densely constructed feature $f_S$. Using the entire set of vertices would project all points onto $f_S$ and corrupt the interpolation: projections of points from the back of the head could appear among the $k$ nearest neighbors of a point $p\in interior(P_{U_S})$. Thus, we only project the points visible from the given view in our Register Module.

We model $E_{proc-1}$ as a convolutional module with 4 convolutional layers of channel sizes $[512, 512, 256, 256]$ and a kernel size of 3 in each layer. $E_{proc-2}$ is also a convolutional module with 4 layers, each with 256 channels. The first 3 layers have a kernel size of 3, while the last layer has a kernel size of 1.
### B.4 Training Details

#### B.4.1 Our Method

We initialize the embeddings $e$ using Xavier Normal initialization([pmlr-v9-glorot10a,](https://arxiv.org/html/2506.01935v1#bib.bib22)). We adapt head avatars with our method for a total of 1000 iterations with a batch size of 2. We use the Adam([kingma2017adammethodstochasticoptimization,](https://arxiv.org/html/2506.01935v1#bib.bib29)) optimizer with a learning rate of 1e-4 for the LoRA layers and 1e-3 for the parameters of the Register Module. We use a linear learning rate scheduler with a start factor of 1.0 and an end factor of 0.1 at the 1000th iteration. Along with our proposed losses, we also keep the losses proposed by GAGAvatar[chu2024gagavatar](https://arxiv.org/html/2506.01935v1#bib.bib9): RGB losses between the predicted and driving images, a perceptual loss between the predicted and driving images, and $L_{lifting}$, a loss between the points predicted by the reconstruction branch and the vertices of the 3DMM mesh fitted to the driving image. Our adaptation takes ≈35 minutes on an RTX A5000 GPU, consuming ≈23 GB of VRAM. During inference, we load and merge the LoRA weights into their corresponding layer parameters. Thus, there is no overhead during inference, i.e., our method consumes the same resources as GAGAvatar.
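The linear learning rate schedule above (start factor 1.0, end factor 0.1 at iteration 1000) can be sketched as a small function, mirroring the behavior of PyTorch's `LinearLR` scheduler without the torch dependency; the function name and signature are ours.

```python
def linear_lr(base_lr, step, total_steps=1000, start_factor=1.0, end_factor=0.1):
    """Learning rate at `step`: linear interpolation between the two factors,
    clamped after `total_steps` (as in torch.optim.lr_scheduler.LinearLR)."""
    t = min(step, total_steps) / total_steps
    factor = start_factor + (end_factor - start_factor) * t
    return base_lr * factor

assert linear_lr(1e-4, 0) == 1e-4                 # full LR at the start
assert abs(linear_lr(1e-4, 1000) - 1e-5) < 1e-12  # 0.1x at iteration 1000
assert abs(linear_lr(1e-4, 500) - 5.5e-5) < 1e-12 # halfway: factor 0.55
```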
#### B.4.2 Baselines and Ablations

For all comparisons with vanilla LoRA, we use the same hyperparameters as our method: we adapt for a total of 1000 iterations with a batch size of 2, use the Adam optimizer with a learning rate of 1e-4 for the LoRA layers, and use a linear learning rate scheduler with a start factor of 1.0 and an end factor of 0.1 at the 1000th iteration. This adaptation takes ≈25 minutes on an RTX A5000 GPU, consuming ≈14.9 GB of VRAM.
Following MetaPortrait([zhang2023metaportrait,](https://arxiv.org/html/2506.01935v1#bib.bib60)), we implement Reptile[nichol2018reptile](https://arxiv.org/html/2506.01935v1#bib.bib35), a MAML-based meta-learning strategy, for our low-rank adaptation task. We perform pre-training on our RareFace-50 dataset. Following the formulation of MetaPortrait, we cast adapting to a particular identity as an inner-loop task, adapting to a randomly sampled identity at each outer step. For all comparisons with meta-learning on LoRA, we set the rank $r=32$, the inner-loop learning rate to 2e-4, and the outer update step size to 2e-5. The number of inner-loop steps is set to 120, with 4 elements per inner-loop batch, and the number of outer iterations to 4800. Given resource constraints, we implement a single-GPU version of Reptile([nichol2018reptile,](https://arxiv.org/html/2506.01935v1#bib.bib35)), which takes 12 days to complete the pre-training task on a Quadro RTX 8000 GPU, consuming ≈45 GB of VRAM. After pre-training, we adapt the model to an identity for 120 steps with the same learning rate as the inner loop, which takes ≈4 minutes on an RTX A5000 GPU, consuming ≈14.9 GB of VRAM. We then merge these adapted LoRA weights into the corresponding layers for inference.
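One Reptile outer step as used in the meta-learning baseline can be sketched as follows: run a few inner SGD steps on one sampled identity's loss, then move the meta parameters a small step toward the adapted ones. The quadratic toy loss, the gradient function, and the small step counts here are stand-ins, not the paper's training objective.

```python
def inner_adapt(theta, grad_fn, steps, inner_lr):
    """Plain SGD on one sampled identity (the inner-loop task)."""
    for _ in range(steps):
        theta = [t - inner_lr * g for t, g in zip(theta, grad_fn(theta))]
    return theta

def reptile_outer_step(theta, grad_fn, steps=120, inner_lr=2e-4, outer_lr=2e-5):
    """Reptile update: theta <- theta + outer_lr * (theta_adapted - theta)."""
    adapted = inner_adapt(list(theta), grad_fn, steps, inner_lr)
    return [t + outer_lr * (a - t) for t, a in zip(theta, adapted)]

# Toy quadratic loss 0.5 * ||theta - target||^2 for one "identity".
target = [1.0, -2.0]
grad_fn = lambda th: [t - tgt for t, tgt in zip(th, target)]
theta = [0.0, 0.0]
new_theta = reptile_outer_step(theta, grad_fn)
# The meta parameters moved (slightly) toward this identity's optimum.
assert 0.0 < new_theta[0] < target[0] and target[1] < new_theta[1] < 0.0
```

In the actual baseline, `theta` would be the LoRA parameters and `grad_fn` the gradient of the avatar reconstruction losses on one identity's frames.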
For all experiments with learnable parameters $e_{learn}$ as a replacement for our Register Module, we set the learning rate for the $e_{learn}$ parameters to 1e-3.
### B.5 Metrics

We compute the visual quality metric LPIPS[lpips](https://arxiv.org/html/2506.01935v1#bib.bib62) on specific challenging crops with high-frequency details from the predicted frames and compare them against the corresponding source image patches. The identity preservation metric (ACD) is measured using ArcFace[deng2019arcface](https://arxiv.org/html/2506.01935v1#bib.bib12), a ResNet50-based network trained on WebFace[zhu2021webface260m](https://arxiv.org/html/2506.01935v1#bib.bib64). Specifically, we use the "buffalo_l" model from the insightface repository[insightface](https://arxiv.org/html/2506.01935v1#bib.bib23).
Appendix C Data Collection
--------------------------

We collect data from YouTube of people with distinctive facial details, such as wrinkles or tattoos, who knowingly appear in interviews in public broadcasts. These characteristics are under-represented in existing datasets. We will release the dataset as a set of links, along with trim times and crop coordinates. Additionally, the dataset will be maintained by an automatic script that checks the list and removes links that no longer exist on YouTube.
Appendix D Additional Results
-----------------------------

### D.1 Details of Adaptation

Adaptation Duration. Here we compare the adaptation durations of the baselines and our method. Vanilla LoRA and our method take ≈25 and ≈35 minutes, respectively, to adapt on an RTX A5000 GPU. In contrast, meta-learning on LoRA requires a much longer 12-day pre-training period on a Quadro RTX 8000 GPU, after which adaptation takes ≈4 minutes on an RTX A5000 GPU. During inference, however, all of these methods have the same inference time as GAGAvatar[chu2024gagavatar](https://arxiv.org/html/2506.01935v1#bib.bib9).
Adaptation Parameters. Here we compare the number of parameters used during adaptation and inference. During adaptation, all baselines introduce 4.7M parameters as LoRA weights added to the pretrained layers. Our Register Module adds another 18.5M parameters, so our method trains 23.2M parameters in total during adaptation, which is ≈11% of the 199M parameters in GAGAvatar. After adaptation, we discard the trained Register Module, which yields the same inference efficiency as GAGAvatar and the other baselines.
### D.2 User Study

![Figure 7](https://arxiv.org/html/2506.01935v1/extracted/6493591/suppl_imgs/pie_charts_user_study.png)

Figure 7: User Study. Preference (%) in terms of identity preservation and visual quality, comparing LoRA([hu2022lora,](https://arxiv.org/html/2506.01935v1#bib.bib26)) and our method.

We conduct a user study to qualitatively and subjectively compare our method against LoRA (see Sec. [E](https://arxiv.org/html/2506.01935v1#A5) for details). The results are shown in Fig. [7](https://arxiv.org/html/2506.01935v1#A4.F7). We find that 78.2% of users prefer our method over LoRA in terms of identity preservation, and 89.9% prefer our results over LoRA in terms of visual quality.
### D.3 Additional Qualitative Results

![Figure 8](https://arxiv.org/html/2506.01935v1/extracted/6493591/suppl_imgs/visualization_of_capture_features.png)

Figure 8: Facial details that are well captured by our method with the Register Module: wrinkles and skin folds are realistic and of higher quality than with vanilla LoRA. Please note the enlarged insets of specific details.

![Figure 9](https://arxiv.org/html/2506.01935v1/extracted/6493591/suppl_imgs/additional_results_vfhqtest_7x6_extremely_highres_new.png)

Figure 9: Additional results of personalized head avatar generation on VFHQ Test. Please zoom in for better details.

![Figure 10](https://arxiv.org/html/2506.01935v1/extracted/6493591/suppl_imgs/additional_results_rareface-50_7x6_extremely_highres_new.png)

Figure 10: Additional results of personalized head avatar generation on our RareFace-50 dataset. Please zoom in for better details.

![Figure 11](https://arxiv.org/html/2506.01935v1/extracted/6493591/suppl_imgs/norm_visualization_labeled_facetattoo_7x5.png)

Figure 11: Visualization of the features learned by the Register Module on our RareFace-50 dataset. We visualize 1) the source image's DINOv2 feature $f^{src}_{dense}$, 2) $f_{proc-1}$, the output of $E_{proc-1}$, and 3) $f_{reg}$, the output of the Register Module. We compute the norms of the features along the embedding dimensions and standardize the values.

![Figure 12](https://arxiv.org/html/2506.01935v1/extracted/6493591/suppl_imgs/norm_visualization_labeled_michaeldouglas_7x5.png)

Figure 12: Visualization of the features learned by the Register Module on our RareFace-50 dataset. We visualize 1) the source image's DINOv2 feature $f^{src}_{dense}$, 2) $f_{proc-1}$, the output of $E_{proc-1}$, and 3) $f_{reg}$, the output of the Register Module. We compute the norms of the features along the embedding dimensions and standardize the values.

![Figure 13](https://arxiv.org/html/2506.01935v1/extracted/6493591/suppl_imgs/pca_1_viz_2x5_ft.png)

Figure 13: Visualization of the features learned by the Register Module on our RareFace-50 dataset. We compute the 1st channel-wise PCA component and standardize the values.

![Figure 14](https://arxiv.org/html/2506.01935v1/extracted/6493591/suppl_imgs/pca_2_viz_2x5_ft.png)

Figure 14: Visualization of the features learned by the Register Module on our RareFace-50 dataset. We compute the 2nd channel-wise PCA component and standardize the values.

![Figure 15](https://arxiv.org/html/2506.01935v1/extracted/6493591/suppl_imgs/pca_3_viz_2x5_ft.png)

Figure 15: Visualization of the features learned by the Register Module on our RareFace-50 dataset. We compute the 3rd channel-wise PCA component and standardize the values.

In Fig. [8](https://arxiv.org/html/2506.01935v1#A4.F8), we show that, compared to LoRA, our method effectively captures high-frequency details like wrinkles. We show additional results of our method against the baselines on VFHQ Test in Fig. [9](https://arxiv.org/html/2506.01935v1#A4.F9) and on RareFace-50 in Fig. [10](https://arxiv.org/html/2506.01935v1#A4.F10). Figs. [11](https://arxiv.org/html/2506.01935v1#A4.F11) and [12](https://arxiv.org/html/2506.01935v1#A4.F12) show visualizations of feature norms along the channel dimensions for two identities from RareFace-50. The values are visualized using colormaps over $[-3\sigma, 3\sigma]$ for $f^{src}_{dense}$ and $f_{reg}$, and over $[-\sigma, \sigma]$ for $f_{proc-1}$. We visualize the first three channel-wise PCA components in Figs. [13](https://arxiv.org/html/2506.01935v1#A4.F13) to [15](https://arxiv.org/html/2506.01935v1#A4.F15). We observe that the Register Module improves the learning signal by highlighting face regions and dampening background regions.
Appendix E User Study Details
-----------------------------

![Figure 16](https://arxiv.org/html/2506.01935v1/extracted/6493591/suppl_imgs/user_study_details.png)

Figure 16: User Study Interface. We ask each user to watch 8 videos and answer which method preserves the source image identity and which method has the best visual quality.

As mentioned in Sec. [D.2](https://arxiv.org/html/2506.01935v1#A4.SS2), we qualitatively compare our method with vanilla LoRA as an adaptation method through a user study, whose details we describe here. Fig. [16](https://arxiv.org/html/2506.01935v1#A5.F16) shows the interface we use. A total of 18 users responded to our user study. We generate head avatars from source videos in VFHQ Test and RareFace-50 using our method and LoRA. The outputs are placed side by side, with the left-right order assigned randomly so that users cannot tell which method is ours. Each generated video is ≈5 to 10 seconds long, concatenated with the source identity image and the driving video. Users are asked two questions: "Which method's avatar best looks like the source image identity?" and "Which method's avatar has better visual quality?" Users can answer "Method A", "Method B", or both. The label "Method A" is placed to the left of "Method B". Answers are collected through a Google Form. The videos are attached to the form via a link to Google Drive, and users are encouraged to download the videos and view them on their own systems, to ensure that differences at high resolution are evident.
Appendix F Discussions
----------------------

### F.1 Limitations

An important factor in our method is the 3DMM fitting used to extract the head pose, camera parameters, and 3DMM mesh parameters (see Sec. 3.3 of the main paper). This fitting can be noisy, and the error can propagate to the final generated videos. Improving the face tracking would be interesting future work. Furthermore, 3DMM fitting does not model asymmetric or extreme expressions (such as winking) or the movement of the tongue, which is another interesting line of work to pursue.

### F.2 Ethical Considerations and Broader Impacts

While our method holds significant promise across diverse applications, it also carries the risk of abuse, for example in creating "deep fakes" that malicious users could exploit to spread misinformation. To counter this, it is imperative to develop forensic tools that detect fake videos [cai2022marlin](https://arxiv.org/html/2506.01935v1#bib.bib3); [reiss2023detecting](https://arxiv.org/html/2506.01935v1#bib.bib45). We intend to share our code, dataset, and models to support this research, releasing them under strict licenses that permit academic research only. Used ethically and responsibly, our method can offer profound benefits across industries, from video conferencing to entertainment. In addition, we have put appropriate procedures in place (see Sec. [C](https://arxiv.org/html/2506.01935v1#A3)) to ensure fair and safe use of the videos in the dataset we collect.