---
language:
- en
library_name: diffusers
license: mit
pipeline_tag: image-to-image
---
# ID-Consistent, Precise Expression Generation with Blendshape-Guided Diffusion (Arc2Face Extension)
<div align="center">
[**Project Page**](https://arc2face.github.io/) **|** [**Expression Adapter Paper (Hugging Face)**](https://huggingface.co/papers/2510.04706) **|** [**Original Arc2Face Paper (ArXiv)**](https://arxiv.org/abs/2403.11641) **|** [**Code**](https://github.com/foivospar/Arc2Face) **|** [🤗 **Gradio demo**](https://huggingface.co/spaces/FoivosPar/Arc2Face)
</div>
## Introduction
This repository hosts the **Arc2Face** model, extended with **ID-Consistent, Precise Expression Generation with Blendshape-Guided Diffusion**. Arc2Face is an ID-conditioned face model that generates diverse, ID-consistent photos of a person given only their ArcFace ID-embedding. This extension enhances Arc2Face with a fine-grained Expression Adapter, enabling the generation of any subject under any desired facial expression. The adapter adopts a compositional design featuring an expression cross-attention module guided by FLAME blendshape parameters for explicit control. Trained on a diverse mixture of image and video data rich in expressive variation, it generalizes beyond basic emotions to subtle micro-expressions and expressive transitions. Additionally, a pluggable Reference Adapter enables expression editing in real images by transferring the appearance from a reference frame during synthesis.
## Model Details
Arc2Face consists of 2 core components:
- **Encoder**: a finetuned CLIP ViT-L/14 model, tailored for projecting ID-embeddings to the CLIP latent space.
- **Arc2Face UNet**: a finetuned UNet model, adapted from [runwayml/stable-diffusion-v1-5](https://huggingface.co/runwayml/stable-diffusion-v1-5) for ID-to-face generation, conditioned solely on ID vectors.
## ControlNet (pose)
We also provide a ControlNet model trained on top of Arc2Face for pose control.
<div align="center">
<img src='assets/controlnet_short.jpg'>
</div>
## Arc2Face + Expression Adapter
Our extension ["ID-Consistent, Precise Expression Generation with Blendshape-Guided Diffusion"](https://huggingface.co/papers/2510.04706) combines Arc2Face with a custom IP-Adapter designed for generating ID-consistent images with precise expression control based on FLAME blendshape parameters. We also provide an optional Reference Adapter which can be used to condition the output directly on the input image, i.e. preserving the subject's appearance and background (to an extent). You can find more details in the report.
<div align="center">
<img src='https://github.com/foivospar/Arc2Face/raw/main/assets/arc2face_exp.jpg'>
</div>
## Download Core Models (Arc2Face & ControlNet)
The core Arc2Face and ControlNet models can be downloaded directly from this repository or via Python:
```python
from huggingface_hub import hf_hub_download
hf_hub_download(repo_id="FoivosPar/Arc2Face", filename="arc2face/config.json", local_dir="./models")
hf_hub_download(repo_id="FoivosPar/Arc2Face", filename="arc2face/diffusion_pytorch_model.safetensors", local_dir="./models")
hf_hub_download(repo_id="FoivosPar/Arc2Face", filename="encoder/config.json", local_dir="./models")
hf_hub_download(repo_id="FoivosPar/Arc2Face", filename="encoder/pytorch_model.bin", local_dir="./models")
hf_hub_download(repo_id="FoivosPar/Arc2Face", filename="controlnet/config.json", local_dir="./models")
hf_hub_download(repo_id="FoivosPar/Arc2Face", filename="controlnet/diffusion_pytorch_model.safetensors", local_dir="./models")
```
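Alternatively, the same subfolders can be fetched in a single call with `snapshot_download` (a minimal sketch using the standard `huggingface_hub` API; the patterns simply mirror the files listed above):
```python
from huggingface_hub import snapshot_download

# Download the arc2face, encoder and controlnet subfolders in one call
snapshot_download(
    repo_id="FoivosPar/Arc2Face",
    local_dir="./models",
    allow_patterns=["arc2face/*", "encoder/*", "controlnet/*"],
)
```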
## Download Expression Adapter Models
Download the Expression and Reference Adapters:
```python
from huggingface_hub import hf_hub_download
hf_hub_download(repo_id="FoivosPar/Arc2Face", filename="exp_adapter/exp_adapter.bin", local_dir="./models")
hf_hub_download(repo_id="FoivosPar/Arc2Face", filename="ref_adapter/pytorch_lora_weights.safetensors", local_dir="./models")
```
## Download Third-Party Models
1) For face detection and ID-embedding extraction, manually download the [antelopev2](https://github.com/deepinsight/insightface/tree/master/python-package#model-zoo) package ([direct link](https://drive.google.com/file/d/18wEUfMNohBJ4K3Ly5wpTejPfDzp-8fI8/view)) and place the checkpoints under `models/antelopev2`.
2) We use an ArcFace recognition model trained on WebFace42M. Download `arcface.onnx` from [HuggingFace](https://huggingface.co/FoivosPar/Arc2Face) and put it in `models/antelopev2`, or download it via Python:
```python
from huggingface_hub import hf_hub_download

hf_hub_download(repo_id="FoivosPar/Arc2Face", filename="arcface.onnx", local_dir="./models/antelopev2")
```
3) Then **delete** `glintr100.onnx` (the default backbone from insightface).
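For example, assuming the antelopev2 checkpoints were placed under `./models/antelopev2` as above:
```bash
rm models/antelopev2/glintr100.onnx
```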
The `models` folder structure should finally be:
```
. ── models ──┬── antelopev2
              ├── arc2face
              └── encoder
```
4) For the Expression Adapter, we use the [SMIRK](https://github.com/georgeretsi/smirk) method to extract FLAME expression parameters from the target image. Download the required checkpoints **face_landmarker.task** and **SMIRK_em1.pt** and put them under `models/smirk`:
```bash
mkdir models/smirk
wget https://storage.googleapis.com/mediapipe-models/face_landmarker/face_landmarker/float16/latest/face_landmarker.task --directory-prefix models/smirk
pip install gdown
gdown --id 1T65uEd9dVLHgVw5KiUYL66NUee-MCzoE -O models/smirk/
```
## Sample Usage (Original Arc2Face)
Load pipeline using [diffusers](https://huggingface.co/docs/diffusers/index):
```python
from diffusers import (
    StableDiffusionPipeline,
    UNet2DConditionModel,
    DPMSolverMultistepScheduler,
)

from arc2face import CLIPTextModelWrapper, project_face_embs

import torch
from insightface.app import FaceAnalysis
from PIL import Image
import numpy as np

# Arc2Face is built upon SD1.5
# The repo below can be used instead of the now deprecated 'runwayml/stable-diffusion-v1-5'
base_model = 'stable-diffusion-v1-5/stable-diffusion-v1-5'

encoder = CLIPTextModelWrapper.from_pretrained(
    'models', subfolder="encoder", torch_dtype=torch.float16
)

unet = UNet2DConditionModel.from_pretrained(
    'models', subfolder="arc2face", torch_dtype=torch.float16
)

pipeline = StableDiffusionPipeline.from_pretrained(
    base_model,
    text_encoder=encoder,
    unet=unet,
    torch_dtype=torch.float16,
    safety_checker=None
)
```
You can use any SD-compatible schedulers and steps, just like with Stable Diffusion. By default, we use `DPMSolverMultistepScheduler` with 25 steps, which produces very good results in just a few seconds.
```python
pipeline.scheduler = DPMSolverMultistepScheduler.from_config(pipeline.scheduler.config)
pipeline = pipeline.to('cuda')
```
Pick an image and extract the ID-embedding:
```python
app = FaceAnalysis(name='antelopev2', root='./', providers=['CUDAExecutionProvider', 'CPUExecutionProvider'])
app.prepare(ctx_id=0, det_size=(640, 640))
img = np.array(Image.open('assets/examples/joacquin.png'))[:,:,::-1]  # RGB -> BGR (insightface expects OpenCV-style BGR input)
faces = app.get(img)
faces = sorted(faces, key=lambda x:(x['bbox'][2]-x['bbox'][0])*(x['bbox'][3]-x['bbox'][1]))[-1] # select largest face (if more than one detected)
id_emb = torch.tensor(faces['embedding'], dtype=torch.float16)[None].cuda()
id_emb = id_emb/torch.norm(id_emb, dim=1, keepdim=True) # normalize embedding
id_emb = project_face_embs(pipeline, id_emb) # pass through the encoder
```
<div align="center">
<img src='https://github.com/foivospar/Arc2Face/raw/main/assets/examples/joacquin.png' style='width:25%;'>
</div>
Generate images:
```python
num_images = 4
images = pipeline(prompt_embeds=id_emb, num_inference_steps=25, guidance_scale=3.0, num_images_per_prompt=num_images).images
```
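The pipeline returns standard PIL images, so you can, for example, save the samples directly to disk:
```python
import os

os.makedirs('outputs', exist_ok=True)
for i, img in enumerate(images):
    img.save(f'outputs/sample_{i}.png')
```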
<div align="center">
<img src='https://github.com/foivospar/Arc2Face/raw/main/assets/samples.jpg'>
</div>
## Sample Usage (Expression Adapter)
To run the local Gradio demo for the Expression Adapter, after downloading the necessary models as described above, simply run:
```bash
python gradio_demo/app_exp_adapter.py
```
## LCM-LoRA acceleration
[LCM-LoRA](https://arxiv.org/abs/2311.05556) allows you to reduce the sampling steps to as few as 2-4 for super-fast inference. Just plug in the pre-trained distillation adapter for SD v1.5 and switch to `LCMScheduler`:
```python
from diffusers import LCMScheduler
pipeline.load_lora_weights("latent-consistency/lcm-lora-sdv1-5")
pipeline.scheduler = LCMScheduler.from_config(pipeline.scheduler.config)
```
Then, you can sample with as few as 2 steps (and disable `guidance_scale` by using a value of 1.0, as LCM is very sensitive to it and even small values lead to oversaturation):
```python
images = pipeline(prompt_embeds=id_emb, num_inference_steps=2, guidance_scale=1.0, num_images_per_prompt=num_images).images
```
Note that this technique accelerates sampling in exchange for a slight drop in quality.
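If you want to switch back to the standard sampler afterwards, you can unload the LCM-LoRA and restore the scheduler (a minimal sketch, assuming a diffusers version that provides `unload_lora_weights`):
```python
# Remove the LCM-LoRA weights and go back to DPM-Solver sampling
pipeline.unload_lora_weights()
pipeline.scheduler = DPMSolverMultistepScheduler.from_config(pipeline.scheduler.config)
```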
## Start a local gradio demo
You can start a local demo for inference by running:
```bash
python gradio_demo/app.py
```
## Arc2Face + ControlNet (pose)
<div align="center">
<img src='https://github.com/foivospar/Arc2Face/raw/main/assets/controlnet.jpg'>
</div>
We provide a ControlNet model trained on top of Arc2Face for pose control. We use [EMOCA](https://github.com/radekd91/emoca) for 3D pose extraction. To run our demo, follow the steps below:
### 1) Pull EMOCA
```bash
git submodule update --init external/emoca
```
### 2) Installation
This is the trickiest part. You will need PyTorch3D to run EMOCA. As its installation may cause conflicts, we suggest following the process below:
1) Create a new environment and start by installing PyTorch3D with GPU support first (follow the official [instructions](https://github.com/facebookresearch/pytorch3d/blob/main/INSTALL.md)).
2) Add Arc2Face + EMOCA requirements with:
```bash
pip install -r requirements_controlnet.txt
```
3) Install EMOCA code:
```bash
pip install -e external/emoca
```
4) Finally, you need to download the EMOCA/FLAME assets. Run the following and follow the instructions in the terminal:
```bash
cd external/emoca/gdl_apps/EMOCA/demos
bash download_assets.sh
cd ../../../../..
```
### 3) Start a local gradio demo
You can start a local ControlNet demo by running:
```bash
python gradio_demo/app_controlnet.py
```
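If you prefer to use the pose ControlNet programmatically instead of through the demo, a minimal loading sketch with diffusers could look as follows. It reuses `base_model`, `encoder`, `unet` and `id_emb` from the Arc2Face example above; the pose conditioning image must be rendered with EMOCA as done in `gradio_demo/app_controlnet.py`, so `pose_image.png` below is only a placeholder:
```python
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from PIL import Image
import torch

# Load the Arc2Face pose ControlNet downloaded to ./models/controlnet
controlnet = ControlNetModel.from_pretrained(
    'models', subfolder="controlnet", torch_dtype=torch.float16
)

pipeline = StableDiffusionControlNetPipeline.from_pretrained(
    base_model,
    text_encoder=encoder,
    unet=unet,
    controlnet=controlnet,
    torch_dtype=torch.float16,
    safety_checker=None
).to('cuda')

# Placeholder: an EMOCA-rendered pose image (see the demo script for how it is produced)
cond_image = Image.open('pose_image.png')

images = pipeline(
    prompt_embeds=id_emb,
    image=cond_image,
    num_inference_steps=25,
    guidance_scale=3.0,
).images
```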
## Limitations and Bias
- Only one person per image can be generated.
- Poses are constrained to the frontal hemisphere, similar to FFHQ images.
- The model may reflect the biases of the training data or the ID encoder.
## Test Data
The test images used for comparisons in the paper (Synth-500, AgeDB) are available [here](https://drive.google.com/drive/folders/1exnvCECmqWcqNIFCck2EQD-hkE42Ayjc?usp=sharing). Please use them only for evaluation purposes and make sure to cite the corresponding [sources](https://ibug.doc.ic.ac.uk/resources/agedb/) when using them.
## Community Resources
### Replicate Demo
- [Demo link](https://replicate.com/camenduru/arc2face) by [@camenduru](https://github.com/camenduru).
### ComfyUI
- [caleboleary/ComfyUI-Arc2Face](https://github.com/caleboleary/ComfyUI-Arc2Face) by [@caleboleary](https://github.com/caleboleary).
### Pinokio
- Pinokio [implementation](https://pinokio.computer/item?uri=https://github.com/cocktailpeanutlabs/arc2face) by [@cocktailpeanut](https://github.com/cocktailpeanut) (runs locally on Windows, macOS, and Linux).
## Acknowledgements
- Thanks to the creators of Stable Diffusion and the HuggingFace [diffusers](https://github.com/huggingface/diffusers) team for the awesome work ❤️.
- Thanks to the WebFace42M creators for providing such a million-scale facial dataset ❤️.
- Thanks to the HuggingFace team for their generous support through the community GPU grant for our demo ❤️.
- We also acknowledge the invaluable support of the HPC resources provided by the Erlangen National High Performance Computing Center (NHR@FAU) of the Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU), which made the training of Arc2Face possible.
## Citation
If you find Arc2Face useful for your research, please consider citing us:
```bibtex
@inproceedings{paraperas2024arc2face,
title={Arc2Face: A Foundation Model for ID-Consistent Human Faces},
author={Paraperas Papantoniou, Foivos and Lattas, Alexandros and Moschoglou, Stylianos and Deng, Jiankang and Kainz, Bernhard and Zafeiriou, Stefanos},
booktitle={Proceedings of the European Conference on Computer Vision (ECCV)},
year={2024}
}
```
Additionally, if you use the Expression Adapter, please also cite the extension:
```bibtex
@inproceedings{paraperas2025arc2face_exp,
title={ID-Consistent, Precise Expression Generation with Blendshape-Guided Diffusion},
author={Paraperas Papantoniou, Foivos and Zafeiriou, Stefanos},
booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) Workshops},
year={2025}
}
``` |