---
language:
  - en
library_name: diffusers
license: mit
pipeline_tag: image-to-image
---

ID-Consistent, Precise Expression Generation with Blendshape-Guided Diffusion (Arc2Face Extension)

Introduction

This repository hosts the Arc2Face model, extended with ID-Consistent, Precise Expression Generation with Blendshape-Guided Diffusion. Arc2Face is an ID-conditioned face model that generates diverse, ID-consistent photos of a person given only their ArcFace ID-embedding. This extension adds a fine-grained Expression Adapter, enabling the generation of any subject under any facial expression. It adopts a compositional design featuring an expression cross-attention module guided by FLAME blendshape parameters for explicit control. Trained on a diverse mixture of image and video data rich in expressive variation, the adapter generalizes beyond basic emotions to subtle micro-expressions and expressive transitions. Additionally, a pluggable Reference Adapter enables expression editing in real images by transferring the appearance from a reference frame during synthesis.

Model Details

Arc2Face consists of two core components:

  • Encoder: a finetuned CLIP ViT-L/14 model, tailored for projecting ID-embeddings to the CLIP latent space.
  • Arc2Face UNet: a finetuned UNet model, adapted from runwayml/stable-diffusion-v1-5 for ID-to-face generation, conditioned solely on ID vectors.

ControlNet (pose)

We also provide a ControlNet model trained on top of Arc2Face for pose control.

Arc2Face + Expression Adapter

Our extension "ID-Consistent, Precise Expression Generation with Blendshape-Guided Diffusion" combines Arc2Face with a custom IP-Adapter designed to generate ID-consistent images with precise expression control based on FLAME blendshape parameters. We also provide an optional Reference Adapter, which conditions the output directly on the input image, i.e., preserving the subject's appearance and background (to an extent). You can find more details in the report.
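The exact adapter implementation lives in the project code; the following is only a conceptual sketch of the compositional design described above, assuming an IP-Adapter-style decoupled cross-attention in which FLAME blendshape parameters are projected to a few extra tokens and attended to alongside the ID conditioning. All module and variable names here (ExpressionCrossAttention, exp_proj, etc.) are illustrative, not the repository's API.

import torch
import torch.nn as nn

class ExpressionCrossAttention(nn.Module):
    """Conceptual sketch: a second cross-attention branch for expression tokens."""
    def __init__(self, hidden_dim=768, num_exp_params=50, num_exp_tokens=4):
        super().__init__()
        # Project FLAME expression (blendshape) parameters to a few conditioning tokens
        self.exp_proj = nn.Linear(num_exp_params, num_exp_tokens * hidden_dim)
        self.num_exp_tokens = num_exp_tokens
        self.hidden_dim = hidden_dim
        self.attn_id = nn.MultiheadAttention(hidden_dim, 8, batch_first=True)
        self.attn_exp = nn.MultiheadAttention(hidden_dim, 8, batch_first=True)

    def forward(self, hidden_states, id_tokens, exp_params, exp_scale=1.0):
        # ID branch: attend to the ID conditioning (projected ArcFace embedding)
        id_out, _ = self.attn_id(hidden_states, id_tokens, id_tokens)
        # Expression branch: attend to tokens derived from FLAME blendshape parameters
        exp_tokens = self.exp_proj(exp_params).view(-1, self.num_exp_tokens, self.hidden_dim)
        exp_out, _ = self.attn_exp(hidden_states, exp_tokens, exp_tokens)
        # Compose the two conditioning signals
        return hidden_states + id_out + exp_scale * exp_out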

Download Core Models (Arc2Face & ControlNet)

The core Arc2Face and ControlNet models can be downloaded directly from this repository or using Python:

from huggingface_hub import hf_hub_download

hf_hub_download(repo_id="FoivosPar/Arc2Face", filename="arc2face/config.json", local_dir="./models")
hf_hub_download(repo_id="FoivosPar/Arc2Face", filename="arc2face/diffusion_pytorch_model.safetensors", local_dir="./models")
hf_hub_download(repo_id="FoivosPar/Arc2Face", filename="encoder/config.json", local_dir="./models")
hf_hub_download(repo_id="FoivosPar/Arc2Face", filename="encoder/pytorch_model.bin", local_dir="./models")
hf_hub_download(repo_id="FoivosPar/Arc2Face", filename="controlnet/config.json", local_dir="./models")
hf_hub_download(repo_id="FoivosPar/Arc2Face", filename="controlnet/diffusion_pytorch_model.safetensors", local_dir="./models")
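Alternatively, the same subfolders can be fetched in a single call (a hedged sketch using huggingface_hub's snapshot_download; the paths match the downloads above):

from huggingface_hub import snapshot_download

# Download the arc2face, encoder, and controlnet subfolders in one call
snapshot_download(
    repo_id="FoivosPar/Arc2Face",
    allow_patterns=["arc2face/*", "encoder/*", "controlnet/*"],
    local_dir="./models",
)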

Download Expression Adapter Models

Download the Expression and Reference Adapters:

from huggingface_hub import hf_hub_download

hf_hub_download(repo_id="FoivosPar/Arc2Face", filename="exp_adapter/exp_adapter.bin", local_dir="./models")
hf_hub_download(repo_id="FoivosPar/Arc2Face", filename="ref_adapter/pytorch_lora_weights.safetensors", local_dir="./models")

Download Third-Party Models

  1. For face detection and ID-embedding extraction, manually download the antelopev2 package (direct link) and place the checkpoints under models/antelopev2.
  2. We use an ArcFace recognition model trained on WebFace42M. Download arcface.onnx from HuggingFace and place it in models/antelopev2, either manually or using Python:
    hf_hub_download(repo_id="FoivosPar/Arc2Face", filename="arcface.onnx", local_dir="./models/antelopev2")
    
  3. Then delete glintr100.onnx (the default backbone from insightface).

The models folder structure should finally be:

  models
  ├── antelopev2
  ├── arc2face
  └── encoder
  4. For the Expression Adapter, we use the SMIRK method to extract FLAME expression parameters from the target image. Download the required checkpoints face_landmarker.task and SMIRK_em1.pt and put them under models/smirk (a Python alternative is sketched after these commands):
    mkdir models/smirk
    wget https://storage.googleapis.com/mediapipe-models/face_landmarker/face_landmarker/float16/latest/face_landmarker.task --directory-prefix models/smirk
    pip install gdown
    gdown --id 1T65uEd9dVLHgVw5KiUYL66NUee-MCzoE -O models/smirk/
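If you prefer to stay in Python, the same files can be fetched roughly as follows (a hedged sketch; the Google Drive file id and the mediapipe URL are taken unchanged from the commands above):

import os
import urllib.request
import gdown  # pip install gdown

os.makedirs("models/smirk", exist_ok=True)

# MediaPipe face landmarker used for preprocessing
urllib.request.urlretrieve(
    "https://storage.googleapis.com/mediapipe-models/face_landmarker/face_landmarker/float16/latest/face_landmarker.task",
    "models/smirk/face_landmarker.task",
)

# SMIRK expression encoder checkpoint (same Google Drive id as the gdown command above)
gdown.download(id="1T65uEd9dVLHgVw5KiUYL66NUee-MCzoE", output="models/smirk/SMIRK_em1.pt")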
    

Sample Usage (Original Arc2Face)

Load pipeline using diffusers:

from diffusers import (
    StableDiffusionPipeline,
    UNet2DConditionModel,
    DPMSolverMultistepScheduler,
)

from arc2face import CLIPTextModelWrapper, project_face_embs

import torch
from insightface.app import FaceAnalysis
from PIL import Image
import numpy as np

# Arc2Face is built upon SD1.5
# The repo below can be used instead of the now deprecated 'runwayml/stable-diffusion-v1-5'
base_model = 'stable-diffusion-v1-5/stable-diffusion-v1-5'

encoder = CLIPTextModelWrapper.from_pretrained(
    'models', subfolder="encoder", torch_dtype=torch.float16
)

unet = UNet2DConditionModel.from_pretrained(
    'models', subfolder="arc2face", torch_dtype=torch.float16
)

pipeline = StableDiffusionPipeline.from_pretrained(
    base_model,
    text_encoder=encoder,
    unet=unet,
    torch_dtype=torch.float16,
    safety_checker=None
)

You can use any SD-compatible scheduler and number of steps, just as with Stable Diffusion. By default, we use DPMSolverMultistepScheduler with 25 steps, which produces very good results in just a few seconds.

pipeline.scheduler = DPMSolverMultistepScheduler.from_config(pipeline.scheduler.config)
pipeline = pipeline.to('cuda')

Pick an image and extract the ID-embedding:

app = FaceAnalysis(name='antelopev2', root='./', providers=['CUDAExecutionProvider', 'CPUExecutionProvider'])
app.prepare(ctx_id=0, det_size=(640, 640))

img = np.array(Image.open('assets/examples/joacquin.png'))[:,:,::-1]  # RGB -> BGR (insightface expects BGR)

faces = app.get(img)
faces = sorted(faces, key=lambda x:(x['bbox'][2]-x['bbox'][0])*(x['bbox'][3]-x['bbox'][1]))[-1]  # select largest face (if more than one detected)
id_emb = torch.tensor(faces['embedding'], dtype=torch.float16)[None].cuda()
id_emb = id_emb/torch.norm(id_emb, dim=1, keepdim=True)   # normalize embedding
id_emb = project_face_embs(pipeline, id_emb)    # pass through the encoder

Generate images:

num_images = 4
images = pipeline(prompt_embeds=id_emb, num_inference_steps=25, guidance_scale=3.0, num_images_per_prompt=num_images).images
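The pipeline returns standard PIL images, so you can save or inspect them directly (a minimal example; the output filenames are arbitrary):

for i, image in enumerate(images):
    image.save(f"arc2face_sample_{i}.png")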

Sample Usage (Expression Adapter)

To run the local Gradio demo for the Expression Adapter, after downloading the necessary models as described above, simply run:

python gradio_demo/app_exp_adapter.py

LCM-LoRA acceleration

LCM-LoRA allows you to reduce the sampling steps to as few as 2-4 for super-fast inference. Just plug in the pre-trained distillation adapter for SD v1.5 and switch to LCMScheduler:

from diffusers import LCMScheduler

pipeline.load_lora_weights("latent-consistency/lcm-lora-sdv1-5")
pipeline.scheduler = LCMScheduler.from_config(pipeline.scheduler.config)

Then, you can sample with as few as 2 steps (and disable classifier-free guidance by setting guidance_scale to 1.0, as LCM is very sensitive to it and even small values lead to oversaturation):

images = pipeline(prompt_embeds=id_emb, num_inference_steps=2, guidance_scale=1.0, num_images_per_prompt=num_images).images

Note that this technique accelerates sampling in exchange for a slight drop in quality.

Start a local gradio demo

You can start a local demo for inference by running:

python gradio_demo/app.py

Arc2Face + ControlNet (pose)

We provide a ControlNet model trained on top of Arc2Face for pose control. We use EMOCA for 3D pose extraction. To run our demo, follow the steps below:

1) Pull EMOCA

git submodule update --init external/emoca

2) Installation

This is the trickiest part. You will need PyTorch3D to run EMOCA. As its installation may cause conflicts, we suggest following the process below:

  1. Create a new environment and start by installing PyTorch3D with GPU support first (follow the official instructions).
  2. Add the Arc2Face + EMOCA requirements with:
pip install -r requirements_controlnet.txt
  3. Install the EMOCA code:
pip install -e external/emoca
  4. Finally, download the EMOCA/FLAME assets. Run the following and follow the instructions in the terminal:
cd external/emoca/gdl_apps/EMOCA/demos 
bash download_assets.sh
cd ../../../../..

3) Start a local gradio demo

You can start a local ControlNet demo by running:

python gradio_demo/app_controlnet.py
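Beyond the demo, loading the pose ControlNet programmatically should look roughly like the following (a hedged sketch using the standard diffusers ControlNet API together with the Arc2Face encoder, UNet, and id_emb from the sample usage above; pose_image is assumed to be the EMOCA-derived conditioning image, as used in the demo):

from diffusers import StableDiffusionControlNetPipeline, ControlNetModel
import torch

# Load the pose ControlNet downloaded to models/controlnet
controlnet = ControlNetModel.from_pretrained(
    'models', subfolder="controlnet", torch_dtype=torch.float16
)

pipeline = StableDiffusionControlNetPipeline.from_pretrained(
    base_model,              # same SD1.5 base as above
    text_encoder=encoder,    # Arc2Face encoder loaded above
    unet=unet,               # Arc2Face UNet loaded above
    controlnet=controlnet,
    torch_dtype=torch.float16,
    safety_checker=None
).to('cuda')

# pose_image: assumed EMOCA-derived pose conditioning image (PIL Image)
images = pipeline(
    prompt_embeds=id_emb,
    image=pose_image,
    num_inference_steps=25,
    guidance_scale=3.0,
).images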

Limitations and Bias

  • Only one person per image can be generated.
  • Poses are constrained to the frontal hemisphere, similar to FFHQ images.
  • The model may reflect the biases of the training data or the ID encoder.

Test Data

The test images used for comparisons in the paper (Synth-500, AgeDB) are available here. Please use them only for evaluation purposes and make sure to cite the corresponding sources when using them.

Community Resources

  • Replicate Demo
  • ComfyUI
  • Pinokio

Acknowledgements

  • Thanks to the creators of Stable Diffusion and the HuggingFace diffusers team for the awesome work ❤️.
  • Thanks to the WebFace42M creators for providing such a million-scale facial dataset ❤️.
  • Thanks to the HuggingFace team for their generous support through the community GPU grant for our demo ❤️.
  • We also acknowledge the invaluable support of the HPC resources provided by the Erlangen National High Performance Computing Center (NHR@FAU) of the Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU), which made the training of Arc2Face possible.

Citation

If you find Arc2Face useful for your research, please consider citing us:

@inproceedings{paraperas2024arc2face,
      title={Arc2Face: A Foundation Model for ID-Consistent Human Faces}, 
      author={Paraperas Papantoniou, Foivos and Lattas, Alexandros and Moschoglou, Stylianos and Deng, Jiankang and Kainz, Bernhard and Zafeiriou, Stefanos},
      booktitle={Proceedings of the European Conference on Computer Vision (ECCV)},
      year={2024}
}

Additionally, if you use the Expression Adapter, please also cite the extension:

@inproceedings{paraperas2025arc2face_exp,
      title={ID-Consistent, Precise Expression Generation with Blendshape-Guided Diffusion}, 
      author={Paraperas Papantoniou, Foivos and Zafeiriou, Stefanos},
      booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) Workshops},
      year={2025}
}