Update model card for Arc2Face Expression Adapter extension and add pipeline tag (#2) · opened by nielsr (HF Staff)

README.md CHANGED
@@ -1,38 +1,28 @@
 ---
-license: mit
 language:
 - en
 library_name: diffusers
 ---

-# Arc2Face

 <div align="center">

-[**Project Page**](https://arc2face.github.io/) **|** [**Paper (ArXiv)**](https://arxiv.org/abs/2403.11641) **|** [**Code**](https://github.com/foivospar/Arc2Face) **|** [🤗 **Gradio demo**](https://huggingface.co/spaces/FoivosPar/Arc2Face)
-
-

 </div>

 ## Introduction

-Arc2Face is an ID-conditioned face model
-It is trained on a restored version of the WebFace42M face recognition database, and is further fine-tuned on FFHQ and CelebA-HQ.
-
-<div align="center">
-<img src='assets/samples_short.jpg'>
-</div>

 ## Model Details

-
-
-
-
-both of which are fine-tuned from [runwayml/stable-diffusion-v1-5](https://huggingface.co/runwayml/stable-diffusion-v1-5).
-The encoder is tailored for projecting ID-embeddings to the CLIP latent space.
-Arc2Face adapts the pre-trained backbone to the task of ID-to-face generation, conditioned solely on ID vectors.

 ## ControlNet (pose)

@@ -42,9 +32,18 @@ We also provide a ControlNet model trained on top of Arc2Face for pose control.
 <img src='assets/controlnet_short.jpg'>
 </div>

-##

-The models can be downloaded directly from this repository or using python:
 ```python
 from huggingface_hub import hf_hub_download

@@ -56,18 +55,204 @@ hf_hub_download(repo_id="FoivosPar/Arc2Face", filename="controlnet/config.json",
 hf_hub_download(repo_id="FoivosPar/Arc2Face", filename="controlnet/diffusion_pytorch_model.safetensors", local_dir="./models")
 ```

-
 ## Limitations and Bias

-
-
-

-##


-

 ```bibtex
 @inproceedings{paraperas2024arc2face,

@@ -76,4 +261,14 @@ Please check our [GitHub repository](https://github.com/foivospar/Arc2Face) for
 booktitle={Proceedings of the European Conference on Computer Vision (ECCV)},
 year={2024}
 }
 ```

Updated README.md:

---
language:
- en
library_name: diffusers
license: mit
pipeline_tag: image-to-image
---

# ID-Consistent, Precise Expression Generation with Blendshape-Guided Diffusion (Arc2Face Extension)

<div align="center">

[**Project Page**](https://arc2face.github.io/) **|** [**Expression Adapter Paper (Hugging Face)**](https://huggingface.co/papers/2510.04706) **|** [**Original Arc2Face Paper (ArXiv)**](https://arxiv.org/abs/2403.11641) **|** [**Code**](https://github.com/foivospar/Arc2Face) **|** [🤗 **Gradio demo**](https://huggingface.co/spaces/FoivosPar/Arc2Face)

</div>

## Introduction

This repository hosts the **Arc2Face** model, extended with **ID-Consistent, Precise Expression Generation with Blendshape-Guided Diffusion**. The original Arc2Face is an ID-conditioned face model that generates diverse, ID-consistent photos of a person given only their ArcFace ID-embedding. This extension enhances Arc2Face with a fine-grained Expression Adapter, enabling the generation of any subject under any particular facial expression. It adopts a compositional design featuring an expression cross-attention module guided by FLAME blendshape parameters for explicit control. Trained on a diverse mixture of image and video data rich in expressive variation, the adapter generalizes beyond basic emotions to subtle micro-expressions and expressive transitions. Additionally, a pluggable Reference Adapter enables expression editing in real images by transferring the appearance from a reference frame during synthesis.

## Model Details

Arc2Face consists of 2 core components:

- **Encoder**: a fine-tuned CLIP ViT-L/14 model, tailored for projecting ID-embeddings to the CLIP latent space.
- **Arc2Face UNet**: a fine-tuned UNet, adapted from [runwayml/stable-diffusion-v1-5](https://huggingface.co/runwayml/stable-diffusion-v1-5) for ID-to-face generation, conditioned solely on ID vectors.

## ControlNet (pose)

We also provide a ControlNet model trained on top of Arc2Face for pose control.

<div align="center">
<img src='assets/controlnet_short.jpg'>
</div>

## Arc2Face + Expression Adapter

Our extension ["ID-Consistent, Precise Expression Generation with Blendshape-Guided Diffusion"](https://huggingface.co/papers/2510.04706) combines Arc2Face with a custom IP-Adapter designed for generating ID-consistent images with precise expression control based on FLAME blendshape parameters. We also provide an optional Reference Adapter which can be used to condition the output directly on the input image, i.e. preserving the subject's appearance and background (to an extent). You can find more details in the report.

<div align="center">
<img src='https://github.com/foivospar/Arc2Face/raw/main/assets/arc2face_exp.jpg'>
</div>
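
To make the compositional design above concrete, here is a purely illustrative, self-contained sketch of an IP-Adapter-style decoupled cross-attention block, in which UNet features attend to the ID/text tokens and, separately, to tokens projected from a FLAME expression-blendshape vector. All names, dimensions and the token count are hypothetical; this is not the repository's actual implementation (see the Expression Adapter usage section below for how to run the real models):

```python
import torch
import torch.nn as nn

class ExpressionCrossAttention(nn.Module):
    """Toy decoupled cross-attention: ID attention plus scaled expression attention."""
    def __init__(self, hidden_dim=768, exp_dim=50, num_exp_tokens=4, scale=1.0):
        super().__init__()
        # Project the FLAME expression blendshapes (e.g. extracted with SMIRK) to a few tokens.
        self.exp_proj = nn.Linear(exp_dim, num_exp_tokens * hidden_dim)
        self.num_exp_tokens = num_exp_tokens
        self.attn_id = nn.MultiheadAttention(hidden_dim, num_heads=8, batch_first=True)
        self.attn_exp = nn.MultiheadAttention(hidden_dim, num_heads=8, batch_first=True)
        self.scale = scale

    def forward(self, hidden_states, id_tokens, exp_params):
        # hidden_states: (B, N, C) UNet features; id_tokens: (B, M, C); exp_params: (B, exp_dim)
        b = exp_params.shape[0]
        exp_tokens = self.exp_proj(exp_params).view(b, self.num_exp_tokens, -1)
        out_id, _ = self.attn_id(hidden_states, id_tokens, id_tokens)
        out_exp, _ = self.attn_exp(hidden_states, exp_tokens, exp_tokens)
        return out_id + self.scale * out_exp  # compose ID and expression conditioning

block = ExpressionCrossAttention()
feats = torch.randn(1, 64, 768)   # UNet spatial tokens
id_tok = torch.randn(1, 77, 768)  # projected ID embedding tokens
exp = torch.randn(1, 50)          # FLAME expression blendshapes
print(block(feats, id_tok, exp).shape)  # torch.Size([1, 64, 768])
```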

## Download Core Models (Arc2Face & ControlNet)

The core Arc2Face and ControlNet models can be downloaded directly from this repository or using python:

```python
from huggingface_hub import hf_hub_download

hf_hub_download(repo_id="FoivosPar/Arc2Face", filename="controlnet/config.json", local_dir="./models")
hf_hub_download(repo_id="FoivosPar/Arc2Face", filename="controlnet/diffusion_pytorch_model.safetensors", local_dir="./models")
```
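
The Arc2Face UNet and encoder weights live in the `arc2face/` and `encoder/` folders of this repository and can be fetched the same way. The filenames below are assumptions based on the folder structure shown later, so double-check them against the repository's file listing:

```python
from huggingface_hub import hf_hub_download

# Assumed filenames for the UNet and encoder folders (verify against the repo file list).
hf_hub_download(repo_id="FoivosPar/Arc2Face", filename="arc2face/config.json", local_dir="./models")
hf_hub_download(repo_id="FoivosPar/Arc2Face", filename="arc2face/diffusion_pytorch_model.safetensors", local_dir="./models")
hf_hub_download(repo_id="FoivosPar/Arc2Face", filename="encoder/config.json", local_dir="./models")
hf_hub_download(repo_id="FoivosPar/Arc2Face", filename="encoder/pytorch_model.bin", local_dir="./models")
```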

## Download Expression Adapter Models

Download the Expression and Reference Adapters:

```python
from huggingface_hub import hf_hub_download

hf_hub_download(repo_id="FoivosPar/Arc2Face", filename="exp_adapter/exp_adapter.bin", local_dir="./models")
hf_hub_download(repo_id="FoivosPar/Arc2Face", filename="ref_adapter/pytorch_lora_weights.safetensors", local_dir="./models")
```
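
If you prefer to grab everything in one go, `snapshot_download` mirrors the whole repository locally (all subfolders, including the core models above). Note that `arcface.onnx` still needs to end up under `models/antelopev2`, as described in the next section:

```python
from huggingface_hub import snapshot_download

# Downloads every file in the repo (arc2face/, encoder/, controlnet/, exp_adapter/, ref_adapter/, ...).
snapshot_download(repo_id="FoivosPar/Arc2Face", local_dir="./models")
```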

## Download Third-Party Models

1) For face detection and ID-embedding extraction, manually download the [antelopev2](https://github.com/deepinsight/insightface/tree/master/python-package#model-zoo) package ([direct link](https://drive.google.com/file/d/18wEUfMNohBJ4K3Ly5wpTejPfDzp-8fI8/view)) and place the checkpoints under `models/antelopev2`.
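
If you prefer the command line, the same archive can be fetched with `gdown` using the file id from the direct link above (this assumes the archive unpacks into an `antelopev2/` folder of `.onnx` checkpoints):

```bash
pip install gdown
gdown --id 18wEUfMNohBJ4K3Ly5wpTejPfDzp-8fI8 -O models/antelopev2.zip
unzip models/antelopev2.zip -d models/
```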

2) We use an ArcFace recognition model trained on WebFace42M. Download `arcface.onnx` from [HuggingFace](https://huggingface.co/FoivosPar/Arc2Face) and put it in `models/antelopev2`, or using python:
```python
hf_hub_download(repo_id="FoivosPar/Arc2Face", filename="arcface.onnx", local_dir="./models/antelopev2")
```
3) Then **delete** `glintr100.onnx` (the default backbone from insightface).

The `models` folder structure should finally be:
```
  . ── models ──┌── antelopev2
                ├── arc2face
                └── encoder
```

4) For the Expression Adapter, we use the [SMIRK](https://github.com/georgeretsi/smirk) method to extract FLAME expression parameters from the target image. Download the required checkpoints **face_landmarker.task** and **SMIRK_em1.pt** and put them under `models/smirk`:
```bash
mkdir models/smirk
wget https://storage.googleapis.com/mediapipe-models/face_landmarker/face_landmarker/float16/latest/face_landmarker.task --directory-prefix models/smirk
pip install gdown
gdown --id 1T65uEd9dVLHgVw5KiUYL66NUee-MCzoE -O models/smirk/
```

## Sample Usage (Original Arc2Face)

Load the pipeline using [diffusers](https://huggingface.co/docs/diffusers/index):

```python
from diffusers import (
    StableDiffusionPipeline,
    UNet2DConditionModel,
    DPMSolverMultistepScheduler,
)

from arc2face import CLIPTextModelWrapper, project_face_embs

import torch
from insightface.app import FaceAnalysis
from PIL import Image
import numpy as np

# Arc2Face is built upon SD1.5
# The repo below can be used instead of the now deprecated 'runwayml/stable-diffusion-v1-5'
base_model = 'stable-diffusion-v1-5/stable-diffusion-v1-5'

encoder = CLIPTextModelWrapper.from_pretrained(
    'models', subfolder="encoder", torch_dtype=torch.float16
)

unet = UNet2DConditionModel.from_pretrained(
    'models', subfolder="arc2face", torch_dtype=torch.float16
)

pipeline = StableDiffusionPipeline.from_pretrained(
    base_model,
    text_encoder=encoder,
    unet=unet,
    torch_dtype=torch.float16,
    safety_checker=None
)
```

You can use any SD-compatible schedulers and steps, just like with Stable Diffusion. By default, we use `DPMSolverMultistepScheduler` with 25 steps, which produces very good results in just a few seconds.

```python
pipeline.scheduler = DPMSolverMultistepScheduler.from_config(pipeline.scheduler.config)
pipeline = pipeline.to('cuda')
```

Pick an image and extract the ID-embedding:

```python
app = FaceAnalysis(name='antelopev2', root='./', providers=['CUDAExecutionProvider', 'CPUExecutionProvider'])
app.prepare(ctx_id=0, det_size=(640, 640))

img = np.array(Image.open('assets/examples/joacquin.png'))[:,:,::-1]

faces = app.get(img)
faces = sorted(faces, key=lambda x:(x['bbox'][2]-x['bbox'][0])*(x['bbox'][3]-x['bbox'][1]))[-1]  # select largest face (if more than one detected)
id_emb = torch.tensor(faces['embedding'], dtype=torch.float16)[None].cuda()
id_emb = id_emb/torch.norm(id_emb, dim=1, keepdim=True)   # normalize embedding
id_emb = project_face_embs(pipeline, id_emb)              # pass through the encoder
```

<div align="center">
<img src='https://github.com/foivospar/Arc2Face/raw/main/assets/examples/joacquin.png' style='width:25%;'>
</div>

Generate images:

```python
num_images = 4
images = pipeline(prompt_embeds=id_emb, num_inference_steps=25, guidance_scale=3.0, num_images_per_prompt=num_images).images
```

<div align="center">
<img src='https://github.com/foivospar/Arc2Face/raw/main/assets/samples.jpg'>
</div>
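
For reproducible results, you can additionally pass a seeded `generator` to the pipeline call and save the outputs; a minimal example:

```python
# Optional: fix the seed for reproducible samples, then save them to disk.
generator = torch.Generator(device='cuda').manual_seed(42)
images = pipeline(prompt_embeds=id_emb, num_inference_steps=25, guidance_scale=3.0,
                  num_images_per_prompt=num_images, generator=generator).images
for i, image in enumerate(images):
    image.save(f'arc2face_sample_{i}.png')
```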

## Sample Usage (Expression Adapter)

To run the local Gradio demo for the Expression Adapter, after downloading the necessary models as described above, simply run:

```bash
python gradio_demo/app_exp_adapter.py
```

## LCM-LoRA acceleration

[LCM-LoRA](https://arxiv.org/abs/2311.05556) allows you to reduce the sampling steps to as few as 2-4 for super-fast inference. Just plug in the pre-trained distillation adapter for SD v1.5 and switch to `LCMScheduler`:

```python
from diffusers import LCMScheduler

pipeline.load_lora_weights("latent-consistency/lcm-lora-sdv1-5")
pipeline.scheduler = LCMScheduler.from_config(pipeline.scheduler.config)
```

Then, you can sample with as few as 2 steps (and disable `guidance_scale` by using a value of 1.0, as LCM is very sensitive to it and even small values lead to oversaturation):

```python
images = pipeline(prompt_embeds=id_emb, num_inference_steps=2, guidance_scale=1.0, num_images_per_prompt=num_images).images
```

Note that this technique accelerates sampling in exchange for a slight drop in quality.
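
If you want to switch back to the standard (non-LCM) setup afterwards, the LoRA can simply be removed again:

```python
# Revert to the default scheduler and drop the LCM-LoRA weights.
pipeline.unload_lora_weights()
pipeline.scheduler = DPMSolverMultistepScheduler.from_config(pipeline.scheduler.config)
```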

## Start a local gradio demo

You can start a local demo for inference by running:

```bash
python gradio_demo/app.py
```

## Arc2Face + ControlNet (pose)

<div align="center">
<img src='https://github.com/foivospar/Arc2Face/raw/main/assets/controlnet.jpg'>
</div>

We provide a ControlNet model trained on top of Arc2Face for pose control. We use [EMOCA](https://github.com/radekd91/emoca) for 3D pose extraction. To run our demo, follow the steps below:

### 1) Pull EMOCA

```bash
git submodule update --init external/emoca
```

### 2) Installation

This is the trickiest part: you will need PyTorch3D to run EMOCA. As its installation may cause conflicts, we suggest the following process:

1) Create a new environment and start by installing PyTorch3D with GPU support (follow the official [instructions](https://github.com/facebookresearch/pytorch3d/blob/main/INSTALL.md)).
2) Add the Arc2Face + EMOCA requirements with:
```bash
pip install -r requirements_controlnet.txt
```
3) Install the EMOCA code:
```bash
pip install -e external/emoca
```
4) Finally, download the EMOCA/FLAME assets. Run the following and follow the instructions in the terminal:
```bash
cd external/emoca/gdl_apps/EMOCA/demos
bash download_assets.sh
cd ../../../../..
```

### 3) Start a local gradio demo

You can start a local ControlNet demo by running:

```bash
python gradio_demo/app_controlnet.py
```
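
For reference, the pose ControlNet can also be wired into a standard diffusers pipeline. The sketch below only shows the plumbing and assumes you already have a pose conditioning image (in our demo this is rendered with EMOCA) as well as the `encoder`, `unet` and `id_emb` objects from the sample usage section above; it is not the exact code used by the demo script:

```python
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline, DPMSolverMultistepScheduler
from PIL import Image
import torch

controlnet = ControlNetModel.from_pretrained('models', subfolder="controlnet", torch_dtype=torch.float16)

pipe = StableDiffusionControlNetPipeline.from_pretrained(
    'stable-diffusion-v1-5/stable-diffusion-v1-5',
    controlnet=controlnet,
    text_encoder=encoder,   # Arc2Face encoder loaded earlier
    unet=unet,              # Arc2Face UNet loaded earlier
    torch_dtype=torch.float16,
    safety_checker=None,
).to('cuda')
pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config)

pose_image = Image.open('pose_condition.png')   # hypothetical EMOCA-rendered pose image
images = pipe(prompt_embeds=id_emb, image=pose_image, num_inference_steps=25, guidance_scale=3.0).images
```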

## Limitations and Bias

- Only one person per image can be generated.
- Poses are constrained to the frontal hemisphere, similar to FFHQ images.
- The model may reflect the biases of the training data or the ID encoder.

## Test Data

The test images used for comparisons in the paper (Synth-500, AgeDB) are available [here](https://drive.google.com/drive/folders/1exnvCECmqWcqNIFCck2EQD-hkE42Ayjc?usp=sharing). Please use them only for evaluation purposes and make sure to cite the corresponding [sources](https://ibug.doc.ic.ac.uk/resources/agedb/) when using them.

## Community Resources

### Replicate Demo
- [Demo link](https://replicate.com/camenduru/arc2face) by [@camenduru](https://github.com/camenduru).

### ComfyUI
- [caleboleary/ComfyUI-Arc2Face](https://github.com/caleboleary/ComfyUI-Arc2Face) by [@caleboleary](https://github.com/caleboleary).

### Pinokio
- Pinokio [implementation](https://pinokio.computer/item?uri=https://github.com/cocktailpeanutlabs/arc2face) by [@cocktailpeanut](https://github.com/cocktailpeanut) (runs locally on Windows, Mac, and Linux).

## Acknowledgements

- Thanks to the creators of Stable Diffusion and the HuggingFace [diffusers](https://github.com/huggingface/diffusers) team for the awesome work ❤️.
- Thanks to the WebFace42M creators for providing such a million-scale facial dataset ❤️.
- Thanks to the HuggingFace team for their generous support through the community GPU grant for our demo ❤️.
- We also acknowledge the invaluable support of the HPC resources provided by the Erlangen National High Performance Computing Center (NHR@FAU) of the Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU), which made the training of Arc2Face possible.

## Citation

If you find Arc2Face useful for your research, please consider citing us:

```bibtex
@inproceedings{paraperas2024arc2face,
      title={Arc2Face: A Foundation Model for ID-Consistent Human Faces},
      author={Paraperas Papantoniou, Foivos and Lattas, Alexandros and Moschoglou, Stylianos and Deng, Jiankang and Kainz, Bernhard and Zafeiriou, Stefanos},
      booktitle={Proceedings of the European Conference on Computer Vision (ECCV)},
      year={2024}
}
```

Additionally, if you use the Expression Adapter, please also cite the extension:

```bibtex
@inproceedings{paraperas2025arc2face_exp,
      title={ID-Consistent, Precise Expression Generation with Blendshape-Guided Diffusion},
      author={Paraperas Papantoniou, Foivos and Zafeiriou, Stefanos},
      booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) Workshops},
      year={2025}
}
```
|