---
license: mit
license_link: LICENSE.md
tags:
  - liveportrait
  - face-animation
  - image-to-video
  - portrait
  - safetensors
  - comfyui
  - ffmpega
pipeline_tag: image-to-video
---

# LivePortrait Models (safetensors)

Mirror of the [LivePortrait](https://github.com/KlingAIResearch/LivePortrait) model weights, converted from `.pth` to **safetensors** format for safe, pickle-free loading. Hosted by [Æmotion Studio](https://github.com/AEmotionStudio) for use with [ComfyUI-FFMPEGA](https://github.com/AEmotionStudio/ComfyUI-FFMPEGA).

## What is LivePortrait?

LivePortrait is an AI model for **portrait animation** — it transfers head pose, facial expressions, eye gaze, and lip movements from a driving video onto a source face image, producing high-quality results in real time.

**Paper:** [LivePortrait: Efficient Portrait Animation with Stitching and Retargeting Control](https://arxiv.org/abs/2407.03168)

## Files

| File | Size | Params | Description |
|:-----|-----:|-------:|:------------|
| `appearance_feature_extractor.safetensors` | 3.2 MB | 836K | Extracts 3D appearance features from face crops |
| `motion_extractor.safetensors` | 107.3 MB | 28.1M | Extracts keypoints, head pose, and expressions |
| `spade_generator.safetensors` | 211.5 MB | 55.4M | SPADE decoder — generates output face images |
| `warping_module.safetensors` | 173.7 MB | 45.5M | Warps appearance features via dense motion |
| `stitching_retargeting_module.safetensors` | 0.9 MB | 227K | Stitching + lip/eye retargeting networks |
| **Total** | **496.6 MB** | **130M** | |
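As a quick sanity check, the files in the table above can be loaded directly with the `safetensors` library. The helper below is an illustrative sketch (not part of this repo); it assumes all five files sit in one directory:

```python
from pathlib import Path

# The five checkpoint files listed in the table above.
FILES = [
    "appearance_feature_extractor.safetensors",
    "motion_extractor.safetensors",
    "spade_generator.safetensors",
    "warping_module.safetensors",
    "stitching_retargeting_module.safetensors",
]

def load_liveportrait(model_dir):
    """Return a {model_name: state_dict} mapping for all five checkpoints."""
    from safetensors.torch import load_file  # pip install safetensors torch
    root = Path(model_dir)
    return {f.rsplit(".", 1)[0]: load_file(root / f) for f in FILES}
```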

## ⚠️ Conversion Note

Weights were converted from the official `.pth` checkpoints in [KlingTeam/LivePortrait](https://huggingface.co/KlingTeam/LivePortrait) using:

```python
from safetensors.torch import save_file
import torch

# Base models: the checkpoint is a direct state dict.
# Strip the DDP "module." prefix from every key before saving.
sd = torch.load("model.pth", map_location="cpu")
sd = {k.removeprefix("module."): v for k, v in sd.items()}
save_file(sd, "model.safetensors")

# Stitching module: nested sub-dicts are flattened to prefixed keys:
#   retarget_shoulder → stitching.*
#   retarget_mouth    → retarget_lip.*
#   retarget_eye      → retarget_eye.*
nested = torch.load("stitching_retargeting_module.pth", map_location="cpu")
flat = {}
for src, prefix in [("retarget_shoulder", "stitching"),
                    ("retarget_mouth", "retarget_lip"),
                    ("retarget_eye", "retarget_eye")]:
    for k, v in nested[src].items():
        flat[f"{prefix}.{k.removeprefix('module.')}"] = v
save_file(flat, "stitching_retargeting_module.safetensors")
```

The stitching/retargeting module uses flat prefixed keys in safetensors format. The `module.` DDP prefix is stripped from all keys.

## Usage

These models are automatically downloaded by ComfyUI-FFMPEGA when the **Animate Portrait** skill is used. No manual setup is required if `allow_model_downloads` is enabled.

### Manual Installation

1. Download all `.safetensors` files from this repo
2. Place them in `ComfyUI/models/liveportrait/`
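The download step can also be scripted with `huggingface_hub`. The repo id below is a placeholder — substitute this repository's actual Hugging Face id:

```python
from pathlib import Path

REPO_ID = "AEmotionStudio/LivePortrait-safetensors"  # hypothetical id — replace
DEST = Path("ComfyUI/models/liveportrait")

def fetch_all(repo_id=REPO_ID, dest=DEST):
    """Download every .safetensors file from the repo into the ComfyUI models dir."""
    from huggingface_hub import snapshot_download  # pip install huggingface_hub
    dest.mkdir(parents=True, exist_ok=True)
    return snapshot_download(repo_id, allow_patterns="*.safetensors",
                             local_dir=str(dest))
```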

## Why This Mirror?

- **Pickle-free**: safetensors format eliminates arbitrary code execution risks
- **Supply chain resilience**: first-party mirror ensures availability
- **Faster loading**: safetensors loads faster than pickle-based `.pth`

## License

LivePortrait is released under the **MIT License**; see [LICENSE.md](LICENSE.md) and the [original repository](https://github.com/KlingAIResearch/LivePortrait) for details.

## Credits

- **Original model by**: [Kling AI Research](https://github.com/KlingAIResearch/LivePortrait)
- **Paper**: *LivePortrait: Efficient Portrait Animation with Stitching and Retargeting Control* ([arXiv:2407.03168](https://arxiv.org/abs/2407.03168))
- **Redistributed by**: [Æmotion Studio](https://huggingface.co/AEmotionStudio) for use with [ComfyUI-FFMPEGA](https://github.com/AEmotionStudio/ComfyUI-FFMPEGA)