---
license: mit
license_link: LICENSE.md
tags:
- liveportrait
- face-animation
- image-to-video
- portrait
- safetensors
- comfyui
- ffmpega
pipeline_tag: image-to-video
---

# LivePortrait Models (safetensors)

Mirror of the [LivePortrait](https://github.com/KlingAIResearch/LivePortrait) model weights, converted from `.pth` to **safetensors** format for safe, pickle-free loading. Hosted by [Æmotion Studio](https://github.com/AEmotionStudio) for use with [ComfyUI-FFMPEGA](https://github.com/AEmotionStudio/ComfyUI-FFMPEGA).

## What is LivePortrait?

LivePortrait is an AI model for **portrait animation**: it transfers head pose, facial expressions, eye gaze, and lip movements from a driving video onto a source face image, producing high-quality results in real time.

**Paper:** [LivePortrait: Efficient Portrait Animation with Stitching and Retargeting Control](https://arxiv.org/abs/2407.03168)

## Files

| | File | Size | Params | Description | |
| |:-----|-----:|-------:|:------------| |
| | `appearance_feature_extractor.safetensors` | 3.2 MB | 836K | Extracts 3D appearance features from face crops | |
| | `motion_extractor.safetensors` | 107.3 MB | 28.1M | Extracts keypoints, head pose, and expressions | |
| | `spade_generator.safetensors` | 211.5 MB | 55.4M | SPADE decoder — generates output face images | |
| | `warping_module.safetensors` | 173.7 MB | 45.5M | Warps appearance features via dense motion | |
| | `stitching_retargeting_module.safetensors` | 0.9 MB | 227K | Stitching + lip/eye retargeting networks | |
| | **Total** | **496.6 MB** | **130M** | | |
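
As a quick sanity check, the per-file figures above do add up to the stated totals (a minimal sketch; the numbers are copied straight from the table):

```python
# Per-file sizes (MB) and parameter counts (millions), copied from the table
sizes_mb = [3.2, 107.3, 211.5, 173.7, 0.9]
params_m = [0.836, 28.1, 55.4, 45.5, 0.227]

total_size_mb = round(sum(sizes_mb), 1)  # 496.6 MB
total_params_m = round(sum(params_m))    # ~130M parameters
```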

## ⚠️ Conversion Note

Weights were converted from the official `.pth` checkpoints in [KlingTeam/LivePortrait](https://huggingface.co/KlingTeam/LivePortrait) using:

```python
from safetensors.torch import save_file
import torch

# For base models (direct state dict)
sd = torch.load("model.pth", map_location="cpu")
# Strip the "module." DDP prefix if present
sd = {k.removeprefix("module."): v for k, v in sd.items()}
save_file(sd, "model.safetensors")

# For the stitching module, nested sub-dicts are flattened
# into flat prefixed keys:
#   retarget_shoulder → stitching.*
#   retarget_mouth    → retarget_lip.*
#   retarget_eye      → retarget_eye.*
```

The stitching/retargeting module uses flat prefixed keys in safetensors format. The `module.` DDP prefix is stripped from all keys.

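The nested-to-flat conversion can be sketched in plain Python. `flatten_checkpoint` and its argument are hypothetical names, but the prefix mapping follows the note above:

```python
def flatten_checkpoint(nested: dict) -> dict:
    """Flatten nested sub-dicts into flat dot-prefixed keys,
    renaming sub-modules and stripping any 'module.' DDP prefix."""
    # Sub-dict name in the .pth checkpoint -> key prefix in safetensors
    RENAME = {
        "retarget_shoulder": "stitching",
        "retarget_mouth": "retarget_lip",
        "retarget_eye": "retarget_eye",
    }
    flat = {}
    for sub_name, sub_sd in nested.items():
        prefix = RENAME.get(sub_name, sub_name)
        for key, tensor in sub_sd.items():
            key = key.removeprefix("module.")  # drop DDP wrapper prefix
            flat[f"{prefix}.{key}"] = tensor
    return flat
```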
## Usage

These models are automatically downloaded by ComfyUI-FFMPEGA when the **Animate Portrait** skill is used. No manual setup is required if `allow_model_downloads` is enabled.

### Manual Installation

1. Download all `.safetensors` files from this repo
2. Place them in `ComfyUI/models/liveportrait/`
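
After a manual install, a small check that every expected file is in place can save a failed run later. A minimal sketch (the `missing_models` helper is illustrative; file names are taken from the Files table above):

```python
from pathlib import Path

# Expected file names, as listed in the Files table
EXPECTED_FILES = [
    "appearance_feature_extractor.safetensors",
    "motion_extractor.safetensors",
    "spade_generator.safetensors",
    "warping_module.safetensors",
    "stitching_retargeting_module.safetensors",
]

def missing_models(model_dir: str) -> list[str]:
    """Return expected LivePortrait files that are absent from model_dir."""
    root = Path(model_dir)
    return [name for name in EXPECTED_FILES if not (root / name).is_file()]

# Example: missing_models("ComfyUI/models/liveportrait")
```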

## Why This Mirror?

- **Pickle-free**: safetensors format eliminates arbitrary code execution risks
- **Supply chain resilience**: first-party mirror ensures availability
- **Faster loading**: safetensors loads faster than pickle-based `.pth`

## License

LivePortrait is released under the **MIT License**; see [LICENSE.md](LICENSE.md) and the [original repository](https://github.com/KlingAIResearch/LivePortrait) for details.

## Credits

- **Original model by**: [Kling AI Research](https://github.com/KlingAIResearch/LivePortrait)
- **Paper**: *LivePortrait: Efficient Portrait Animation with Stitching and Retargeting Control* ([arXiv:2407.03168](https://arxiv.org/abs/2407.03168))
- **Redistributed by**: [Æmotion Studio](https://huggingface.co/AEmotionStudio) for use with [ComfyUI-FFMPEGA](https://github.com/AEmotionStudio/ComfyUI-FFMPEGA)