---
title: Liveportrait_video
emoji: π»
colorFrom: blue
colorTo: indigo
sdk: gradio
sdk_version: 4.38.1
app_file: app.py
pinned: false
license: apache-2.0
---

Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference

<h1 align="center">This is a modification of LivePortrait: Efficient Portrait Animation with Stitching and Retargeting Control that allows a video as the source</h1>

<br>
<div align="center">
<!-- <a href='LICENSE'><img src='https://img.shields.io/badge/license-MIT-yellow'></a> -->
<a href='https://arxiv.org/pdf/2407.03168'><img src='https://img.shields.io/badge/arXiv-LivePortrait-red'></a>
<a href='https://liveportrait.github.io'><img src='https://img.shields.io/badge/Project-LivePortrait-green'></a>
<a href='https://github.com/KwaiVGI/LivePortrait'>Official LivePortrait</a>
</div>
<br>

<p align="center">
  <img src="./assets/docs/showcase2.gif" alt="showcase">
  <br>
  🔥 For more results, visit the LivePortrait <a href="https://liveportrait.github.io/"><strong>homepage</strong></a> 🔥
</p>

## 🔥 Getting Started

### 1. Clone the code and prepare the environment
```bash
git clone https://github.com/KwaiVGI/LivePortrait
cd LivePortrait

# create the environment with conda
conda create -n LivePortrait python==3.9.18
conda activate LivePortrait
# install dependencies with pip
pip install -r requirements.txt
```
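
Optionally, you can verify the environment before moving on. A quick check (assuming the requirements installed a CUDA-enabled build of PyTorch):

```bash
# optional: print the torch version and whether a GPU is visible
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"
```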

### 2. Download pretrained weights

Download the pretrained weights from HuggingFace:
```bash
# you may need to run `git lfs install` first
git clone https://huggingface.co/KwaiVGI/liveportrait pretrained_weights
```
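
Alternatively, the Hugging Face CLI can download the same repository without git-lfs (an untested sketch; it assumes `huggingface_hub` is installed, e.g. `pip install "huggingface_hub[cli]"`):

```bash
# download the weights repo into ./pretrained_weights
huggingface-cli download KwaiVGI/liveportrait --local-dir pretrained_weights
```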

Or, download all pretrained weights from [Google Drive](https://drive.google.com/drive/folders/1UtKgzKjFAOmZkhNK-OYT0caJ_w2XAnib) or [Baidu Yun](https://pan.baidu.com/s/1MGctWmNla_vZxDbEp2Dtzw?pwd=z5cn). We have packed all weights in one directory 😊. Unzip and place them in `./pretrained_weights`, making sure the directory structure is as follows:
```text
pretrained_weights
├── insightface
│   └── models
│       └── buffalo_l
│           ├── 2d106det.onnx
│           └── det_10g.onnx
└── liveportrait
    ├── base_models
    │   ├── appearance_feature_extractor.pth
    │   ├── motion_extractor.pth
    │   ├── spade_generator.pth
    │   └── warping_module.pth
    ├── landmark.onnx
    └── retargeting_models
        └── stitching_retargeting_module.pth
```
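
As an optional sanity check, you can list the model files on disk and compare them against the tree above:

```bash
# optional: list every .pth and .onnx file under pretrained_weights
find pretrained_weights -name '*.pth' -o -name '*.onnx'
```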

### 3. Inference 🚀

#### Fast hands-on
```bash
python inference.py
```

If the script runs successfully, you will get an output mp4 file named `animations/s6--d0_concat.mp4`, which contains the driving video, the input image, and the generated result.

<p align="center">
  <img src="./assets/docs/inference.gif" alt="image">
</p>

Or, you can change the input by specifying the `-s` and `-d` arguments:

```bash
python inference.py -s assets/examples/source/s9.jpg -d assets/examples/driving/d0.mp4

# disable pasting back to run faster
python inference.py -s assets/examples/source/s9.jpg -d assets/examples/driving/d0.mp4 --no_flag_pasteback

# see more options
python inference.py -h
```

To use a video as the source, specify the source video with `-sd`, the driving video with `-d`, and set `-vd True`:

```bash
python inference.py -sd assets/examples/driving/d3.mp4 -d assets/examples/driving/d0.mp4 -vd True

# disable pasting back to run faster
python inference.py -sd assets/examples/driving/d3.mp4 -d assets/examples/driving/d0.mp4 -vd True --no_flag_pasteback
```

#### Driving video auto-cropping

To use your own driving video, we **recommend**:
- Crop it to a **1:1** aspect ratio (e.g., 512x512 or 256x256 pixels), or enable auto-cropping with `--flag_crop_driving_video`.
- Focus on the head area, similar to the example videos.
- Minimize shoulder movement.
- Make sure the first frame of the driving video shows a frontal face with a **neutral expression**.

Below is an auto-cropping example using `--flag_crop_driving_video`:
```bash
python inference.py -s assets/examples/source/s9.jpg -d assets/examples/driving/d13.mp4 --flag_crop_driving_video
```

If the auto-cropping results are not satisfactory, you can adjust the scale and offset with the `--scale_crop_video` and `--vy_ratio_crop_video` options, or crop the video manually, for example as sketched below.
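
A generic `ffmpeg` one-liner can handle the manual crop (an illustrative sketch, not part of this repo; the file names are placeholders, and you should adjust the crop window so the head stays centered):

```bash
# center-crop to a square, resize to 512x512, and copy the audio unchanged
ffmpeg -i driving_raw.mp4 -vf "crop='min(iw,ih)':'min(iw,ih)',scale=512:512" -c:a copy driving_512.mp4
```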

#### Template making
You can also reuse the auto-generated `.pkl` template file to speed up inference and **protect privacy**, for example:
```bash
python inference.py -s assets/examples/source/s9.jpg -d assets/examples/driving/d5.pkl
```

**Discover more interesting results on our [Homepage](https://liveportrait.github.io)** 😊

### 4. Gradio interface 🤗

We also provide a Gradio interface for a better experience. Just run:

```bash
python app.py
```

You can specify the `--server_port`, `--share`, and `--server_name` arguments to suit your needs!
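
For example (the values here are illustrative):

```bash
# bind to all interfaces on port 7860 and create a public share link
python app.py --server_name 0.0.0.0 --server_port 7860 --share
```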

## Acknowledgements
We would like to thank the contributors of the [FOMM](https://github.com/AliaksandrSiarohin/first-order-model), [Open Facevid2vid](https://github.com/zhanglonghao1992/One-Shot_Free-View_Neural_Talking_Head_Synthesis), [SPADE](https://github.com/NVlabs/SPADE), [InsightFace](https://github.com/deepinsight/insightface), and [LivePortrait](https://github.com/KwaiVGI/LivePortrait) repositories for their open research and contributions.

## Citation 💖

```bibtex
@article{guo2024liveportrait,
  title   = {LivePortrait: Efficient Portrait Animation with Stitching and Retargeting Control},
  author  = {Guo, Jianzhu and Zhang, Dingyun and Liu, Xiaoqiang and Zhong, Zhizhou and Zhang, Yuan and Wan, Pengfei and Zhang, Di},
  journal = {arXiv preprint arXiv:2407.03168},
  year    = {2024}
}
```