---
license: mit
language:
  - en
base_model:
  - KwaiVGI/LivePortrait
pipeline_tag: image-to-video
---

*(showcase animation)*

# LivePortrait

This version of LivePortrait has been converted to run on the Axera NPU using w8a16 quantization.

This model has been optimized with the following:

- Compatible with Pulsar2 version: 3.4
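Here, w8a16 means the weights are stored as 8-bit integers while activations stay in 16-bit floating point. The following is a minimal illustrative sketch of symmetric per-tensor weight quantization, not the actual Pulsar2 implementation:

```python
# Illustrative w8a16 weight quantization sketch (NOT the Pulsar2 implementation):
# weights -> int8 plus a per-tensor scale; activations would remain fp16.

def quantize_w8(weights):
    """Map float weights to int8 with a symmetric per-tensor scale."""
    scale = max(abs(w) for w in weights) / 127.0
    q = [max(-128, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize_w8(q, scale):
    """Recover approximate float weights from int8 values and the scale."""
    return [v * scale for v in q]

w = [0.5, -1.27, 0.02, 1.0]
q, s = quantize_w8(w)          # q == [50, -127, 2, 100]
w_hat = dequantize_w8(q, s)
# The reconstruction error is bounded by half a quantization step (scale / 2).
```

The per-tensor scale is chosen so the largest-magnitude weight maps to ±127; real toolchains typically refine this with per-channel scales and calibration.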

## Convert tools links

For those who are interested in model conversion:

## Support Platform

## How to use

Download all files from this repository to the device.

```
(py310) axera@dell:~/samples/LivePortrait$ tree -L 2
.
├── assets
│   └── examples
├── config.json
├── python
│   ├── axmodels
│   ├── cropper.py
│   ├── infer_onnx.py
│   ├── infer.py
│   ├── pretrained_weights
│   ├── requirements.txt
│   └── utils
└── README.md

7 directories, 6 files
```

### Python env requirement

#### pyaxengine

https://github.com/AXERA-TECH/pyaxengine

```shell
wget https://github.com/AXERA-TECH/pyaxengine/releases/download/0.1.3.rc1/axengine-0.1.3-py3-none-any.whl
pip install axengine-0.1.3-py3-none-any.whl
```

#### Others

```shell
pip install -r python/requirements.txt
```
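The exact contents of `requirements.txt` are not reproduced here. As a small sanity check before running inference, a helper like the one below can report which packages are still missing (the candidate list is illustrative, not the actual requirements):

```python
import importlib.util

def missing_packages(names):
    """Return the subset of package names that cannot be imported."""
    return [n for n in names if importlib.util.find_spec(n) is None]

# Illustrative package list -- consult python/requirements.txt for the real one.
candidates = ["cv2", "numpy", "axengine"]
absent = missing_packages(candidates)
if absent:
    print("Missing packages:", ", ".join(absent))
```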

### Inference with an AX650 or AX8850 host, such as the AX650 DEMO BOARD or M4N-DOCK (AXera-Pi Pro)

TODO

### Inference with M.2 Accelerator card

What is the M.2 Accelerator card? The demo below runs on an x86 host fitted with the card.

#### Image

```
(py310) axera@dell:~/samples/LivePortrait$ python ./python/infer.py --source ./assets/examples/source/s0.jpg --driving ./assets/examples/driving/d8.jpg --models ./python/axmodels/ --output-dir ./axmodel_infer
[INFO] Available providers:  ['AXCLRTExecutionProvider']
[INFO] Using provider: AXCLRTExecutionProvider
[INFO] SOC Name: AX650N
[INFO] VNPU type: VNPUType.DISABLED
[INFO] Compiler version: 3.3 144960ad
[INFO] Using provider: AXCLRTExecutionProvider
[INFO] SOC Name: AX650N
[INFO] VNPU type: VNPUType.DISABLED
[INFO] Compiler version: 3.3 144960ad
[INFO] Using provider: AXCLRTExecutionProvider
[INFO] SOC Name: AX650N
[INFO] VNPU type: VNPUType.DISABLED
[INFO] Compiler version: 3.3 0f7260e8
[INFO] Using provider: AXCLRTExecutionProvider
[INFO] SOC Name: AX650N
[INFO] VNPU type: VNPUType.DISABLED
[INFO] Compiler version: 3.3 144960ad
FaceAnalysisDIY warmup time: 0.024s
[20:02:20] LandmarkRunner warmup time: 0.031s    human_landmark_runner.py:95
2025-05-29 20:02:20.727 | INFO     | __main__:main:727 - Start making driving motion template...
2025-05-29 20:02:20.972 | INFO     | __main__:main:747 - Prepared pasteback mask done.
2025-05-29 20:02:21.449 | INFO     | __main__:main:787 - The output of image-driven portrait animation is an image.
2025-05-29 20:02:25.475 | DEBUG    | __main__:warp_decode:647 - warp time: 4.017s
2025-05-29 20:02:25.892 | INFO     | __main__:main:881 - Animated image: ./axmodel_infer/s0--d8.jpg
2025-05-29 20:02:25.892 | INFO     | __main__:main:882 - Animated image with concat: ./axmodel_infer/s0--d8_concat.jpg
2025-05-29 20:02:25.904 | DEBUG    | __main__:<module>:894 - LivePortrait axmodel infer time: 8.165s
(py310) axera@dell:~/samples/LivePortrait$
```

Here, `--models` specifies the directory containing the `*.axmodel` files.
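The flags used above can be summarized with an `argparse` sketch. This is a reconstruction from the example commands shown in this README, not the actual `infer.py` source; defaults and help strings are assumptions:

```python
import argparse

def build_parser():
    # Reconstructed from the example commands; defaults are illustrative.
    p = argparse.ArgumentParser(description="LivePortrait axmodel inference")
    p.add_argument("--source", required=True, help="source portrait image")
    p.add_argument("--driving", required=True, help="driving image or video (.jpg or .mp4)")
    p.add_argument("--models", required=True, help="directory containing the *.axmodel files")
    p.add_argument("--output-dir", default="./axmodel_infer", help="where results are written")
    return p

args = build_parser().parse_args([
    "--source", "./assets/examples/source/s0.jpg",
    "--driving", "./assets/examples/driving/d8.jpg",
    "--models", "./python/axmodels/",
])
```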

The output of axmodel-infer is as follows:

*(result images: s0--d8_concat.jpg and s0--d8.jpg)*

#### Video

```shell
python3 ./python/infer.py --source ./assets/examples/source/s0.jpg --driving ./assets/examples/driving/d0.mp4 --models ./python/axmodels/ --output-dir ./axmodel_infer
```

The output of axmodel-infer is as follows:

s0--d0_concat_axmodel

https://github.com/user-attachments/assets/2c2a1380-a686-498d-bab7-1cf4f990edaf

s0--d0_axmodel

https://github.com/user-attachments/assets/873cec20-0e85-44df-9811-cd3a2aaa7730
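To animate one source image with several driving inputs in sequence, a small wrapper loop works. The sketch below is a dry run that only prints each command instead of executing `infer.py`; the driving paths reuse the repository examples:

```shell
# Hypothetical batch driver: one source image, several driving inputs.
# Dry run: prints the commands rather than executing infer.py.
SRC=./assets/examples/source/s0.jpg
MODELS=./python/axmodels/
OUT=./axmodel_infer
CMDS=""
for DRV in ./assets/examples/driving/d0.mp4 ./assets/examples/driving/d8.jpg; do
  CMD="python3 ./python/infer.py --source $SRC --driving $DRV --models $MODELS --output-dir $OUT"
  CMDS="$CMDS$CMD
"
  echo "$CMD"
done
```

Drop the `echo`/`CMDS` bookkeeping and invoke the command directly to actually run the batch.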