---
license: apache-2.0
tags:
- portrait-animation
- real-time
- diffusion
pipeline_tag: image-to-video
library_name: diffusers
---

# PersonaLive!

**Expressive Portrait Image Animation for Live Streaming**

[![GitHub](https://img.shields.io/github/stars/GVCLab/PersonaLive?style=social)](https://github.com/GVCLab/PersonaLive)

[Zhiyuan Li](https://huai-chang.github.io/)<sup>1,2,3</sup> · [Chi-Man Pun](https://cmpun.github.io/)<sup>1,📪</sup> · [Chen Fang](http://fangchen.org/)<sup>2</sup> · [Jue Wang](https://scholar.google.com/citations?user=Bt4uDWMAAAAJ&hl=en)<sup>2</sup> · [Xiaodong Cun](https://vinthony.github.io/academic/)<sup>3,📪</sup>

<sup>1</sup> University of Macau&nbsp;&nbsp;&nbsp;<sup>2</sup> [Dzine.ai](https://www.dzine.ai/)&nbsp;&nbsp;&nbsp;<sup>3</sup> [GVC Lab, Great Bay University](https://gvclab.github.io/)

โšก๏ธ Real-time, Streamable, Infinite-Length โšก๏ธ
โšก๏ธ Portrait Animation requires only ~12GB VRAM โšก๏ธ

## 📋 TODO

- [ ] If you find PersonaLive useful or interesting, please give us a Star 🌟 on our [GitHub repo](https://github.com/GVCLab/PersonaLive)! Your support drives us to keep improving. 🍻
- [ ] Fix bugs (if you encounter any issues, please feel free to open an issue or contact me! 🙏)
- [ ] Enhance WebUI (support reference image replacement)
- [x] **[2025.12.22]** 🔥 Supported a streaming strategy in offline inference to generate long videos on 12GB VRAM!
- [x] **[2025.12.17]** 🔥 [ComfyUI-PersonaLive](https://github.com/okdalto/ComfyUI-PersonaLive) is now supported! (Thanks to [@okdalto](https://github.com/okdalto))
- [x] **[2025.12.15]** 🔥 Release `paper`!
- [x] **[2025.12.12]** 🔥 Release `inference code`, `config`, and `pretrained weights`!

## ⚙️ Framework

We present PersonaLive, a `real-time` and `streamable` diffusion framework capable of generating `infinite-length` portrait animations on a single `12GB GPU`.

## 🚀 Getting Started

### 🛠 Installation

```bash
# Clone this repo
git clone https://github.com/GVCLab/PersonaLive
cd PersonaLive

# Create a conda environment
conda create -n personalive python=3.10
conda activate personalive

# Install packages with pip
pip install -r requirements_base.txt
```

### ⬇️ Download weights

Option 1: Download the pre-trained weights of the base models and other components ([sd-image-variations-diffusers](https://huggingface.co/lambdalabs/sd-image-variations-diffusers) and [sd-vae-ft-mse](https://huggingface.co/stabilityai/sd-vae-ft-mse)) automatically:

```bash
python tools/download_weights.py
```

Option 2: Download the pre-trained weights into the `./pretrained_weights` folder from one of the below URLs:

Finally, these weights should be organized as follows (a quick layout check is sketched after the Offline Inference section):

```
pretrained_weights
├── onnx
│   ├── unet_opt
│   │   ├── unet_opt.onnx
│   │   └── unet_opt.onnx.data
│   └── unet
├── personalive
│   ├── denoising_unet.pth
│   ├── motion_encoder.pth
│   ├── motion_extractor.pth
│   ├── pose_guider.pth
│   ├── reference_unet.pth
│   └── temporal_module.pth
├── sd-vae-ft-mse
│   ├── diffusion_pytorch_model.bin
│   └── config.json
├── sd-image-variations-diffusers
│   ├── image_encoder
│   │   ├── pytorch_model.bin
│   │   └── config.json
│   ├── unet
│   │   ├── diffusion_pytorch_model.bin
│   │   └── config.json
│   └── model_index.json
└── tensorrt
    └── unet_work.engine
```

### 🎞️ Offline Inference

```bash
python inference_offline.py
```

⚠️ Note for RTX 50-Series (Blackwell) users: xformers is not yet fully compatible with the new architecture. To avoid crashes, please disable it by running:

```bash
python inference_offline.py --use_xformers False
```
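If inference fails with missing-file errors, it is worth confirming the weights layout shown above before debugging further. Below is a minimal stdlib sketch (a hypothetical `check_weights.py`, not part of the repo; the optional `onnx`/`tensorrt` artifacts are left out):

```python
# check_weights.py -- quick sanity check of the pretrained_weights layout.
# Not part of the PersonaLive repo; the paths mirror the tree shown above.
from pathlib import Path

REQUIRED = [
    "personalive/denoising_unet.pth",
    "personalive/motion_encoder.pth",
    "personalive/motion_extractor.pth",
    "personalive/pose_guider.pth",
    "personalive/reference_unet.pth",
    "personalive/temporal_module.pth",
    "sd-vae-ft-mse/config.json",
    "sd-vae-ft-mse/diffusion_pytorch_model.bin",
    "sd-image-variations-diffusers/model_index.json",
    "sd-image-variations-diffusers/image_encoder/config.json",
    "sd-image-variations-diffusers/image_encoder/pytorch_model.bin",
    "sd-image-variations-diffusers/unet/config.json",
    "sd-image-variations-diffusers/unet/diffusion_pytorch_model.bin",
]

root = Path("pretrained_weights")
missing = [str(root / rel) for rel in REQUIRED if not (root / rel).is_file()]
if missing:
    print("Missing files:\n  " + "\n  ".join(missing))
else:
    print("All required weights are in place.")
```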
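The long-video support keeps VRAM bounded by denoising short, overlapping chunks instead of materializing the whole clip at once. Here is a toy sketch of that general sliding-window pattern (hypothetical names such as `stream_animate` and `generate_chunk`; this is not PersonaLive's actual implementation):

```python
# Toy sliding-window generation loop -- illustrative only, NOT PersonaLive's API.
from collections import deque

def stream_animate(driving_frames, generate_chunk, chunk_size=16, overlap=4):
    """Yield animated frames chunk by chunk so memory scales with chunk_size,
    not with total video length. `generate_chunk(context, clip)` stands in for
    one diffusion pass; `overlap` output frames are carried over so adjacent
    chunks stay temporally consistent."""
    context = deque(maxlen=overlap)  # tail of the previous output chunk
    buffer = []
    for frame in driving_frames:     # frames may arrive live, e.g. from a webcam
        buffer.append(frame)
        if len(buffer) == chunk_size:
            out = generate_chunk(list(context), buffer)
            context.extend(out[-overlap:])  # carry the tail forward
            yield from out
            buffer = []
    if buffer:                       # flush the final partial chunk
        yield from generate_chunk(list(context), buffer)

# Stub usage: a pass-through "model" yields two full chunks plus a flush.
assert list(stream_animate(range(40), lambda ctx, clip: list(clip))) == list(range(40))
```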
### 📸 Online Inference

#### 📦 Setup Web UI

```bash
# Install Node.js 18+
curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.1/install.sh | bash
nvm install 18

cd webcam
source start.sh
```

#### 🏎️ Acceleration (Optional)

Converting the model to TensorRT can significantly speed up inference (~2x ⚡️). Building the engine may take about `20 minutes`, depending on your device. Note that TensorRT optimizations may lead to slight variations or a small drop in output quality.

```bash
pip install -r requirements_trt.txt
python torch2trt.py
```

*The provided TensorRT model was built on an `H100`. We recommend that `ALL users` (including H100 users) re-run `python torch2trt.py` locally to ensure the best compatibility.*

#### ▶️ Start Streaming

```bash
python inference_online.py --acceleration [none|xformers|tensorrt]
```

Use `none` for RTX 50-Series GPUs (a hypothetical helper for picking this flag is sketched after this section). Then open `http://0.0.0.0:7860` in your browser. (*If `http://0.0.0.0:7860` does not work, try `http://localhost:7860` instead.*)

**How to use**: Upload Image ➡️ Fuse Reference ➡️ Start Animation ➡️ Enjoy! 🎉
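If you script launches across mixed hardware, the `--acceleration` value can be chosen programmatically. The sketch below is a hypothetical helper (`pick_acceleration` is not part of the repo); it only encodes the RTX 50-Series caveat above:

```python
# Hypothetical helper for choosing the --acceleration flag; not the repo's logic.
import torch

def pick_acceleration(tensorrt_engine_built: bool = False) -> str:
    if not torch.cuda.is_available():
        return "none"
    if tensorrt_engine_built:
        return "tensorrt"  # fastest option once torch2trt.py has been run
    major, _ = torch.cuda.get_device_capability()
    # Blackwell (RTX 50-Series) reports compute capability 12.x, where
    # xformers is not yet fully compatible -- fall back to "none" there.
    return "none" if major >= 12 else "xformers"

print(pick_acceleration())
```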
**Regarding Latency**: Latency varies depending on your device's computing power. You can try the following methods to optimize it:

1. Lower the "Driving FPS" setting in the WebUI to reduce the computational workload.
2. Increase the frame-request multiplier in [`webcam/util.py`](https://github.com/GVCLab/PersonaLive/blob/6953d1a8b409f360a3ee1d7325093622b29f1e22/webcam/util.py#L73) (e.g., set it to `num_frames_needed * 4` or higher) to better match your device's inference speed, as illustrated in the sketch after this list.
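For method 2, the adjustment amounts to the kind of one-line change illustrated below (hypothetical `frames_to_request` helper, not the actual code at the permalinked line):

```python
# Illustration of the buffering trade-off behind method 2 -- hypothetical,
# not the real contents of webcam/util.py#L73.
def frames_to_request(num_frames_needed: int, multiplier: int = 4) -> int:
    """A larger multiplier requests bigger batches per round, which smooths
    playback on slower GPUs at the cost of a longer warm-up delay."""
    return num_frames_needed * multiplier

print(frames_to_request(8))  # -> 32 with the suggested 4x multiplier
```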
## 📚 Community Contribution

Special thanks to the community for providing helpful setups! 🥂

* **Windows + RTX 50-Series Guide**: Thanks to [@dknos](https://github.com/dknos) for providing a [detailed guide](https://github.com/GVCLab/PersonaLive/issues/10#issuecomment-3662785532) on running this project on Windows with Blackwell GPUs.
* **TensorRT on Windows**: If you are trying to convert TensorRT models on Windows, [this discussion](https://github.com/GVCLab/PersonaLive/issues/8) might be helpful. Special thanks to [@MaraScott](https://github.com/MaraScott) and [@Jeremy8776](https://github.com/Jeremy8776) for their insights.
* **ComfyUI**: Thanks to [@okdalto](https://github.com/okdalto) for helping implement [ComfyUI-PersonaLive](https://github.com/okdalto/ComfyUI-PersonaLive) support.
* **Useful Scripts**: Thanks to [@suruoxi](https://github.com/suruoxi) for implementing `download_weights.py`, and to [@andchir](https://github.com/andchir) for adding audio-merging functionality.

## 🎬 More Results

#### 👀 Visualization results

#### 🤺 Comparisons
## โญ Citation If you find PersonaLive useful for your research, welcome to cite our work using the following BibTeX: ```bibtex @article{li2025personalive, title={PersonaLive! Expressive Portrait Image Animation for Live Streaming}, author={Li, Zhiyuan and Pun, Chi-Man and Fang, Chen and Wang, Jue and Cun, Xiaodong}, journal={arXiv preprint arXiv:2512.11253}, year={2025} } ``` ## โค๏ธ Acknowledgement This code is mainly built upon [Moore-AnimateAnyone](https://github.com/MooreThreads/Moore-AnimateAnyone), [X-NeMo](https://byteaigc.github.io/X-Portrait2/), [StreamDiffusion](https://github.com/cumulo-autumn/StreamDiffusion), [RAIN](https://pscgylotti.github.io/pages/RAIN/) and [LivePortrait](https://github.com/KlingTeam/LivePortrait), thanks to their invaluable contributions.