nielsr (HF Staff) committed commit `54f3361` (verified) · 1 parent: `b7a2734`

Improve model card: Add pipeline tag, library name, paper/code/project links, and citation


This PR improves the model card for PersonaLive! by:
- Adding `pipeline_tag: image-to-video` to the YAML metadata to correctly categorize the model and enhance discoverability on the Hugging Face Hub.
- Adding `library_name: diffusers` to the YAML metadata. This enables the automated "how to use" code snippet, as the model demonstrates compatibility with components often used in the Diffusers library (e.g., `sd-image-variations-diffusers`, `sd-vae-ft-mse`).
- Adding explicit links to the paper ([PersonaLive! Expressive Portrait Image Animation for Live Streaming](https://huggingface.co/papers/2512.11253)), the code repository ([GitHub](https://github.com/GVCLab/PersonaLive)), and the project page ([huai-chang.github.io](https://huai-chang.github.io/)) at the top of the model card content for easier access.
- Uncommenting the BibTeX citation section, which was present but hidden in the original model card, and filling in the citation entry for the arXiv paper.
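For reference, the two new metadata keys live in the README's YAML front matter and can be read back out with plain string handling. This is a minimal sketch (the `front_matter_fields` helper is hypothetical, and a real YAML parser would be more robust):

```python
# Front matter of the model card after this PR (abbreviated).
README_HEAD = """---
license: apache-2.0
tags:
- portrait-animation
- real-time
- diffusion
pipeline_tag: image-to-video
library_name: diffusers
---
"""

def front_matter_fields(text):
    """Extract simple `key: value` pairs from the YAML front matter block."""
    body = text.split("---")[1]  # text between the first pair of --- fences
    fields = {}
    for line in body.strip().splitlines():
        if ":" in line and not line.startswith("-"):  # skip list items
            key, _, value = line.partition(":")
            fields[key.strip()] = value.strip()
    return fields

fields = front_matter_fields(README_HEAD)
print(fields["pipeline_tag"])   # image-to-video
print(fields["library_name"])   # diffusers
```

The Hub reads these same keys to place the model under the image-to-video task filter and to render the automated usage snippet.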

Please review and merge this PR if everything looks good.

Files changed (1): README.md (+18 −9)
README.md CHANGED

````diff
@@ -1,9 +1,11 @@
 ---
 license: apache-2.0
 tags:
-- portrait-animation
-- real-time
-- diffusion
+- portrait-animation
+- real-time
+- diffusion
+pipeline_tag: image-to-video
+library_name: diffusers
 ---
 
 <div align="center">
@@ -14,6 +16,8 @@ tags:
 
 <h2>Expressive Portrait Image Animation for Live Streaming</h2>
 
+[**📚 Paper**](https://huggingface.co/papers/2512.11253) | [**💻 Code**](https://github.com/GVCLab/PersonaLive) | [**🏠 Project Page**](https://huai-chang.github.io/)
+
 [Zhiyuan Li<sup>1,2,3</sup>](https://huai-chang.github.io/) · [Chi-Man Pun<sup>1</sup>](https://cmpun.github.io/) 📪 · [Chen Fang<sup>2</sup>](http://fangchen.org/) · [Jue Wang<sup>2</sup>](https://scholar.google.com/citations?user=Bt4uDWMAAAAJ&hl=en) · [Xiaodong Cun<sup>3</sup>](https://vinthony.github.io/academic/) 📪
 
 <sup>1</sup> University of Macau &nbsp;&nbsp; <sup>2</sup> [Dzine.ai](https://www.dzine.ai/) &nbsp;&nbsp; <sup>3</sup> [GVC Lab, Great Bay University](https://gvclab.github.io/)
@@ -125,11 +129,16 @@ python inference_online.py
 ```
 then open `http://0.0.0.0:7860` in your browser. (*If `http://0.0.0.0:7860` does not work well, try `http://localhost:7860`)
 
-<!-- ## 📋 Citation
-If you find PersonaLive useful for your research, welcome to 🌟 this repo and cite our work using the following BibTeX:
-```bibtex
-
-``` -->
-
 ## ❤️ Acknowledgement
 This code is mainly built upon [Moore-AnimateAnyone](https://github.com/MooreThreads/Moore-AnimateAnyone), [X-NeMo](https://byteaigc.github.io/X-Portrait2/), [StreamDiffusion](https://github.com/cumulo-autumn/StreamDiffusion), [RAIN](https://pscgylotti.github.io/pages/RAIN/) and [LivePortrait](https://github.com/KlingTeam/LivePortrait), thanks to their invaluable contributions.
+
+## ⭐ Citation
+If you find PersonaLive useful for your research, welcome to 🌟 this repo and cite our work using the following BibTeX:
+```bibtex
+@article{li2025personalive,
+  title={PersonaLive! Expressive Portrait Image Animation for Live Streaming},
+  author={Li, Zhiyuan and Pun, Chi-Man and Fang, Chen and Wang, Jue and Cun, Xiaodong},
+  journal={arXiv preprint arXiv:2512.11253},
+  year={2025}
+}
+```
````