base_model:
- lmsys/vicuna-7b-v1.5
---

<div align="center">
<img src="./facellava_logo.png" width="300">

<h1>Facial Expression and Attribute Understanding through Instruction Tuning</h1>
<h3>WACV 2026</h3>

<p>
<a href="https://arxiv.org/abs/2504.07198">
<img src="https://img.shields.io/badge/arXiv-2504.07198-b31b1b.svg" alt="arXiv">
</a>
<a href="https://huggingface.co/chaubeyG/FaceLLaVA">
<img src="https://img.shields.io/badge/%F0%9F%A4%97%20Weights-FaceLLaVA-orange" alt="Model Weights">
</a>
<a href="LICENSE.rst">
<img src="https://img.shields.io/badge/license-USC%20Research-green" alt="License">
</a>
<a href="https://www.python.org/">
<img src="https://img.shields.io/badge/Python-3.10+-blue.svg" alt="Python Version">
</a>
</p>
<br>
</div>

These are the official released weights of the **WACV 2026 Round 1** Early Accept paper (6.4% acceptance rate), Face-LLaVA: Facial Expression and Attribute Understanding through Instruction Tuning. Please refer to the [official GitHub repository](https://github.com/ihp-lab/face-llava) for instructions to run inference.
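As a quick start, the released weights can be fetched locally with the `huggingface_hub` library before following the repository's inference instructions. This is a minimal sketch, not part of the official instructions: the repo ID comes from the badge above, and the local directory name is an arbitrary choice.

```python
# Minimal sketch: download the Face-LLaVA weights from the Hugging Face Hub.
# Requires: pip install huggingface_hub
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="chaubeyG/FaceLLaVA",  # repo linked in the badges above
    local_dir="./FaceLLaVA",       # arbitrary local target directory
)
print(local_dir)  # path where the weight files were placed
```

The resulting directory can then be passed to the inference scripts in the GitHub repository as the model path.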

## ⚖️ License

This codebase is distributed under the USC Research license. See [LICENSE.rst](LICENSE.rst) for more details.

## 🙌 Credits

This codebase builds upon the following excellent works: [VideoLLaVA](https://github.com/PKU-YuanGroup/Video-LLaVA), [LLaVA](https://github.com/haotian-liu/LLaVA) and [LLaVA-NeXT](https://github.com/LLaVA-VL/LLaVA-NeXT). We gratefully acknowledge their contributions to the open-source community.

## 🪶 Citation