---
license: gpl-3.0
---
## FacialMMT

This repo contains the data and pretrained models for FacialMMT, a framework that leverages facial sequences of the real speaker to aid multimodal emotion recognition.

The model's performance on the MELD test set is:

| Release | W-F1 (%) |
|:--------:|:--------:|
| 07-10-23 | 66.73 |

It is currently ranked third on [paperswithcode](https://paperswithcode.com/sota/emotion-recognition-in-conversation-on-meld?p=a-facial-expression-aware-multimodal-multi).
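W-F1 is the weighted F1 score commonly reported on MELD: per-class F1 averaged with weights proportional to each class's share of the test set. A minimal sketch of the metric (the utterance labels below are made up for illustration, not MELD data):

```python
from collections import Counter

def weighted_f1(y_true, y_pred):
    """Per-class F1, averaged with weights proportional to class support."""
    support = Counter(y_true)
    total = len(y_true)
    score = 0.0
    for cls, n in support.items():
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == cls and p == cls)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != cls and p == cls)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == cls and p != cls)
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
        score += (n / total) * f1  # weight each class by its support
    return score

# Hypothetical 4-utterance example with MELD-style emotion labels
y_true = ["neutral", "joy", "anger", "neutral"]
y_pred = ["neutral", "joy", "neutral", "neutral"]
print(weighted_f1(y_true, y_pred))  # 0.65
```

This is equivalent to `sklearn.metrics.f1_score(..., average="weighted")`; classes the model never recovers (here, `anger`) still drag the score down in proportion to their frequency.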

If you're interested, please check out this [repo](https://github.com/NUSTM/FacialMMT) for a more detailed explanation of how to use our model.

Paper: [A Facial Expression-Aware Multimodal Multi-task Learning Framework for Emotion Recognition in Multi-party Conversations](https://aclanthology.org/2023.acl-long.861.pdf). In Proceedings of ACL 2023 (Main Conference), pp. 15445–15459.

Authors: Wenjie Zheng, Jianfei Yu, Rui Xia, and Shijin Wang