---
license: cc-by-nc-4.0
viewer: false
extra_gated_prompt: >-
  Explicit consent is given by Intelligent Interaction Group for academic
  research. The rights to the annotations of the MER dataset belong to
  Intelligent Interaction Group. No legal claims of any kind can be derived
  from accepting and using the database. Intelligent Interaction Group is not
  liable for any damage resulting from receiving or using the database or any
  other files provided by Intelligent Interaction Group. The licensee may not
  hand over the database, or any other files containing information derived
  from it (such as labelling files), to third parties, nor modify the database
  without express written consent from Intelligent Interaction Group.
extra_gated_fields:
  Organization: text
  Country: country
  Specific date: date_picker
  I want to use this dataset for:
    type: select
    options:
      - Research
      - Education
      - label: Other
        value: other
  I agree to use this dataset for non-commercial use ONLY: checkbox
language:
- en
---

# MER2023: Multimodal Emotion Recognition Challenge

## Introduction

Multimodal emotion recognition has become a vital research area due to its widespread applications in human-computer interaction. With the rise of deep learning, the field has made significant progress in recent years. However, several challenges still hinder its deployment in real-world scenarios:

1. **Labeling is costly**: annotating large-scale multimodal datasets is labor-intensive and expensive.
2. **Modalities degrade**: in real environments, background noise, poor lighting, or network-induced blur can severely corrupt input modalities.

To tackle these challenges and promote robust, scalable research, we organized the **MER 2023 Challenge**. This competition encourages the development of **innovative and practical multimodal emotion recognition** technologies.

---

## Tracks

### Track 1: Multi-label Learning (MER-MULTI)

Predict **both discrete and dimensional** emotions from multimodal inputs. We encourage methods that model **multi-label relationships**.

Related work: Wang *et al.*, 2022 [1]

---

### Track 2: Modality Robustness (MER-NOISE)

Evaluate your system's **robustness to corrupted inputs**, including noisy audio and blurred visuals.

Related work: Hazarika *et al.*, 2022 [2]; Zhang *et al.*, 2022 [3]; Lian *et al.*, 2023 [4]

---

### Track 3: Semi-Supervised Learning (MER-SEMI)

Leverage **unlabeled video samples** with semi-supervised methods such as masked autoencoders.

Related work: He *et al.*, 2022 [5]

---

## Dataset Overview

The MER2023 dataset extends the CHEAVD dataset and introduces automatic collection of unlabeled data and refined sample filtering for better reliability.

- **Reliable samples** are split into `Train&Val`, `MER-MULTI`, and `MER-NOISE`.
- **Unreliable and unlabeled samples** form the `MER-SEMI` set.

### Dataset Statistics

| Partition | # Labeled Samples | # Unlabeled Samples | Duration (hh:mm:ss) |
|-----------|-------------------|---------------------|---------------------|
| Train&Val | 3373 | 0 | 03:45:47 |
| MER-MULTI | 411 | 0 | 00:28:09 |
| MER-NOISE | 412 | 0 | 00:26:23 |
| MER-SEMI | 834 | 73148 | 67:41:24 |

---

## Schedule

- **April 30, 2023** – Data & baselines released
- **July 1, 2023** – Evaluation datasets released
- **July 6, 2023** – Results submission deadline
- **July 14, 2023** – Paper submission deadline
- **July 30, 2023** – Notification of acceptance
- **August 6, 2023** – Camera-ready submission

All deadlines follow **23:59 Anywhere on Earth (AoE)**.

<!-- ---

## Evaluation Metrics

Let the labeled dataset be defined as:

**D<sub>l</sub> = {(x<sub>i</sub>, e<sub>i</sub>, v<sub>i</sub>)}<sub>i=1</sub><sup>N<sub>l</sub></sup>**

where **e<sub>i</sub> ∈ {1, 2, ..., C}** denotes the **discrete emotion category**, and **v<sub>i</sub> ∈ [−5, 5]** denotes the **valence score**.

### Discrete Emotion (Classification)

To address class imbalance, we adopt the **weighted average F1-score**.

The F1-score for class *j* is computed as:

**F<sub>j</sub> = (2 × Precision<sub>j</sub> × Recall<sub>j</sub>) / (Precision<sub>j</sub> + Recall<sub>j</sub>)**

The weighted metric across all classes is:

**metric<sub>e</sub> = (Σ<sub>j=1</sub><sup>C</sup> n<sub>j</sub> × F<sub>j</sub>) / (Σ<sub>j=1</sub><sup>C</sup> n<sub>j</sub>)**

where **n<sub>j</sub>** is the number of samples in class *j*.

### Dimensional Emotion (Regression)

We use the **Mean Squared Error (MSE)** as the regression metric:

**metric<sub>v</sub> = (1 / N<sub>l</sub>) × Σ<sub>i=1</sub><sup>N<sub>l</sub></sup> (v<sub>i</sub> − v̂<sub>i</sub>)²**

### Combined Metric

For sub-challenges involving both discrete and dimensional emotions (MER-MULTI and MER-NOISE), we define a combined metric:

**metric = metric<sub>e</sub> − 0.25 × metric<sub>v</sub>**

In MER-SEMI, since only discrete labels are available for evaluation, we use **metric<sub>e</sub>** alone. -->

---
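For MER-MULTI and MER-NOISE, submissions are ranked by a combined score: the weighted average F1 over discrete emotion classes minus 0.25 times the MSE of predicted valence (MER-SEMI uses the weighted F1 alone). A minimal pure-Python sketch of this scoring follows; the function names are illustrative, not the official evaluation script:

```python
def weighted_f1(y_true, y_pred):
    """Weighted average F1: per-class F1 weighted by class support n_j."""
    classes = sorted(set(y_true))
    total, score = len(y_true), 0.0
    for c in classes:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
        score += (tp + fn) / total * f1  # weight by support of class c
    return score

def mse(v_true, v_pred):
    """Mean squared error over valence predictions."""
    return sum((t - p) ** 2 for t, p in zip(v_true, v_pred)) / len(v_true)

def combined_metric(emo_true, emo_pred, val_true, val_pred):
    """Combined track score: weighted F1 minus 0.25 * valence MSE."""
    return weighted_f1(emo_true, emo_pred) - 0.25 * mse(val_true, val_pred)
```

A perfect classifier with zero valence error scores 1.0; valence errors reduce the score at a quarter of their MSE.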

## References

[1] Wang et al. (2022). *Multi-label GCN for dynamic facial expression recognition*.

[2] Hazarika et al. (2022). *Modality robustness in sentiment analysis*.

[3] Zhang et al. (2022). *Deep Partial Multi-view Learning*. IEEE TPAMI.

[4] Lian et al. (2023). *Graph Completion Network*. IEEE TPAMI.

[5] He et al. (2022). *Masked Autoencoders Are Scalable Vision Learners*. CVPR.

---

## Decryption Password (Visible After Approval)

> ⚠️ The dataset files are compressed and protected with a password.

After your access request has been approved, the password will be provided in the file **`README_AFTER_APPROVAL.md`**.

This file also contains an alternative **Baidu Netdisk** download link for your convenience.

---

## Contact

For questions or collaboration, reach out to the organizers by email or open an issue in this repository.

> merchallenge.contact@gmail.com

---

© 2023 Intelligent Interaction Group. All rights reserved.

Licensed under [CC BY-NC 4.0](https://creativecommons.org/licenses/by-nc/4.0/).