---
license: cc-by-nc-4.0
viewer: false
extra_gated_prompt: >-
  Explicit consent is given by Intelligent Interaction Group for academic
  research. The rights to the annotations of the MER dataset belong to
  Intelligent Interaction Group. No legal claims of any kind can be derived
  from accepting and using the database. Intelligent Interaction Group is not
  liable for any damage resulting from receiving or using the database or any
  other files provided by Intelligent Interaction Group. The licensee is not
  permitted to hand over the database or any other files containing
  information derived from it (such as labelling files) to third parties, nor
  may they modify the database without the express written consent of
  Intelligent Interaction Group.
extra_gated_fields:
  Organization: text
  Country: country
  Specific date: date_picker
  I want to use this dataset for:
    type: select
    options:
      - Research
      - Education
      - label: Other
        value: other
  I agree to use this dataset for non-commercial use ONLY: checkbox
language:
- en
---
![MER2023 Banner](MER2023.png)
# 🧠 MER2023: Multimodal Emotion Recognition Challenge
## 🎯 Introduction
Multimodal emotion recognition is a vital research area with wide-ranging applications in human-computer interaction. Driven by deep learning, the field has made significant progress in recent years. However, several challenges still hinder its deployment in real-world scenarios:
1. πŸ§ͺ **Labeling is costly**: Annotating large-scale datasets is labor-intensive and expensive.
2. πŸ“Ά **Modality degradation**: In real environments, background noise, poor lighting, or network-induced blur can severely degrade input modalities.
To tackle these challenges and promote robust, scalable research, we organized the **MER 2023 Challenge**. This competition encourages the development of **innovative and practical multimodal emotion recognition** technologies.
---
## 🏁 Tracks
### πŸ”Ή Track 1: Multi-label Learning (MER-MULTI)
Predict **both discrete and dimensional** emotions from multimodal inputs. We encourage methods that model **multi-label relationships**.
πŸ”— Related work: Wang *et al.*, 2022 [1]
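A minimal multi-task sketch (ours, not the official baseline) illustrates the setup: a shared multimodal embedding feeds a classification head for the discrete emotion and a regression head for valence. The embedding size, class count, and loss weight below are placeholder assumptions.

```python
# Sketch of a multi-task head: one shared embedding, two outputs.
# embed_dim, num_classes, and alpha are placeholders, not official values.
import torch
import torch.nn as nn

class MultiTaskHead(nn.Module):
    def __init__(self, embed_dim: int = 256, num_classes: int = 6):
        super().__init__()
        self.cls_head = nn.Linear(embed_dim, num_classes)  # discrete emotion logits
        self.reg_head = nn.Linear(embed_dim, 1)            # valence score

    def forward(self, z: torch.Tensor):
        return self.cls_head(z), self.reg_head(z).squeeze(-1)

def multi_task_loss(logits, valence_pred, labels, valence, alpha: float = 0.25):
    # Cross-entropy for the discrete label, MSE for valence, weighted jointly.
    ce = nn.functional.cross_entropy(logits, labels)
    mse = nn.functional.mse_loss(valence_pred, valence)
    return ce + alpha * mse
```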
---
### πŸ”Ή Track 2: Modality Robustness (MER-NOISE)
Evaluate your system’s **robustness to corrupted inputs**, including noisy audio and blurred visuals.
πŸ”— Related work: Hazarika *et al.*, 2022 [2]; Zhang *et al.*, 2022 [3]; Lian *et al.*, 2023 [4]
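One common way to prepare for this track is to corrupt clean training data on the fly. The sketch below is our illustration, not the official corruption pipeline; the SNR level and blur kernel size are assumptions.

```python
# On-the-fly corruption for robustness training (illustrative levels only).
import torch

def corrupt_audio(wave: torch.Tensor, snr_db: float = 10.0) -> torch.Tensor:
    # Add white noise scaled so the result has the target signal-to-noise ratio.
    noise = torch.randn_like(wave)
    signal_power = wave.pow(2).mean()
    noise_power = noise.pow(2).mean()
    scale = torch.sqrt(signal_power / (noise_power * 10 ** (snr_db / 10)))
    return wave + scale * noise

def corrupt_frames(frames: torch.Tensor, kernel_size: int = 9) -> torch.Tensor:
    # frames: (T, C, H, W); a simple box blur as a stand-in for camera blur.
    blur = torch.nn.AvgPool2d(kernel_size, stride=1, padding=kernel_size // 2)
    return blur(frames)
```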
---
### πŸ”Ή Track 3: Semi-Supervised Learning (MER-SEMI)
Leverage **unlabeled video samples** with semi-supervised methods such as masked autoencoders.
πŸ”— Related work: He *et al.*, 2022 [5]
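As a concrete starting point, the loop below sketches one simple semi-supervised recipe (self-training with pseudo-labels, not the masked-autoencoder approach of [5]). The 0.9 confidence threshold and the assumption that each batch is a single feature tensor are ours.

```python
# Self-training sketch: pseudo-label unlabeled clips with a model trained on
# Train&Val, keep only confident predictions, and add them to the next round.
import torch

@torch.no_grad()
def pseudo_label(model, unlabeled_loader, threshold: float = 0.9):
    model.eval()
    kept = []
    for features in unlabeled_loader:  # assumes each batch is a feature tensor
        probs = torch.softmax(model(features), dim=-1)
        conf, pred = probs.max(dim=-1)
        mask = conf >= threshold
        if mask.any():
            kept.append((features[mask], pred[mask]))
    return kept  # (inputs, pseudo-labels) pairs for retraining
```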
---
## πŸ“¦ Dataset Overview
The MER2023 dataset extends the CHEAVD dataset, adding automatic collection of unlabeled data and refined sample filtering for more reliable labels.
- **Reliable samples** are split into `Train&Val`, `MER-MULTI`, and `MER-NOISE`.
- **Unreliable & unlabeled samples** form the `MER-SEMI` set.
### πŸ“Š Dataset Statistics
| Partition   | # Labeled Samples | # Unlabeled Samples | Duration (hh:mm:ss) |
|-------------|-------------------|---------------------|---------------------|
| Train&Val | 3373 | 0 | 03:45:47 |
| MER-MULTI | 411 | 0 | 00:28:09 |
| MER-NOISE | 412 | 0 | 00:26:23 |
| MER-SEMI | 834 | 73148 | 67:41:24 |
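After downloading, a quick sanity check against the table above is easy if the release ships a label file. The `labels.csv` name and its `partition` column below are hypothetical; adjust them to the actual release layout.

```python
# Hypothetical sanity check of per-partition sample counts.
import pandas as pd

df = pd.read_csv("labels.csv")         # assumed file name and schema
print(df.groupby("partition").size())  # compare with the table above
```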
---
## πŸ—“οΈ Schedule
- πŸ“‚ **April 30, 2023** – Data & baseline released
- πŸ“¦ **July 1, 2023** – Evaluation datasets released
- πŸ§ͺ **July 6, 2023** – Results submission deadline
- πŸ“ **July 14, 2023** – Paper submission deadline
- βœ… **July 30, 2023** – Notification of acceptance
- πŸ–‹οΈ **August 6, 2023** – Camera-ready submission
πŸ•› All deadlines follow **23:59 Anywhere on Earth (AoE)**.
---
## πŸ“ Evaluation Metrics
Let the labeled dataset be defined as

$$\mathcal{D}_l = \{(x_i, e_i, v_i)\}_{i=1}^{N_l},$$

where $e_i \in \{1, 2, \dots, C\}$ denotes the **discrete emotion category** and $v_i \in [-5, 5]$ denotes the **valence score**.

### πŸ”΅ Discrete Emotion (Classification)
To address class imbalance, we adopt the **weighted average F1-score**. The F1-score for class $j$ is

$$F_j = \frac{2 \cdot \mathrm{Precision}_j \cdot \mathrm{Recall}_j}{\mathrm{Precision}_j + \mathrm{Recall}_j},$$

and the weighted metric across all classes is

$$\mathrm{metric}_e = \frac{\sum_{j=1}^{C} n_j F_j}{\sum_{j=1}^{C} n_j},$$

where $n_j$ is the number of samples in class $j$.

### 🟣 Dimensional Emotion (Regression)
We use the **Mean Squared Error (MSE)** as the regression metric:

$$\mathrm{metric}_v = \frac{1}{N_l} \sum_{i=1}^{N_l} (v_i - \hat{v}_i)^2.$$

### 🟒 Combined Metric
For sub-challenges involving both discrete and dimensional emotions (MER-MULTI and MER-NOISE), we define a combined metric:

$$\mathrm{metric} = \mathrm{metric}_e - 0.25 \cdot \mathrm{metric}_v.$$

In MER-SEMI, only discrete labels are available for evaluation, so we use $\mathrm{metric}_e$ alone.
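These metrics map directly onto scikit-learn: `average="weighted"` reproduces the $n_j$-weighted sum in $\mathrm{metric}_e$. A small reference sketch (ours, not official scoring code):

```python
# Combined metric: weighted F1 on discrete labels minus 0.25 x valence MSE.
from sklearn.metrics import f1_score, mean_squared_error

def combined_metric(y_true, y_pred, v_true, v_pred):
    metric_e = f1_score(y_true, y_pred, average="weighted")  # weighted F1
    metric_v = mean_squared_error(v_true, v_pred)            # MSE on valence
    return metric_e - 0.25 * metric_v
```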
---
## πŸ“š References
[1] Wang et al. (2022). *Multi-label GCN for dynamic facial expression recognition*.
[2] Hazarika et al. (2022). *Modality robustness in sentiment analysis*.
[3] Zhang et al. (2022). *Deep Partial Multi-view Learning*. IEEE TPAMI.
[4] Lian et al. (2023). *Graph Completion Network*. IEEE TPAMI.
[5] He et al. (2022). *Masked Autoencoders*. CVPR.
---
## πŸ” Decryption Password (Visible After Approval)
> ⚠️ The dataset files are compressed and protected with a password.
After your access request has been approved, the password will be provided in the file:
**`README_AFTER_APPROVAL.md`**
This file also contains an alternative **Baidu Netdisk** download link for your convenience.
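If the archives are standard password-protected ZIP files (an assumption; check the approval notes for the actual format), Python's built-in `zipfile` module can extract them. The archive name and password below are placeholders.

```python
# Placeholders: substitute the real archive name and the password
# from README_AFTER_APPROVAL.md.
import zipfile

with zipfile.ZipFile("mer2023_train.zip") as zf:
    zf.extractall(path="mer2023", pwd=b"PASSWORD_FROM_APPROVAL_README")
```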
---
## πŸ“« Contact
For questions or collaboration, feel free to reach out to the organizers via email or raise an issue in this repository.
> merchallenge.contact@gmail.com
---
Β© 2023 Intelligent Interaction Group. All rights reserved.
Licensed under [CC BY-NC 4.0](https://creativecommons.org/licenses/by-nc/4.0/)