Commit 2e854c7 · Jiaxing Yu committed (parent 38a2ec7): add README.md — README.md +96 -0, imgs/results.jpg +3 -0
# MVEmo

This is the dataset repository for the paper: Bridging Categorical and Dimensional Affect: The MVEmo Multi-Task Benchmark for Music-Related Emotion Recognition.

## Dataset Details

### Dataset Description

MVEmo is a large-scale multimodal dataset of 11,764 music video samples with both static and dynamic emotion annotations for music-related emotion recognition (MRER). Its key features are:

- **Basic Information:** title, artist, genre, nationality, language, and YouTube link.
- **Different Modalities:** lyrics, video, and music.
- **Rich Emotion Annotations:** static and dynamic emotion.

### Modality Details

- **Lyrics:** The dataset contains 7,923 samples with lyrics and 3,841 samples without lyrics. We query online lyric databases using the song title and artist to retrieve official lyrics. When lyrics are not retrievable, we apply an automated transcription pipeline to the audio from the music videos: vocal separation with Demucs, followed by multilingual speech recognition with Whisper.

- **Video:** We download each video from its YouTube link and filter out visually static videos. All remaining videos are transcoded to the MPEG-4 format and uniformly resampled to 30 fps to ensure consistency across the entire MVEmo dataset.

- **Music:** We provide both audio and symbolic music. For symbolic music, we employ a lead-sheet-style representation that contains melodies, chords, and core music attributes including key, tempo, position, pitch, duration, and velocity.
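The preprocessing steps above could be sketched as command builders. Demucs, Whisper, and MPEG-4/30 fps transcoding are named in this README, but the specific CLI flags, the two-stem vocals configuration, and the H.264 codec choice are illustrative assumptions, not the authors' exact setup:

```python
import shlex

def demucs_cmd(audio_path: str) -> list[str]:
    # Separate vocals from accompaniment before transcription (assumed flags).
    return ["demucs", "--two-stems", "vocals", audio_path]

def whisper_cmd(vocals_path: str) -> list[str]:
    # Multilingual speech recognition on the separated vocal stem (assumed flags).
    return ["whisper", vocals_path, "--task", "transcribe"]

def ffmpeg_cmd(video_in: str, video_out: str) -> list[str]:
    # Transcode to MPEG-4 (H.264 assumed) and resample to a uniform 30 fps.
    return ["ffmpeg", "-i", video_in, "-r", "30", "-c:v", "libx264", video_out]

print(shlex.join(ffmpeg_cmd("clip.webm", "clip.mp4")))
# -> ffmpeg -i clip.webm -r 30 -c:v libx264 clip.mp4
```

Building commands as argument lists (rather than shell strings) keeps paths with spaces safe when passed to `subprocess.run`.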
### Emotion Details

We first propose a unified emotion representation that consists of an emotion category and an emotion intensity.

- **Emotion category:** A semantic label describing the qualitative type of the perceived emotional experience. We define the category set as 28 discrete emotion words from Russell's model: happy, delighted, excited, astonished, aroused, tense, alarmed, angry, afraid, annoyed, distressed, frustrated, miserable, sad, gloomy, depressed, bored, droopy, tired, sleepy, calm, relaxed, satisfied, at ease, content, serene, glad, and pleased.

- **Emotion intensity:** The quantitative degree of perceived certainty in the emotional experience, measured on a normalized continuous scale from 0 to 1 in increments of 0.1.
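The unified representation above can be sketched as a validated (category, intensity) pair. The class name and validation logic are our own illustration; only the 28-word category set and the 0-1 grid in 0.1 steps come from this README:

```python
from dataclasses import dataclass

# The 28 emotion words from Russell's model, as listed above.
CATEGORIES = [
    "happy", "delighted", "excited", "astonished", "aroused", "tense",
    "alarmed", "angry", "afraid", "annoyed", "distressed", "frustrated",
    "miserable", "sad", "gloomy", "depressed", "bored", "droopy", "tired",
    "sleepy", "calm", "relaxed", "satisfied", "at ease", "content",
    "serene", "glad", "pleased",
]

@dataclass(frozen=True)
class EmotionLabel:
    """One (category, intensity) pair under the unified representation."""
    category: str
    intensity: float  # 0.0 to 1.0 in increments of 0.1

    def __post_init__(self) -> None:
        if self.category not in CATEGORIES:
            raise ValueError(f"unknown emotion category: {self.category!r}")
        # Intensity must sit on the 0.1 grid (tolerance for float rounding).
        on_grid = abs(self.intensity * 10 - round(self.intensity * 10)) < 1e-9
        if not (0.0 <= self.intensity <= 1.0 and on_grid):
            raise ValueError(f"intensity off the 0-1 grid in 0.1 steps: {self.intensity}")
```

For example, `EmotionLabel("happy", 0.7)` is valid, while `EmotionLabel("happy", 0.75)` raises a `ValueError`.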
Based on this unified emotion representation, we performed static and dynamic annotation on the MVEmo dataset.

- **Static emotion:** Labeled once per sample, for a total of 11,764 pairs of emotion categories and intensities.

- **Dynamic emotion:** Labeled every 0.5 seconds starting from 2.5 seconds, for a total of 5,673,670 pairs of emotion categories and intensities.
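The dynamic-annotation timeline above can be made concrete with a small helper. The function name is ours; the README specifies only the timing scheme (every 0.5 s, starting at 2.5 s):

```python
def annotation_times(duration_s: float) -> list[float]:
    # Label time points: 2.5, 3.0, 3.5, ... up to the clip duration.
    times = []
    t = 2.5
    while t <= duration_s + 1e-9:  # tolerance for float accumulation
        times.append(round(t, 1))
        t += 0.5
    return times

print(annotation_times(4.0))  # -> [2.5, 3.0, 3.5, 4.0]
```

A 10-second clip therefore receives 16 dynamic labels; clips shorter than 2.5 s receive none.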

# MVEmo-Bench

We also introduce MVEmo-Bench, a comprehensive evaluation benchmark that covers a variety of music-related emotion recognition tasks.

- **Unimodal Emotion Recognition:** Predicting emotions from a single type of music-related input: lyrics, video, or music. In MVEmo-Bench, this task is further extended to include both static and dynamic emotion recognition.

- **Multimodal Emotion Recognition:** Predicting emotions by simultaneously analyzing multiple music-related modalities.

## Baselines

### Lyrics Emotion Recognition

- Bi-LSTM
- Bi-GRU
- XLM-EMO
- Baichuan-2-8B
- Llama-3.1-8B
- Qwen3-8B

### Music Emotion Recognition

- Music2Emo
- MERT
- M3BERT
- UIBK-DBIS
- Mirable

### Video Emotion Recognition

- VAANet
- CTEN
- LLaVA-Video-7B
- InternVL3-8B
- Qwen2.5-VL-7B

### Multimodal Emotion Recognition

- AnyGPT-7B
- EMOVA-7B
- VITA-1.5-7B
- MiniCPM-o-2.6-8B
- Qwen2.5-Omni-7B

## Evaluation Metrics

- **Precision (P), Recall (R), and F1 Score:** P is the proportion of predicted labels that are correct, and R measures the model's ability to recover all ground-truth labels. The F1 Score, defined as the harmonic mean of P and R, offers a balanced evaluation of both the accuracy and completeness of the model's predictions.

- **Emotion Distance (De):** In addition to standard classification metrics, we introduce Emotion Distance (De) to quantify the difference between the model's predictions and the original annotations in terms of emotion alignment. It computes the Euclidean distance between the original and predicted emotion labels in the polar coordinate system defined by our unified representation.

- **Error Rate (ER):** Given the characteristics of natural-language outputs, we introduce Error Rate (ER) to quantify the proportion of outputs that are either ill-formatted or non-compliant with the predefined sets.
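The three metrics could be sketched as follows. The polar layout for De (the 28 categories evenly spaced around the circle, intensity as radius) is an assumption consistent with Russell's circumplex; the paper's exact coordinate assignment may differ, as may its ER format checks:

```python
import math

# The 28 categories, assumed evenly spaced around the circumplex.
CATEGORIES = [
    "happy", "delighted", "excited", "astonished", "aroused", "tense",
    "alarmed", "angry", "afraid", "annoyed", "distressed", "frustrated",
    "miserable", "sad", "gloomy", "depressed", "bored", "droopy", "tired",
    "sleepy", "calm", "relaxed", "satisfied", "at ease", "content",
    "serene", "glad", "pleased",
]

def precision_recall_f1(pred: set, gold: set) -> tuple:
    # P: fraction of predicted labels that are correct; R: fraction of
    # ground-truth labels recovered; F1: harmonic mean of the two.
    tp = len(pred & gold)
    p = tp / len(pred) if pred else 0.0
    r = tp / len(gold) if gold else 0.0
    f1 = 2 * p * r / (p + r) if p + r > 0 else 0.0
    return p, r, f1

def emotion_distance(cat_a: str, r_a: float, cat_b: str, r_b: float) -> float:
    # Euclidean distance between two polar points: category index -> angle,
    # intensity -> radius (law of cosines).
    th_a = 2 * math.pi * CATEGORIES.index(cat_a) / len(CATEGORIES)
    th_b = 2 * math.pi * CATEGORIES.index(cat_b) / len(CATEGORIES)
    return math.sqrt(r_a**2 + r_b**2 - 2 * r_a * r_b * math.cos(th_a - th_b))

def error_rate(outputs: list) -> float:
    # Fraction of outputs that are ill-formatted or outside the predefined
    # category set / 0-1 intensity grid in 0.1 steps.
    def ok(o) -> bool:
        return (isinstance(o, tuple) and len(o) == 2
                and o[0] in CATEGORIES
                and isinstance(o[1], float)
                and 0.0 <= o[1] <= 1.0
                and abs(o[1] * 10 - round(o[1] * 10)) < 1e-9)
    return sum(not ok(o) for o in outputs) / len(outputs) if outputs else 0.0
```

Under this layout, identical labels have De = 0, and diametrically opposed categories at full intensity have De = 2, the maximum.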

## Results

The results are presented in the following table.

<img src='imgs/results.jpg'>