Jiaxing Yu committed
Commit 95c8a91 · Parent(s): 289f1df

add MVEmo dataset

README.md CHANGED
@@ -4,7 +4,6 @@ task_categories:
 - text-classification
 language:
 - en
-- zh
 tags:
 - multimodal
 - lyrics
@@ -14,6 +13,11 @@ tags:
 pretty_name: MVEmo Music Video Emotion Dataset
 size_categories:
 - 10K<n<100K
+configs:
+- config_name: default
+  data_files:
+  - split: train
+    path: "mvemo_dataset.jsonl"
 ---
 # MVEmo
 
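With the `configs` block added in this hunk, the `default` config points the `datasets` library at `mvemo_dataset.jsonl` for the `train` split. A minimal loading sketch; the Hub repo id is not shown on this page, so the example reads the JSONL directly, and the record fields are whatever the file defines:

```python
from datasets import load_dataset

# Read the file that data_files declares for the "train" split.
# With the Hub repo id one would call load_dataset("<owner>/MVEmo") instead.
ds = load_dataset("json", data_files="mvemo_dataset.jsonl", split="train")
print(len(ds), ds[0])  # field names come from the JSONL records themselves
```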
@@ -25,41 +29,33 @@ This is the dataset repository for the paper: Bridging Categorical and Dimension
 
 MVEmo is a large-scale multimodal dataset that consists of 11,764 music video samples with both static and dynamic emotion annotations for music-related emotion recognition (MRER). It has the following key features:
 
-- **Basic Information:** title, artist,
+- **Basic Information:** title, artist, genres, language, and YouTube ID.
 - **Different Modalities:** lyrics, video, and music.
 - **Rich Emotion Annotations:** static and dynamic emotion.
 
 ### Modality Details
 
 - **Lyrics:** The dataset contains 7,923 samples with lyrics and 3,841 samples without lyrics. We query online lyric databases using the song title and artist to retrieve official lyrics. If the lyrics are not retrievable, we employ an automated transcription method on the audio from the music videos, beginning with voice separation by Demucs, followed by multilingual speech recognition using Whisper.
-
-- **
-
-- **Music:** In our dataset, we provide both the audio music and symbolic music. For symbolic music, we employ a lead sheet-style representation that contains melodies, chords, and core music attributes including key, tempo, position, pitch, duration, and velocity.
+- **Video:** We download each video via its YouTube ID and filter out visually static videos. All remaining videos are transcoded into the MPEG-4 format and uniformly resampled to 30 fps to ensure consistency across the entire MVEmo dataset.
+- **Music:** We provide both audio and symbolic music. The audio is extracted from the downloaded video. For symbolic music, we employ a lead sheet-style representation that contains melodies, chords, and core music attributes including key, tempo, position, pitch, duration, and velocity.
 
 ### Emotion Details
 
 We first propose a unified emotion representation that consists of emotion category and emotion intensity.
 
 - **Emotion category:** It refers to semantic labels that describe the qualitative type of perceived emotional experience. We define emotion category as a fixed set of 28 discrete emotion words from Russell’s model: happy, delighted, excited, astonished, aroused, tense, alarmed, angry, afraid, annoyed, distressed, frustrated, miserable, sad, gloomy, depressed, bored, droopy, tired, sleepy, calm, relaxed, satisfied, at ease, content, serene, glad, and pleased.
-
-
-ceived certainty in the emotional experience. We measure emotion intensity on a normalized continuous scale ranging from 0 to 1, with increments of 0.1.
+- **Emotion intensity:** It denotes the quantitative degree of perceived certainty in the emotional experience. We measure emotion intensity on a normalized continuous scale ranging from 0 to 1, with increments of 0.1.
 
 Based on the unified emotion representation, we performed static and dynamic annotations on the MVEmo dataset.
 
 - **Static emotion:** It is labeled for each sample, with a total of 11,764 pairs of emotion categories and intensities.
-
 - **Dynamic emotion:** It is labeled every 0.5 seconds starting from 2.5 seconds, with a total of 5,673,670 pairs of emotion categories and intensities.
 
-
 # MVEmo-Bench
 
 We also introduce MVEmo-Bench, a comprehensive evaluation benchmark that covers a variety of music-related emotion recognition tasks.
 
-- **Unimodal Emotion Recognition:** It focuses on predicting emotions from a single type of music-related input, including lyrics, videos, and music. In MVEmo-Bench, this task is
-further extended to include both static emotion recognition and dynamic emotion recognition.
-
+- **Unimodal Emotion Recognition:** It focuses on predicting emotions from a single type of music-related input, including lyrics, videos, and music. In MVEmo-Bench, this task is further extended to include both static emotion recognition and dynamic emotion recognition.
 - **Multimodal Emotion Recognition:** It aims to predict emotions by simultaneously analyzing multiple music-related modalities.
 
 ## Baselines
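The **Lyrics** bullet above describes a two-stage fallback when official lyrics cannot be retrieved: Demucs for vocal separation, then Whisper for multilingual transcription. A minimal sketch of that pipeline, assuming the `demucs` CLI and the `openai-whisper` package; the model choice and output path below are assumptions, not the authors' stated configuration:

```python
import subprocess
import whisper  # pip install openai-whisper

def transcribe_fallback(audio_path: str) -> str:
    """Approximate the README's fallback: separate vocals, then transcribe."""
    # 1) Vocal separation; --two-stems keeps only vocals vs. accompaniment.
    subprocess.run(
        ["demucs", "--two-stems", "vocals", "-o", "separated", audio_path],
        check=True,
    )
    # Demucs writes separated/<model>/<track>/vocals.wav; this path is illustrative.
    vocals_path = "separated/htdemucs/track/vocals.wav"

    # 2) Multilingual speech recognition; Whisper auto-detects the language.
    model = whisper.load_model("large")  # model size is an assumption
    return model.transcribe(vocals_path)["text"]
```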
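The **Video** bullet's normalization (MPEG-4 container, uniform 30 fps) maps onto a standard transcoding step. A sketch using `yt-dlp` and `ffmpeg` as stand-ins for the unnamed download and transcoding tools; the static-video filter is omitted because its criterion is not described:

```python
import subprocess

def fetch_and_normalize(youtube_id: str, out_path: str) -> None:
    """Download a music video by its YouTube ID, then re-encode to MP4 at 30 fps."""
    url = f"https://www.youtube.com/watch?v={youtube_id}"
    subprocess.run(["yt-dlp", "-f", "mp4", "-o", "raw.mp4", url], check=True)
    # -r 30 resamples the frame rate; libx264 produces H.264 in an .mp4 container.
    subprocess.run(
        ["ffmpeg", "-y", "-i", "raw.mp4", "-r", "30",
         "-c:v", "libx264", "-c:a", "copy", out_path],
        check=True,
    )
```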
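The **Music** bullet lists the fields of the lead sheet-style representation (key, tempo, position, pitch, duration, velocity) without showing how they are serialized. One plausible container for such events, purely as an illustration of the schema:

```python
from dataclasses import dataclass

@dataclass
class LeadSheetEvent:
    """One symbolic-music event; the field set mirrors the README's attribute list."""
    key: str         # e.g. "C major"
    tempo: float     # beats per minute
    position: float  # metrical position of the event (encoding is an assumption)
    pitch: int       # MIDI pitch number
    duration: float  # length in beats
    velocity: int    # MIDI velocity, 0-127

# A single melody note under this scheme; chords would carry their own events.
note = LeadSheetEvent("C major", 120.0, 0.0, 60, 1.0, 90)
```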
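The emotion scheme above is fully constrained: a closed set of 28 categories, intensities on a 0-1 grid with step 0.1, and dynamic labels every 0.5 seconds starting at 2.5 seconds. A small sketch that encodes those constraints; the names are illustrative:

```python
# The 28 Russell-model words quoted in the README.
EMOTION_CATEGORIES = (
    "happy", "delighted", "excited", "astonished", "aroused", "tense",
    "alarmed", "angry", "afraid", "annoyed", "distressed", "frustrated",
    "miserable", "sad", "gloomy", "depressed", "bored", "droopy", "tired",
    "sleepy", "calm", "relaxed", "satisfied", "at ease", "content",
    "serene", "glad", "pleased",
)
assert len(EMOTION_CATEGORIES) == 28

# Intensity grid: 0.0, 0.1, ..., 1.0.
INTENSITIES = [round(i / 10, 1) for i in range(11)]

def dynamic_timestamps(clip_seconds: float) -> list[float]:
    """Annotation instants: every 0.5 s, starting from 2.5 s."""
    n = int((clip_seconds - 2.5) // 0.5) + 1
    return [round(2.5 + 0.5 * k, 1) for k in range(max(n, 0))]

# A 60-second clip is annotated at 2.5, 3.0, ..., 60.0 -> 116 instants.
assert len(dynamic_timestamps(60.0)) == 116
```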
@@ -100,10 +96,7 @@ further extended to include both static emotion recognition and dynamic emotion
 ## Evaluation Metrics
 
 - Precision (P), Recall (R), and F1 Score: P represents the proportion of correctly predicted labels, and R measures the model’s ability to capture all ground-truth labels. The F1 Score, defined as the harmonic mean of P and R, offers a balanced evaluation of both the accuracy and the completeness of the model’s predictions.
-
-- Emotion Distance (De): In addition to standard classification metrics, we introduce Emotion Distance (De), to quantify the difference between the model’s predictions and the original annotations in terms of emotion alignment. It calculates the Euclidean distance between the original and predicted emotion labels in the polar coordinate system using
-our representation.
-
+- Emotion Distance (De): In addition to standard classification metrics, we introduce Emotion Distance (De) to quantify the difference between the model’s predictions and the original annotations in terms of emotion alignment. It calculates the Euclidean distance between the original and predicted emotion labels in the polar coordinate system using our representation.
 - Error Rate (ER): Given the characteristics of natural language outputs, we introduce a new metric, Error Rate (ER), to quantify the proportion of outputs that are either ill-formatted or non-compliant with the predefined sets.
 
 ## Results
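Emotion Distance (De) above is a Euclidean distance between labels placed in a polar coordinate system, with the unified representation supplying the coordinates. A sketch of one natural reading, where intensity acts as the radius and the 28 categories map to evenly spaced angles on the circumplex; the even spacing is an assumption, since the exact angle assignment is not given on this page:

```python
import math

def emotion_distance(cat_true: int, r_true: float,
                     cat_pred: int, r_pred: float,
                     n_categories: int = 28) -> float:
    """Euclidean distance between two (category, intensity) labels in polar form.

    Categories are indices into the fixed 28-word list, mapped to evenly
    spaced angles (an assumption); intensity in [0, 1] is the radius.
    """
    def to_xy(cat: int, r: float) -> tuple[float, float]:
        theta = 2 * math.pi * cat / n_categories
        return r * math.cos(theta), r * math.sin(theta)

    x1, y1 = to_xy(cat_true, r_true)
    x2, y2 = to_xy(cat_pred, r_pred)
    return math.hypot(x2 - x1, y2 - y1)

assert emotion_distance(3, 0.8, 3, 0.8) == 0.0  # identical labels
print(emotion_distance(0, 1.0, 1, 1.0))         # ~0.224: adjacent categories, radius 1
```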