---
license: gpl-3.0
language:
  - en
tags:
  - emotion
  - speech
  - facial
  - text
  - semantic
  - multimodal
size_categories:
  - 1K<n<10K
---

# Hi, I’m Seniru Epasinghe 👋

I’m an AI undergraduate and enthusiast working on machine learning projects and open-source contributions. I enjoy exploring AI pipelines, natural language processing, and building tools that make development easier.

## 🌐 Connect with me

Hugging Face · Medium · LinkedIn · GitHub

# Multimodal Emotion Recognition Dataset (Processed from MELD)

This dataset is a preprocessed and balanced version of the MELD Dataset, designed for multimodal emotion recognition research.
It combines text, audio, and video modalities, with each modality represented by an emotion probability distribution over seven classes, predicted by a pretrained or custom-trained model.

## Overview

| Feature | Description |
|---|---|
| Total Samples | 4,000 utterances |
| Modalities | Text, Audio, Video |
| Balanced Emotions | Each emotion class is approximately balanced |
| Cleaned Samples | Videos with unclear or no facial detection removed |
| Emotion Labels | `['angry', 'disgust', 'fear', 'happy', 'neutral', 'sad', 'surprise']` |

Each row in the dataset corresponds to a single utterance and includes its emotion label, video file name, and the predicted emotion probabilities for each modality.

## Example Entry

| Utterance | Emotion | File_Name | MultiModel Predictions |
|---|---|---|---|
| You are going to a clinic! | disgust | dia127_utt3.mp4 | `{"video": [0.7739, 0.0, 0.0, 0.0783, 0.1217, 0.0174, 0.0087], "audio": [0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0], "text": [0.0005, 0.0, 0.0, 0.0007, 0.998, 0.0004, 0.0004]}` |

**Column Description:**

- **Utterance**: the spoken text of the conversational turn.
- **Emotion**: the gold-standard emotion label.
- **File_Name**: the corresponding utterance-level video file.
- **MultiModel Predictions**: a JSON object containing the model-predicted emotion probability vector for each modality.
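
For example, the predictions column can be parsed back into per-modality vectors with pandas and the standard `json` module. This is a minimal sketch: the CSV file name is an assumption, and the column name is taken from the example entry above.

```python
import json
import pandas as pd

# The file name here is an assumption; point it at the downloaded CSV.
df = pd.read_csv("meld_multimodal_emotions.csv")

# Each cell in the predictions column is a JSON string holding one
# 7-element probability vector per modality.
df["parsed"] = df["MultiModel Predictions"].apply(json.loads)

row = df.iloc[0]
print(row["Utterance"], row["Emotion"])
print(row["parsed"]["video"])  # e.g. [0.7739, 0.0, 0.0, 0.0783, 0.1217, 0.0174, 0.0087]
```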

## Modality Emotion Extraction

Each modality’s emotion vector was generated independently using specialized models:

| Modality | Model / Method | Description |
|---|---|---|
| Video | python-fer | Facial expression recognition using a CNN-based FER library. |
| Audio | Custom-trained CNN model | Trained on Mel spectrogram features for emotion classification. |
| Text | arpanghoshal/EmoRoBERTa | Transformer-based text emotion model fine-tuned on the GoEmotions dataset. |
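
As an illustration, a video-modality vector could be produced along these lines with the `fer` library and OpenCV. This is a hedged sketch, not the exact pipeline used for the dataset: the frame-sampling rate, the first-face choice, and the renormalization step are all assumptions.

```python
import cv2
import numpy as np
from fer import FER

# Order matches the dataset's emotion label list.
EMOTIONS = ["angry", "disgust", "fear", "happy", "neutral", "sad", "surprise"]

detector = FER(mtcnn=True)  # MTCNN face detector instead of the default Haar cascade

def video_emotion_vector(path, every_nth=10):
    """Average per-frame facial emotion scores into one probability vector."""
    cap = cv2.VideoCapture(path)
    scores, idx = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % every_nth == 0:
            faces = detector.detect_emotions(frame)
            if faces:  # use the first detected face (assumption)
                em = faces[0]["emotions"]
                scores.append([em[e] for e in EMOTIONS])
        idx += 1
    cap.release()
    if not scores:
        return None  # no usable face; such clips were removed from this dataset
    vec = np.mean(scores, axis=0)
    return vec / vec.sum()  # renormalize to a probability distribution

print(video_emotion_vector("dia127_utt3.mp4"))
```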

## Format and Usage

- **File format:** CSV
- **Recommended columns:**
  - `Utterance`
  - `Emotion`
  - `File_Name`
  - `Final_Emotion` (JSON: `{ "video": [...], "audio": [...], "text": [...] }`)

This dataset is ideal for:

- Fusion model training (see the late-fusion sketch below)
- Fine-tuning multimodal emotion models
- Benchmarking emotion fusion strategies
- Ablation studies on modality importance
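
As a minimal starting point for fusion experiments, a late-fusion baseline can weight and average the three modality vectors and take the argmax. This is only a sketch: the weights are arbitrary assumptions, and the input dict mirrors the parsed predictions column shown earlier.

```python
import numpy as np

# Label order assumed to match the dataset's emotion list.
EMOTIONS = ["angry", "disgust", "fear", "happy", "neutral", "sad", "surprise"]

def late_fusion(preds, weights=(0.4, 0.3, 0.3)):
    """Weighted average of per-modality probability vectors, then argmax."""
    fused = (
        weights[0] * np.asarray(preds["video"])
        + weights[1] * np.asarray(preds["audio"])
        + weights[2] * np.asarray(preds["text"])
    )
    return EMOTIONS[int(np.argmax(fused))]

# The example entry from above.
example = {
    "video": [0.7739, 0.0, 0.0, 0.0783, 0.1217, 0.0174, 0.0087],
    "audio": [0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0],
    "text":  [0.0005, 0.0, 0.0, 0.0007, 0.998, 0.0004, 0.0004],
}
print(late_fusion(example))  # fused prediction for the example entry
```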

## Citation

References for the original MELD Dataset:

- Poria, S., Hazarika, D., Majumder, N., Naik, G., Mihalcea, R. and Cambria, E. MELD: A Multimodal Multi-Party Dataset for Emotion Recognition in Conversations. arXiv preprint arXiv:1810.02508 (2018).
- Chen, S.Y., Hsu, C.C., Kuo, C.C. and Ku, L.W. EmotionLines: An Emotion Corpus of Multi-Party Conversations. arXiv preprint arXiv:1802.08379 (2018).

## License & Acknowledgments

This dataset is a derivative work of MELD, used here for research and educational purposes.
All credit for the original dataset goes to the MELD authors and contributors.