---
license: gpl-3.0
---

# Hi, I’m Seniru Epasinghe 👋

I’m an AI undergraduate and enthusiast working on machine learning projects and open-source contributions.
I enjoy exploring AI pipelines, natural language processing, and building tools that make development easier.

## 🌐 Connect with me

[Hugging Face](https://huggingface.co/seniruk)
[Medium](https://medium.com/@senirukepasinghe)
[LinkedIn](https://www.linkedin.com/in/seniru-epasinghe-b34b86232/)
[GitHub](https://github.com/seth2k2)

---

# Multimodal Emotion Recognition Dataset (Processed from MELD)

This dataset is a **preprocessed and balanced version** of the [MELD Dataset](https://www.kaggle.com/datasets/zaber666/meld-dataset), designed for **multimodal emotion recognition research**.
It combines **text, audio, and video modalities**, each represented by a set of **emotion probability distributions** predicted by pretrained or custom-trained models.

---

## Overview

| Feature | Description |
|----------|--------------|
| **Total Samples** | 4,000 utterances |
| **Modalities** | Text, Audio, Video |
| **Balanced Emotions** | Each emotion class is approximately balanced |
| **Cleaned Samples** | Videos with unclear or no facial detection removed |
| **Emotion Labels** | `['angry', 'disgust', 'fear', 'happy', 'neutral', 'sad', 'surprise']` |

Each row in the dataset corresponds to a single utterance, along with its emotion label, file name, and predicted emotion probabilities per modality.

---

## Example Entry

| Utterance | Emotion | File_Name | MultiModel Predictions |
|------------|----------|------------|----------------|
| You are going to a clinic! | disgust | dia127_utt3.mp4 | {"video": [0.7739, 0.0, 0.0, 0.0783, 0.1217, 0.0174, 0.0087], "audio": [0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0], "text": [0.0005, 0.0, 0.0, 0.0007, 0.998, 0.0004, 0.0004]} |

### Column Descriptions

- **Utterance** — the spoken text of the conversational turn.
- **Emotion** — the gold-standard emotion label.
- **File_Name** — the corresponding utterance-level video file.
- **MultiModel Predictions** — a JSON object containing model-predicted emotion probability vectors for each modality.
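
To make the structure concrete, here is a minimal sketch that decodes one prediction entry, assuming each vector follows the label order given in the Overview table (the ordering is not stated explicitly, so treat it as an assumption):

```python
import json

# Assumed label order, matching the Overview table above.
LABELS = ['angry', 'disgust', 'fear', 'happy', 'neutral', 'sad', 'surprise']

entry = json.loads(
    '{"video": [0.7739, 0.0, 0.0, 0.0783, 0.1217, 0.0174, 0.0087],'
    ' "audio": [0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0],'
    ' "text": [0.0005, 0.0, 0.0, 0.0007, 0.998, 0.0004, 0.0004]}'
)

for modality, probs in entry.items():
    top = max(range(len(probs)), key=probs.__getitem__)
    print(f"{modality}: {LABELS[top]} ({probs[top]:.4f})")
```

Under that assumed ordering, the three modalities disagree on this sample (video leans *angry*, audio *disgust*, text *neutral*), which is precisely the kind of conflict fusion models are meant to resolve.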

---

## Modality Emotion Extraction

Each modality’s emotion vector was generated independently using specialized models:

| Modality | Model / Method | Description |
|-----------|----------------|--------------|
| **Video** | [`python-fer`](https://github.com/justinshenk/fer) | Facial expression recognition using a CNN-based FER library. |
| **Audio** | [`Custom-trained CNN model`](https://medium.com/@senirukepasinghe/speech-emotion-recognition-with-cnn-8e3c2cbc8375) | Trained on Mel spectrogram features for emotion classification. |
| **Text** | [`arpanghoshal/EmoRoBERTa`](https://huggingface.co/arpanghoshal/EmoRoBERTa) | Transformer-based text emotion model fine-tuned on the GoEmotions dataset. |
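
For illustration, here is a sketch of what per-frame extraction with the `fer` library could look like; the exact preprocessing used to build this dataset is not documented here, and `example_frame.jpg` is a hypothetical input:

```python
import cv2
from fer import FER  # pip install fer

# MTCNN face detection plus a CNN emotion classifier.
detector = FER(mtcnn=True)

# Hypothetical single frame; the real pipeline would iterate over video frames
# and aggregate per-frame probabilities into one vector per utterance.
frame = cv2.imread("example_frame.jpg")

faces = detector.detect_emotions(frame)
if faces:
    # A dict mapping FER's seven emotion labels to probabilities (first face).
    print(faces[0]["emotions"])
```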

---

## Format and Usage

- File format: **CSV**
- Recommended columns (see the loading sketch below):
  - `Utterance`
  - `Emotion`
  - `File_Name`
  - `Final_Emotion` (JSON: `{ "video": [...], "audio": [...], "text": [...] }`)
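
A minimal loading sketch, assuming the CSV is named `meld_processed.csv` (a hypothetical file name; substitute the actual file in this repo):

```python
import json

import pandas as pd

# Hypothetical file name; replace with the actual CSV path.
df = pd.read_csv("meld_processed.csv")

# Each row stores its per-modality probability vectors as a JSON string.
df["Final_Emotion"] = df["Final_Emotion"].apply(json.loads)

row = df.iloc[0]
print(row["Utterance"], "->", row["Emotion"])
print("text probabilities:", row["Final_Emotion"]["text"])
```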

This dataset is ideal for:

- **Fusion model training**
- **Fine-tuning multimodal emotion models**
- **Benchmarking emotion fusion strategies** (a baseline sketch follows this list)
- **Ablation studies on modality importance**
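
As one example of a fusion baseline, here is a simple late-fusion sketch that averages the three modality vectors (again assuming the Overview table's label order; an illustrative baseline, not the method used to build the dataset):

```python
import numpy as np

# Assumed label order, matching the Overview table.
LABELS = ['angry', 'disgust', 'fear', 'happy', 'neutral', 'sad', 'surprise']

def late_fusion(pred: dict, weights=(1.0, 1.0, 1.0)) -> str:
    """Weighted average of the video/audio/text probability vectors."""
    stacked = np.stack([pred["video"], pred["audio"], pred["text"]])
    fused = np.average(stacked, axis=0, weights=weights)
    return LABELS[int(fused.argmax())]

# The example entry from above:
pred = {
    "video": [0.7739, 0.0, 0.0, 0.0783, 0.1217, 0.0174, 0.0087],
    "audio": [0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0],
    "text":  [0.0005, 0.0, 0.0, 0.0007, 0.998, 0.0004, 0.0004],
}
print(late_fusion(pred))  # 'neutral' with equal weights; the gold label is 'disgust'
```

That equal-weight averaging misses the gold label on this sample is a good illustration of why weighting schemes and learned fusion models are worth benchmarking.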

---

## Citation

References for the original MELD dataset:

- Poria, S., Hazarika, D., Majumder, N., Naik, G., Mihalcea, R. and Cambria, E. MELD: A Multimodal Multi-Party Dataset for Emotion Recognition in Conversations. arXiv preprint arXiv:1810.02508 (2018).
- Chen, S. Y., Hsu, C. C., Kuo, C. C. and Ku, L. W. EmotionLines: An Emotion Corpus of Multi-Party Conversations. arXiv preprint arXiv:1802.08379 (2018).

---

## License & Acknowledgments

This dataset is a **derivative work** of MELD, used here for research and educational purposes.
All credit for the original dataset goes to the **MELD authors** and contributors.
|