---
dataset_info:
  features:
    - name: audio
      dtype:
        audio:
          sampling_rate: 16000
    - name: speaker_id
      dtype: string
    - name: gender
      dtype: string
    - name: emotion
      dtype: string
    - name: transcript
      dtype: string
    - name: ipa
      dtype: string
  splits:
    - name: train
      num_bytes: 1116848970
      num_examples: 3000
  download_size: 1043404503
  dataset_size: 1116848970
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
license: apache-2.0
task_categories:
  - audio-classification
  - automatic-speech-recognition
language:
  - fa
tags:
  - ser
  - speech-emotion-recognition
  - asr
  - farsi
  - persian
pretty_name: Modified ShEMO
---
# Dataset Card for Modified ShEMO
## Dataset Summary
This dataset is a corrected and modified version of the **Sharif Emotional Speech Database (ShEMO)**, released as [**modified_shemo**](https://github.com/aliyzd95/modified-shemo). The original dataset contained significant mismatches between audio files and their transcriptions. This version resolves those issues, yielding a cleaner and more reliable resource for Persian Speech Emotion Recognition (SER) and Automatic Speech Recognition (ASR).
### Curation and Correction
The original ShEMO dataset suffered from incorrectly named transcription files, leading to a high baseline Word Error Rate (WER). We addressed this by:
1. Using an ASR system with a 4-gram language model to identify and re-align mismatched audio-text pairs.
2. Correcting 347 files that had high error rates.
This modification process significantly improved the dataset's quality, **reducing the overall WER from 51.97% to 30.79%**. More details are available at this [GitHub repository](https://github.com/aliyzd95/modified-shemo) and in this [paper](https://www.researchgate.net/publication/365613775_A_Persian_ASR-based_SER_Modification_of_Sharif_Emotional_Speech_Database_and_Investigation_of_Persian_Text_Corpora).
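The WER figures above can be reproduced with a standard word-level edit-distance computation. The sketch below is a minimal pure-Python version for illustration; the authors' actual pipeline (ASR decoding with a 4-gram language model) is not shown, and the function name is ours, not theirs.

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word Error Rate: word-level Levenshtein distance over reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic-programming table: d[i][j] = edit distance between
    # the first i reference words and the first j hypothesis words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[len(ref)][len(hyp)] / len(ref)

# One substitution out of four reference words -> 25% WER.
print(wer("in sentence four words", "in sentence for words"))  # 0.25
```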
## Dataset Statistics
Below are the detailed statistics for this dataset, calculated using an analysis script.
### Emotion Distribution
```text
==================================================
Emotion Distribution
==================================================
Emotion Percentage (%)
neutral 38.66
happy 06.76
sad 12.03
angry 34.73
surprise 06.73
fear 01.06
--------------------------------------------------
```
### Overall Statistics
```text
============================================================
Overall Statistics for Modified SHEMO Dataset
============================================================
📊 Total Number of Files: 3000
⏰ Total Dataset Duration: 3.43 hours
🗣️ Total Number of Speakers: 87
------------------------------------------------------------
```
### Gender-Emotion Breakdown
```text
==================================================
Gender-Emotion Breakdown
==================================================
gender female male
emotion
anger 449 593
fear 20 12
happiness 113 90
neutral 346 814
sadness 224 137
surprise 111 91
--------------------------------------------------
```
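Statistics like the emotion distribution above can be recomputed from the dataset's `emotion` column. A minimal sketch with `collections.Counter`, shown on a toy label list (with the real dataset, pass `dataset["train"]["emotion"]` instead):

```python
from collections import Counter

def emotion_distribution(labels):
    """Return {emotion: percentage of total} for a list of emotion labels."""
    counts = Counter(labels)
    total = len(labels)
    return {emo: round(100 * n / total, 2) for emo, n in counts.items()}

# Toy labels for illustration only.
labels = ["neutral", "neutral", "angry", "sad"]
print(emotion_distribution(labels))  # {'neutral': 50.0, 'angry': 25.0, 'sad': 25.0}
```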
## Supported Tasks
* **Speech Emotion Recognition:** The dataset is ideal for training models to recognize various emotions from speech. The `emotion` column is used for this task.
* **Automatic Speech Recognition:** With precise transcriptions available in the `transcript` column, the dataset can be used to train ASR models for the Persian language.
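For SER training, the six `emotion` strings typically need to be mapped to integer class ids. A minimal sketch (the sorted label order is an arbitrary choice, not something the dataset prescribes):

```python
# Fixed label set from the dataset card; sorted order is an arbitrary convention.
EMOTIONS = sorted(["angry", "fear", "happy", "sad", "surprise", "neutral"])
label2id = {label: i for i, label in enumerate(EMOTIONS)}
id2label = {i: label for label, i in label2id.items()}

print(label2id)
# {'angry': 0, 'fear': 1, 'happy': 2, 'neutral': 3, 'sad': 4, 'surprise': 5}
```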
## Languages
The primary language of this dataset is **Persian (Farsi)**.
## Dataset Structure
### Data Instances
An example from the dataset looks as follows:
```python
{
"audio": {
"path": "/path/to/F21N05.wav",
"array": ...,
"sampling_rate": 16000
},
"speaker_id": "F21",
"gender": "female",
"emotion": "neutral",
"transcript": "مگه من به تو نگفته بودم که باید راجع به دورانت سکوت کنی؟",
"ipa": "mæge mæn be to nægofte budæm ke bɑyæd rɑdʒeʔ be dorɑnt sokut koni"
}
```
### Data Fields
* `audio`: A dictionary containing the decoded audio array, the file path, and the sampling rate (16 kHz in this dataset).
* `speaker_id`: A unique identifier for each speaker (e.g., `F01` or `M23`).
* `gender`: The gender of the speaker (`female` or `male`).
* `emotion`: The emotion label for each audio file (`angry`, `fear`, `happy`, `sad`, `surprise`, `neutral`).
* `transcript`: The precise orthographic transcription of the utterance in Persian.
* `ipa`: The phonetic transcription of the utterance according to the IPA standard.
### Data Splits
The dataset does not have predefined `train`, `validation`, and `test` splits by default. Users can easily create their own splits using the `.train_test_split()` method from the `datasets` library.
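With the `datasets` library this is simply `dataset["train"].train_test_split(test_size=0.2)`. The stdlib-only sketch below shows the underlying idea, a shuffled index split; the seed and test ratio are illustrative choices:

```python
import random

def train_test_split_indices(n, test_size=0.2, seed=42):
    """Shuffle indices 0..n-1 and split them into train/test lists."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    cut = int(n * (1 - test_size))
    return idx[:cut], idx[cut:]

# 3000 examples -> 2400 train, 600 test with test_size=0.2.
train_idx, test_idx = train_test_split_indices(3000, test_size=0.2, seed=42)
print(len(train_idx), len(test_idx))  # 2400 600
```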
## How to Use
To load the dataset, use the `datasets` library:
```python
from datasets import load_dataset
dataset = load_dataset("aliyzd95/modified_shemo")
print(dataset["train"][0])
```
## Additional Information
### Citation Information
If you use this dataset in your research, please cite the paper describing the modification:
```bibtex
@misc{yazdani2022persian,
  doi       = {10.48550/ARXIV.2211.09956},
  url       = {https://arxiv.org/abs/2211.09956},
  author    = {Yazdani, Ali and Shekofteh, Yasser},
  keywords  = {Audio and Speech Processing (eess.AS), Artificial Intelligence (cs.AI), Sound (cs.SD), I.2, 68T10 (Primary), 68T50, 68T07 (Secondary)},
  title     = {A Persian ASR-based SER: Modification of Sharif Emotional Speech Database and Investigation of Persian Text Corpora},
  publisher = {arXiv},
  year      = {2022},
  copyright = {arXiv.org perpetual, non-exclusive license}
}
```