# Moody Girl: Emotion-Tagged DailyTalk Dataset
This dataset is an emotion-tagged version of the DailyTalk dataset, augmented with per-segment emotion labels and prosodic features for text-to-speech (TTS) and emotion-recognition tasks.
## Dataset Description
This dataset contains emotion-labeled speech segments with corresponding text transcriptions. Each segment is tagged with an emotion label and includes comprehensive acoustic features extracted from the audio.
### Data Source
Based on DailyTalkContiguous, which uses stereo recordings in which the two speakers of each conversation occupy the left and right channels, respectively.
### Data Generation Process
**Audio Segmentation**: Audio files were segmented using the word-level timestamps from the original DailyTalk dataset.
**Feature Extraction**: For each audio segment, the following features were extracted:
- **VAD Features**: Arousal, dominance, and valence scores
- **Prosodic Features**: Energy, speech rate, zero-crossing rate, pitch variation
- **Audio Features**: RMS energy, fundamental frequency (f0), speech rate
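The exact extraction pipeline is not published, but two of the simpler features above (RMS energy and zero-crossing rate) can be computed directly from the waveform, and speech rate from the transcript word count. The following is a minimal numpy sketch under those assumptions; the formulas are illustrative, not the dataset's actual code:

```python
import numpy as np

def extract_features(signal: np.ndarray, n_words: int, duration: float) -> dict:
    """Compute a few of the per-segment features described above.

    Illustrative sketch only: the dataset's real extraction pipeline
    is not documented, so these formulas are assumptions.
    """
    # RMS energy: overall loudness of the segment
    rms = float(np.sqrt(np.mean(signal ** 2)))
    # Zero-crossing rate: fraction of adjacent samples whose sign flips
    zcr = float(np.mean(np.abs(np.diff(np.sign(signal))) > 0))
    # Speech rate: words per second, taken from the transcript word count
    speech_rate = n_words / duration
    return {"rms": round(rms, 3), "zcr": round(zcr, 3), "speech_rate": round(speech_rate, 2)}

# Example on a synthetic one-second 150 Hz tone sampled at 16 kHz
t = np.linspace(0, 1, 16000, endpoint=False)
feats = extract_features(0.1 * np.sin(2 * np.pi * 150 * t), n_words=5, duration=1.0)
```

Pitch statistics (`f0_mean`, `f0_std`) require an actual pitch tracker (e.g. pYIN or CREPE) and are omitted here.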
**Emotion Tagging**: Each segment was tagged with an emotion label based on the combination of VAD features and prosodic characteristics. Common tags include:
- depressed
- shouting
- whispering
- soft tone
- worried
- calm
- sad
- And more...
**Categorization**: VAD and prosodic features were bucketed into LOW/MID/HIGH categories for easier analysis.
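Such bucketing amounts to mapping each continuous score to one of three bands via two cut points. The card does not state the thresholds used, so the values below are placeholders; only the LOW/MID/HIGH scheme itself comes from the dataset:

```python
def bucket(value: float, low: float = 0.33, high: float = 0.66) -> str:
    """Map a continuous score to LOW/MID/HIGH using two cut points.

    The actual thresholds for this dataset are undocumented; 0.33/0.66
    are illustrative defaults, not the real values.
    """
    if value < low:
        return "LOW"
    if value < high:
        return "MID"
    return "HIGH"

# Bucketing the VAD scores from the example segment shown below
vad = {"arousal": 0.002, "dominance": 0.0, "valence": 0.0}
vad_bucket = {dim: bucket(score) for dim, score in vad.items()}
# All three scores fall below the lower cut point, i.e. "LOW"
```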
## Dataset Structure
```
.
├── DailyTalkContiguous/
│   ├── data_stereo/
│   │   ├── 0.wav, 1.wav, ...
│   │   └── 0.json, 1.json, ...
│   └── dailytalk.jsonl
└── transcript.jsonl
```
### `transcript.jsonl` Format
Each line in `transcript.jsonl` contains a JSON object with the following fields:

- `segment_id`: Unique identifier for the segment
- `audio_file`: Source audio file name
- `channel`: Left or right channel in the stereo recording
- `start`: Start time in seconds
- `end`: End time in seconds
- `duration`: Segment duration in seconds
- `vad`: VAD features (arousal, dominance, valence)
- `features`: Audio features (rms, zcr, f0_mean, f0_std, speech_rate)
- `text_raw`: Original transcript text
- `text`: Processed transcript text
- `audio_path`: Full path to the audio file
- `vad_bucket`: Bucketed VAD categories
- `prosody_bucket`: Bucketed prosodic categories
- `tag`: Emotion label
- `tagged_text`: Text with the emotion tag prepended
Example:

```json
{
  "segment_id": 0,
  "audio_file": "0.wav",
  "channel": "left",
  "start": 1.634,
  "end": 5.63,
  "duration": 3.996,
  "vad": {
    "arousal": 0.002,
    "dominance": 0.0,
    "valence": 0.0
  },
  "features": {
    "rms": 0.106,
    "zcr": 0.087,
    "f0_mean": 152.55,
    "f0_std": 46.69,
    "speech_rate": 4.87
  },
  "text_raw": "I'm figuring out all of my budgets.",
  "text": "I'm figuring out all of my budgets.",
  "audio_path": "DailyTalkContiguous/data_stereo/0.wav",
  "vad_bucket": {
    "arousal": "LOW",
    "dominance": "LOW",
    "valence": "LOW"
  },
  "prosody_bucket": {
    "energy": "MID",
    "rate": "MID",
    "zcr": "LOW",
    "pitch_var": "HIGH"
  },
  "tag": "depressed",
  "tagged_text": "(depressed) I'm figuring out all of my budgets."
}
```
## Usage
```python
import json

# Load the transcript data
with open("transcript.jsonl", "r") as f:
    segments = [json.loads(line) for line in f]

# Access a segment
segment = segments[0]
print(f"Text: {segment['text']}")
print(f"Emotion: {segment['tag']}")
print(f"Audio file: {segment['audio_path']}")
print(f"Time range: {segment['start']:.2f}s - {segment['end']:.2f}s")
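Because each segment points into a shared stereo file via `channel`, `start`, and `end`, extracting a single speaker's audio means selecting one channel and slicing by time. A standard-library sketch of such a helper (not part of the dataset's tooling; the function name and the synthetic demo file are illustrative):

```python
import struct
import wave

def read_segment(path: str, channel: str, start: float, end: float) -> list:
    """Return mono int16 samples for one speaker's segment of a stereo WAV.

    Illustrative helper, not shipped with the dataset: picks the left or
    right channel and slices by start/end time in seconds.
    """
    with wave.open(path, "rb") as wav:
        assert wav.getnchannels() == 2, "expected stereo audio"
        sr = wav.getframerate()
        wav.setpos(int(start * sr))              # seek to the start frame
        n_frames = int((end - start) * sr)
        raw = wav.readframes(n_frames)
    samples = struct.unpack(f"<{2 * n_frames}h", raw)  # interleaved L/R int16
    offset = 0 if channel == "left" else 1
    return list(samples[offset::2])

# Demo on a tiny synthetic stereo file (left channel = 100, right = -100)
sr = 8000
frames = b"".join(struct.pack("<hh", 100, -100) for _ in range(sr))
with wave.open("demo_stereo.wav", "wb") as wav:
    wav.setnchannels(2)
    wav.setsampwidth(2)
    wav.setframerate(sr)
    wav.writeframes(frames)

left = read_segment("demo_stereo.wav", "left", 0.25, 0.5)
```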
## Use Cases
- **Emotion Recognition Training**: Train models to recognize emotions from speech
- **Emotion-Controllable TTS**: Generate speech with specific emotional characteristics
- **Prosody Analysis**: Study the relationship between emotion and speech prosody
- **Data Augmentation**: Use emotion tags for synthetic data generation
## License
This dataset inherits the CC-BY-SA 4.0 license from the original DailyTalk dataset.
## Citation
If you use this dataset, please cite the original DailyTalk dataset:
```bibtex
@dataset{dailytalk,
  title={DailyTalk: A High-Quality Multi-Turn Dialogue Corpus for Conversational Speech Synthesis},
  author={Lee, Keon and Yang, Hyeongseok and Park, Jiyoun and Choi, Seong-Hoon and Kim, Nam Soo},
  year={2021},
  publisher={GitHub},
  howpublished={\url{https://github.com/keonlee9420/DailyTalk}}
}
```
## Acknowledgments
This dataset is derived from the DailyTalk project by Keon Lee et al. The original dataset provides word-level timestamps for conversational speech, which made this emotion-tagging extension possible.