---
license: cc-by-sa-4.0
task_categories:
- audio-to-audio
- automatic-speech-recognition
language:
- en
tags:
- emotion
- speech
- TTS
- emotion-tagging
---
# Moody Girl: Emotion-Tagged DailyTalk Dataset

This dataset is an emotion-tagged version of the [DailyTalk](https://github.com/keonlee9420/DailyTalk) dataset, enhanced with emotion labels and prosodic features for text-to-speech (TTS) and emotion recognition tasks.
## Dataset Description

This dataset contains emotion-labeled speech segments with corresponding text transcriptions. Each segment is tagged with an emotion label and includes a comprehensive set of acoustic features extracted from the audio.
### Data Source

Based on [DailyTalkContiguous](https://github.com/keonlee9420/DailyTalk), which uses stereo recordings in which the two speakers of each conversation are placed on the left and right channels respectively.
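Because each speaker lives on one stereo channel, de-interleaving the channels recovers per-speaker audio. A minimal sketch using only the standard library, on a tiny synthetic in-memory WAV (the sample values are placeholders, not real data from this dataset):

```python
import io
import struct
import wave

# Build a tiny synthetic stereo WAV in memory: one speaker on the left
# channel, the other on the right (mirroring the DailyTalkContiguous layout).
left = [100, 200, 300, 400]
right = [-100, -200, -300, -400]

buf = io.BytesIO()
with wave.open(buf, "wb") as w:
    w.setnchannels(2)
    w.setsampwidth(2)      # 16-bit PCM
    w.setframerate(22050)
    frames = b"".join(struct.pack("<hh", a, b) for a, b in zip(left, right))
    w.writeframes(frames)

# Read it back and de-interleave into per-speaker channels.
buf.seek(0)
with wave.open(buf, "rb") as w:
    n = w.getnframes()
    raw = w.readframes(n)
samples = struct.unpack("<" + "h" * (2 * n), raw)
left_out = list(samples[0::2])   # speaker on the left channel
right_out = list(samples[1::2])  # speaker on the right channel
print(left_out, right_out)
```

For real files, replace the in-memory buffer with a path under `DailyTalkContiguous/data_stereo/`; libraries such as soundfile or librosa do the same de-interleaving in one call.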
### Data Generation Process

1. **Audio Segmentation**: Audio files were segmented using the word-level timestamps from the original DailyTalk dataset.
2. **Feature Extraction**: For each audio segment, the following features were extracted:
   - **VAD Features**: Arousal, dominance, and valence scores
   - **Prosodic Features**: Energy, speech rate, zero-crossing rate, pitch variation
   - **Audio Features**: RMS energy, fundamental frequency (f0), speech rate
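Two of these features, RMS energy and zero-crossing rate, follow standard definitions and can be computed directly from raw samples. A minimal sketch (the dataset does not document which library or frame sizes its pipeline actually used):

```python
import math

def rms(samples):
    """Root-mean-square energy of a list of PCM samples (any scale)."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def zero_crossing_rate(samples):
    """Fraction of consecutive sample pairs whose signs differ."""
    crossings = sum(
        1 for a, b in zip(samples, samples[1:]) if (a >= 0) != (b >= 0)
    )
    return crossings / (len(samples) - 1)

# Toy signal: alternating samples cross zero at every step.
signal = [0.5, -0.5, 0.5, -0.5]
print(rms(signal))                 # 0.5
print(zero_crossing_rate(signal))  # 1.0
```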
3. **Emotion Tagging**: Each segment was tagged with an emotion label based on the combination of its VAD features and prosodic characteristics. Common tags include:
   - depressed
   - shouting
   - whispering
   - soft tone
   - worried
   - calm
   - sad
   - and more
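The tagging rule itself is not published. To make the idea concrete, a bucket-driven lookup could look like the following sketch; the thresholds, rules, and the `tag_emotion` helper are entirely hypothetical, not the dataset's actual logic:

```python
def tag_emotion(vad_bucket, prosody_bucket):
    """Hypothetical rule-based tagger mapping LOW/MID/HIGH buckets to a
    coarse emotion label. Illustrative only; not the dataset's rules."""
    if vad_bucket["valence"] == "LOW" and prosody_bucket["energy"] != "HIGH":
        return "depressed"
    if vad_bucket["arousal"] == "HIGH" and prosody_bucket["energy"] == "HIGH":
        return "shouting"
    if prosody_bucket["energy"] == "LOW" and prosody_bucket["zcr"] == "HIGH":
        return "whispering"
    return "calm"

# Buckets from the example record below yield its tag under these toy rules.
print(tag_emotion(
    {"arousal": "LOW", "dominance": "LOW", "valence": "LOW"},
    {"energy": "MID", "rate": "MID", "zcr": "LOW", "pitch_var": "HIGH"},
))  # depressed
```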
4. **Categorization**: VAD and prosodic features were bucketed into LOW/MID/HIGH categories for easier analysis.
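The cut points behind these buckets are not documented. One plausible scheme is tercile-based bucketing over the whole corpus, sketched below with made-up arousal values (the `make_bucketer` helper is hypothetical):

```python
def make_bucketer(values):
    """Return a LOW/MID/HIGH labeler whose cut points are the terciles
    of the observed values. One plausible scheme; the dataset does not
    document its actual thresholds."""
    ordered = sorted(values)
    lo = ordered[len(ordered) // 3]
    hi = ordered[2 * len(ordered) // 3]

    def bucket(x):
        if x < lo:
            return "LOW"
        if x < hi:
            return "MID"
        return "HIGH"

    return bucket

# Made-up corpus-level arousal scores.
arousal_values = [0.002, 0.31, 0.55, 0.12, 0.80, 0.44]
bucket = make_bucketer(arousal_values)
print(bucket(0.002))  # LOW
print(bucket(0.44))   # MID
print(bucket(0.80))   # HIGH
```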
## Dataset Structure

```
.
├── DailyTalkContiguous/
│   ├── data_stereo/
│   │   ├── 0.wav, 1.wav, ...
│   │   └── 0.json, 1.json, ...
│   └── dailytalk.jsonl
└── transcript.jsonl
```
### transcript.jsonl Format

Each line in `transcript.jsonl` is a JSON object with the following fields:

- `segment_id`: Unique identifier for the segment
- `audio_file`: Source audio file name
- `channel`: Left or right channel of the stereo recording
- `start`: Start time in seconds
- `end`: End time in seconds
- `duration`: Segment duration in seconds
- `vad`: VAD scores (arousal, dominance, valence)
- `features`: Audio features (rms, zcr, f0_mean, f0_std, speech_rate)
- `text_raw`: Original transcript text
- `text`: Processed transcript text
- `audio_path`: Full path to the audio file
- `vad_bucket`: Bucketed VAD categories (LOW/MID/HIGH)
- `prosody_bucket`: Bucketed prosodic categories (LOW/MID/HIGH)
- `tag`: Emotion label
- `tagged_text`: Text with the emotion tag prepended
Example:

```json
{
  "segment_id": 0,
  "audio_file": "0.wav",
  "channel": "left",
  "start": 1.634,
  "end": 5.63,
  "duration": 3.996,
  "vad": {
    "arousal": 0.002,
    "dominance": 0.0,
    "valence": 0.0
  },
  "features": {
    "rms": 0.106,
    "zcr": 0.087,
    "f0_mean": 152.55,
    "f0_std": 46.69,
    "speech_rate": 4.87
  },
  "text_raw": "I'm figuring out all of my budgets.",
  "text": "I'm figuring out all of my budgets.",
  "audio_path": "DailyTalkContiguous/data_stereo/0.wav",
  "vad_bucket": {
    "arousal": "LOW",
    "dominance": "LOW",
    "valence": "LOW"
  },
  "prosody_bucket": {
    "energy": "MID",
    "rate": "MID",
    "zcr": "LOW",
    "pitch_var": "HIGH"
  },
  "tag": "depressed",
  "tagged_text": "(depressed) I'm figuring out all of my budgets."
}
```
## Usage

```python
import json

# Load the transcript data
with open('transcript.jsonl', 'r') as f:
    segments = [json.loads(line) for line in f]

# Access a segment
segment = segments[0]
print(f"Text: {segment['text']}")
print(f"Emotion: {segment['tag']}")
print(f"Audio file: {segment['audio_path']}")
print(f"Time range: {segment['start']:.2f}s - {segment['end']:.2f}s")
```
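Once loaded, subsets and tag statistics fall out of plain Python. A self-contained sketch, shown on two inlined example records rather than the real file (the second record is invented for illustration):

```python
import json
from collections import Counter

# Two inlined records standing in for lines of transcript.jsonl.
lines = [
    '{"segment_id": 0, "tag": "depressed", "text": "I\'m figuring out all of my budgets."}',
    '{"segment_id": 1, "tag": "calm", "text": "Let me see."}',
]
segments = [json.loads(line) for line in lines]

# Distribution of emotion tags across the corpus.
tag_counts = Counter(seg["tag"] for seg in segments)

# All segments carrying a given tag.
depressed = [seg for seg in segments if seg["tag"] == "depressed"]
print(tag_counts)         # Counter({'depressed': 1, 'calm': 1})
print(len(depressed))     # 1
```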
## Use Cases

- **Emotion Recognition Training**: Train models to recognize emotions from speech
- **Emotion-Controllable TTS**: Generate speech with specified emotional characteristics
- **Prosody Analysis**: Study the relationship between emotion and speech prosody
- **Data Augmentation**: Use emotion tags for synthetic data generation
## License

This dataset inherits the CC-BY-SA 4.0 license from the original [DailyTalk](https://github.com/keonlee9420/DailyTalk) dataset.
## Citation

If you use this dataset, please cite the original DailyTalk paper:

```bibtex
@inproceedings{lee2023dailytalk,
  title={DailyTalk: Spoken Dialogue Dataset for Conversational Text-to-Speech},
  author={Lee, Keon and Park, Kyumin and Kim, Daeyoung},
  booktitle={ICASSP 2023 - 2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
  year={2023},
  organization={IEEE}
}
```
## Acknowledgments

This dataset is derived from the [DailyTalk](https://github.com/keonlee9420/DailyTalk) project by Keon Lee et al. The original dataset provides word-level timestamps for conversational speech, which made this emotion-tagging extension possible.