
Peter Griffin's Emotion-Tagged DailyTalk Dataset

Hehehehehe! Hey Lois, look! I made a dataset! This is an emotion-tagged version of that DailyTalk thing, but better because it's got all sorts of feelings and stuff. It's freakin' sweet for making computers talk like they've had too many Pawtucket Patriots or just found out Meg is home.

What is this thing?

This dataset has a bunch of people talking, but we tagged 'em with emotions. It's like when I'm happy because it's Chicken Fight day, or sad because the Drunken Clam is closed. We got acoustic features too, which I think is a fancy word for "how loud I'm yelling."

Where'd it come from?

It's based on DailyTalkContiguous. They used stereo recordings, so one guy is in your left ear and the other is in your right. It's like having Joe and Quagmire whispering secrets to me at the same time.
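Because each speaker sits on one stereo channel, pulling a single voice out is just slicing one column of the sample array. Here's a minimal sketch — the `split_channels` helper is ours, not part of the dataset, and in practice you'd load the wav first with something like `soundfile` or the stdlib `wave` module:

```python
import numpy as np

def split_channels(stereo: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
    """Split an (n_samples, 2) stereo array into (left, right) mono tracks."""
    return stereo[:, 0], stereo[:, 1]

# Toy signal: speaker A (all ones) in the left ear, speaker B (all zeros) in the right.
stereo = np.stack([np.ones(4), np.zeros(4)], axis=1)
left, right = split_channels(stereo)
```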

How'd we make it? (The Science-y Stuff)

  1. Chopping up Audio: We took the big files and cut 'em into little pieces. Road House!

  2. Extracting Stuff: We pulled out features like:

    • VAD Features: Arousal (giggity), dominance (like me in a cape), and valence.
    • Prosody: How fast they talk and how much their voice jumps around.
    • Audio Bits: RMS energy, pitch (f0), and other nerd stuff.
  3. Emotion Tagging: Every piece gets a label. Some of 'em are:

    • depressed (like Brian when he's out of martinis)
    • shouting (me, all the time)
    • whispering (me, trying to sneak a snack)
    • soft tone
    • worried
    • calm
    • sad
    • And more!
  4. Bucketing: We put the VAD and prosody numbers into LOW/MID/HIGH buckets. It's like how I categorize my favorite snacks.
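The bucketing step can be sketched like this — the threshold values below are made up for illustration, since the actual cutoffs used to build the dataset aren't documented in this card:

```python
def bucket(value: float, low: float, high: float) -> str:
    """Map a numeric feature into LOW/MID/HIGH (thresholds are illustrative)."""
    if value < low:
        return "LOW"
    if value < high:
        return "MID"
    return "HIGH"

# e.g. pitch variability (f0_std) with made-up cutoffs of 20 and 40:
bucket(46.69, low=20.0, high=40.0)  # "HIGH", like my cholesterol
```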

Dataset Structure

.
β”œβ”€β”€ DailyTalkContiguous/
β”‚   β”œβ”€β”€ data_stereo/
β”‚   β”‚   β”œβ”€β”€ 0.wav, 1.wav, ...
β”‚   β”‚   └── 0.json, 1.json, ...
β”‚   └── dailytalk.jsonl
└── transcript.jsonl

transcript.jsonl Format

Each line is one JSON thingy (an object per speech segment). Here's what's inside:

  • segment_id: A number, like how many times I've fallen down the stairs.
  • audio_file: The name of the sound file.
  • channel: left or right (which ear the speaker is in).
  • start/end/duration: When the talking starts and stops, in seconds.
  • vad: The feelings numbers (arousal, dominance, valence).
  • features: The sound numbers (RMS energy, zero-crossing rate, pitch, speech rate).
  • text_raw/text: What they're actually saying (hopefully it's about beer).
  • audio_path: Where the wav file lives inside the repo.
  • vad_bucket/prosody_bucket: The LOW/MID/HIGH buckets for the numbers above.
  • tag: The emotion label.
  • tagged_text: The text with the emotion stuck on the front.

Example:

{
  "segment_id": 0,
  "audio_file": "0.wav",
  "channel": "left",
  "start": 1.634,
  "end": 5.63,
  "duration": 3.996,
  "vad": {
    "arousal": 0.002,
    "dominance": 0.0,
    "valence": 0.0
  },
  "features": {
    "rms": 0.106,
    "zcr": 0.087,
    "f0_mean": 152.55,
    "f0_std": 46.69,
    "speech_rate": 4.87
  },
  "text_raw": "I'm figuring out all of my budgets.",
  "text": "I'm figuring out all of my budgets.",
  "audio_path": "DailyTalkContiguous/data_stereo/0.wav",
  "vad_bucket": {
    "arousal": "LOW",
    "dominance": "LOW",
    "valence": "LOW"
  },
  "prosody_bucket": {
    "energy": "MID",
    "rate": "MID",
    "zcr": "LOW",
    "pitch_var": "HIGH"
  },
  "tag": "depressed",
  "tagged_text": "(depressed) I'm figuring out all of my budgets."
}
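The `tagged_text` field is just the tag glued onto the front of `text`. A sketch of the formatting (the helper name is ours, not from the dataset tooling):

```python
def make_tagged_text(tag: str, text: str) -> str:
    """Prefix the emotion tag onto the transcript, matching the dataset's format."""
    return f"({tag}) {text}"

make_tagged_text("depressed", "I'm figuring out all of my budgets.")
# → "(depressed) I'm figuring out all of my budgets."
```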

How to use it (If you're not as smart as Brian)

import json

# Load the stuff
with open('transcript.jsonl', 'r') as f:
    segments = [json.loads(line) for line in f]

# Look at a piece
segment = segments[0]
print(f"They said: {segment['text']}")
print(f"They felt: {segment['tag']}")
print(f"Audio is here: {segment['audio_path']}")
print(f"Time range: {segment['start']:.2f}s - {segment['end']:.2f}s")
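To actually hear a segment, grab its channel and time range straight out of the wav. A stdlib-only sketch — `load_segment` is our helper, and it assumes 16-bit PCM stereo, so check your files before trusting it:

```python
import wave
import numpy as np

def load_segment(path: str, start: float, end: float, channel: str):
    """Read one speaker's slice of a stereo wav (assumes 16-bit PCM stereo)."""
    with wave.open(path, "rb") as wf:
        sr = wf.getframerate()
        wf.setpos(int(start * sr))              # seek to the segment start
        raw = wf.readframes(int((end - start) * sr))
    frames = np.frombuffer(raw, dtype=np.int16).reshape(-1, 2)
    mono = frames[:, 0] if channel == "left" else frames[:, 1]
    return mono, sr

# audio, sr = load_segment(segment["audio_path"], segment["start"],
#                          segment["end"], segment["channel"])
```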

Use Cases

  • Emotion Recognition Training: Train models to recognize when someone is upset (like when I forget our anniversary).
  • Emotion-Controllable TTS: Generate speech with specific emotional characteristics.
  • Prosody Analysis: Study the relationship between emotion and speech prosody.
  • Data Augmentation: Use emotion tags for synthetic data generation.
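Before training anything, it's worth checking how the tags are distributed — some emotions are rarer than a quiet dinner at my house. A quick pass over the manifest (toy segments shown here in place of the real file):

```python
from collections import Counter

# In practice: segments = [json.loads(line) for line in open("transcript.jsonl")]
segments = [{"tag": "depressed"}, {"tag": "calm"}, {"tag": "calm"}]

tag_counts = Counter(s["tag"] for s in segments)
print(tag_counts.most_common())  # [('calm', 2), ('depressed', 1)]
```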

License

This dataset inherits the CC-BY-SA 4.0 license from the original DailyTalk dataset.

Citation

If you use this dataset, please cite the original DailyTalk dataset:

@inproceedings{dailytalk,
  title={DailyTalk: Spoken Dialogue Dataset for Conversational Text-to-Speech},
  author={Lee, Keon and Park, Kyumin and Kim, Daeyoung},
  booktitle={ICASSP 2023 - IEEE International Conference on Acoustics, Speech and Signal Processing},
  year={2023},
  organization={IEEE}
}

Acknowledgments

This dataset is derived from the DailyTalk project by Keon Lee et al. The original dataset provides word-level timestamps for conversational speech, which made this emotion-tagging extension possible.
