---
language:
- en
license: cc-by-4.0
task_categories:
- text-to-speech
- audio-classification
tags:
- tts
- speech
- emotion
- prosody
pretty_name: emocean
size_categories:
- 1K<n<10K
---

# emocean

Emotionally expressive English TTS dataset with speaker IDs, prosody features, and emotion labels.

## Dataset Summary

| | Metric | Value | |
| |--------|-------| |
| | Total segments | 2,261 | |
| | Total duration | 3.39 hours | |
| | Speakers | 13 | |
| | Sources | 12 videos | |
| | Avg segment duration | 5.4s | |
| | Duration range | 3.0s - 8.0s | |
| | Sample rate | 24kHz mono | |
| | Format | Parquet with embedded audio | |

## Emotion Distribution

| | Emotion | Count | Percentage | |
| |---------|-------|------------| |
| | neutral | 1894 | 83.8% | |
| | happy | 188 | 8.3% | |
| | sad | 148 | 6.5% | |
| | disgusted | 22 | 1.0% | |
| | fearful | 5 | 0.2% | |
| | angry | 2 | 0.1% | |
| | surprised | 2 | 0.1% | |
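
The label distribution is heavily skewed toward neutral (83.8%), so models conditioned on emotion will see the rare classes only a handful of times per epoch. One common mitigation is inverse-frequency class weighting; the sketch below computes such weights from the counts in the table above (the weighting scheme itself is an illustration, not part of the dataset):

```python
# Emotion counts copied from the table above
counts = {
    "neutral": 1894, "happy": 188, "sad": 148,
    "disgusted": 22, "fearful": 5, "angry": 2, "surprised": 2,
}
total = sum(counts.values())  # 2261 segments

# Inverse-frequency weights, rescaled so the average weight is 1.0
raw = {emotion: total / n for emotion, n in counts.items()}
scale = len(raw) / sum(raw.values())
weights = {emotion: w * scale for emotion, w in raw.items()}
```

With only 2 angry and 2 surprised segments, even weighted sampling may not be enough; mixing in an external emotional-speech corpus for those classes is worth considering.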

## Speaker Distribution

| | Speaker | Segments | |
| |---------|----------| |
| | lex_fridman | 551 | |
| | pavel_durov | 397 | |
| | jeff_kaplan | 359 | |
| | norman_ohler | 256 | |
| | paul_rosolie | 122 | |
| | julia_shaw | 122 | |
| | jensen_huang | 118 | |
| | dan_houser | 97 | |
| | lars_brownworth | 87 | |
| | michael_levin | 68 | |
| | peter_steinberger | 35 | |
| | irving_finkel | 31 | |
| | david_kirtley | 18 | |

## Dataset Structure

| | Column | Type | Description | |
| |--------|------|-------------| |
| | `audio` | Audio | Waveform + sampling rate (24kHz) | |
| | `text_verbatim` | string | Verbatim transcript with fillers (umm, uh, [laughter], etc.) | |
| | `text_verbatim_normalized` | string | Verbatim text with numbers/abbreviations expanded (keeps fillers) | |
| | `duration` | float | Segment duration in seconds | |
| | `snr` | float | Signal-to-noise ratio (dB) | |
| | `speaker_id` | string | Speaker cluster ID (WavLM embeddings) | |
| | `emotion` | string | Speech emotion label (emotion2vec+ large, 9 categories) | |
| | `pitch_mean` | float | Mean F0 frequency (Hz) | |
| | `pitch_std` | float | F0 standard deviation (Hz) | |
| | `energy_mean` | float | Mean RMS energy | |
| | `energy_std` | float | RMS energy standard deviation | |
| | `speaking_rate` | float | Words per second | |
| | `video_id` | string | YouTube video ID | |
| | `source_url` | string | Source URL | |
| | `start_time` | float | Segment start time in source (seconds) | |
| | `end_time` | float | Segment end time in source (seconds) | |
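
Because the audio is embedded, the `duration` column can be cross-checked against the decoded waveform. A small helper, assuming the standard `datasets` audio dict layout shown in the `audio` row:

```python
def duration_seconds(audio):
    """Duration in seconds from a decoded audio dict
    ({'array': ..., 'sampling_rate': ...}, as yielded by `datasets`)."""
    return len(audio["array"]) / audio["sampling_rate"]
```

At 24 kHz, the 5.4 s average segment corresponds to roughly 130k samples.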

## Usage

```python
from datasets import load_dataset

ds = load_dataset("somu9/emocean", split="train")

# Access a sample
sample = ds[0]
print(sample["audio"])           # {'path': ..., 'array': [...], 'sampling_rate': 24000}
print(sample["text_verbatim"])
print(sample["emotion"])
print(sample["speaker_id"])

# Filter by emotion
happy = ds.filter(lambda x: x["emotion"] == "happy")

# Filter by speaker
lex = ds.filter(lambda x: x["speaker_id"] == "lex_fridman")
```
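
The prosody columns also support threshold-style selection, for example pulling out the more expressive segments. A sketch with illustrative cutoffs (the threshold values below are assumptions, not statistics from this card):

```python
def is_expressive(row, min_pitch_std=40.0, min_energy_std=0.01):
    """Heuristic 'expressive speech' filter: large F0 and energy variation.

    Thresholds are illustrative; tune them against the corpus statistics
    before relying on the selection.
    """
    return row["pitch_std"] >= min_pitch_std and row["energy_std"] >= min_energy_std

# With the loaded dataset: expressive = ds.filter(is_expressive)
```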

## Collection Pipeline

1. **Download** YouTube audio via yt-dlp
2. **VAD** segmentation (Silero VAD)
3. **Quality filter** — SNR > 25dB, clipping < 0.1%, music score < 0.5, boundary clip detection
4. **Transcribe** (Whisper large-v3)
5. **Enrich** — speaker embeddings (WavLM), prosody extraction, emotion classification (emotion2vec+ large)
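
The SNR gate in step 3 is a power ratio between speech and a noise estimate. The card does not specify the exact estimator used, so the version below (mean-square power over a hand-picked noise-only segment) is only a sketch of the idea:

```python
import numpy as np

def snr_db(speech: np.ndarray, noise: np.ndarray) -> float:
    """SNR in dB as a power ratio between a speech segment and a noise-only
    segment (e.g. a silent stretch near the clip boundary)."""
    p_speech = float(np.mean(np.square(speech)))
    p_noise = float(np.mean(np.square(noise))) + 1e-12  # avoid divide-by-zero
    return 10.0 * np.log10(p_speech / p_noise)

# A segment passes the gate only if snr_db(...) > 25.0, matching step 3 above.
```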

## License

CC-BY-4.0

---

*Last updated: 2026-04-23 14:07 UTC*