---
license: other
license_name: mixed-per-dataset
license_link: LICENSE
language:
  - en
tags:
  - text-to-speech
  - tts
  - speech
  - audio
  - open-data
  - training-data
  - english
task_categories:
  - text-to-speech
pretty_name: Raon-OpenTTS-Pool
size_categories:
  - 100M<n<1B
configs:
  - config_name: all
    data_files:
      - split: pool
        path: '*/metadata_pool.parquet'
      - split: core
        path: '*/metadata_core.parquet'
  - config_name: Raon-YouTube-Commons
    data_files:
      - split: pool
        path: Raon-YouTube-Commons/metadata_pool.parquet
      - split: core
        path: Raon-YouTube-Commons/metadata_core.parquet
  - config_name: Emilia-YODAS2
    data_files:
      - split: pool
        path: Emilia-YODAS2/metadata_pool.parquet
      - split: core
        path: Emilia-YODAS2/metadata_core.parquet
  - config_name: Emilia
    data_files:
      - split: pool
        path: Emilia/metadata_pool.parquet
      - split: core
        path: Emilia/metadata_core.parquet
  - config_name: LibriHeavy
    data_files:
      - split: pool
        path: LibriHeavy/metadata_pool.parquet
      - split: core
        path: LibriHeavy/metadata_core.parquet
  - config_name: HiFiTTS
    data_files:
      - split: pool
        path: HiFiTTS/metadata_pool.parquet
      - split: core
        path: HiFiTTS/metadata_core.parquet
  - config_name: VoxPopuli
    data_files:
      - split: pool
        path: VoxPopuli/metadata_pool.parquet
      - split: core
        path: VoxPopuli/metadata_core.parquet
  - config_name: PeoplesSpeech-Clean
    data_files:
      - split: pool
        path: PeoplesSpeech-Clean/metadata_pool.parquet
      - split: core
        path: PeoplesSpeech-Clean/metadata_core.parquet
  - config_name: PeoplesSpeech-Dirty
    data_files:
      - split: pool
        path: PeoplesSpeech-Dirty/metadata_pool.parquet
      - split: core
        path: PeoplesSpeech-Dirty/metadata_core.parquet
  - config_name: LibriTTS-R
    data_files:
      - split: pool
        path: LibriTTS-R/metadata_pool.parquet
      - split: core
        path: LibriTTS-R/metadata_core.parquet
  - config_name: SPGISpeech2-Cut
    data_files:
      - split: pool
        path: SPGISpeech2-Cut/metadata_pool.parquet
      - split: core
        path: SPGISpeech2-Cut/metadata_core.parquet
---

# Raon-OpenTTS-Pool


Technical Report (Coming soon)

Raon-OpenTTS-Pool is a large-scale open English speech corpus for text-to-speech (TTS) training, constructed from 8 publicly available speech corpora and a set of web-sourced recordings. It is the training data behind RAON-OpenTTS, an open TTS model that performs on par with state-of-the-art closed-data systems.

- 615K hours of speech audio
- 239.7M speech segments
- 11 source datasets aggregated into a unified format
- All audio stored as 16 kHz mono Opus (64 kbps) in WebDataset tar shards

We restrict data sources to publicly available English speech datasets with more than 500 hours of audio. All speech segments are limited to 30 seconds or shorter to reduce alignment errors, multi-speaker content, and non-speech artifacts. Existing public datasets (LibriHeavy, Emilia, VoxPopuli, etc.) are included with their segmentation and transcripts unchanged; their audio is re-encoded to 16 kHz mono Opus 64 kbps for storage efficiency. The Raon-YouTube-Commons portion is reconstructed from YouTube-Commons through a dedicated preprocessing pipeline (see below).

With a model-based filtering pipeline applied to Raon-OpenTTS-Pool, we derive Raon-OpenTTS-Core, a curated high-quality subset of 510.1K hours and 194.5M segments.

For more details, see our paper: Raon-OpenTTS: Open Models and Data for Robust Text-to-Speech

## Format

Each WebDataset tar shard contains pairs of files per sample:

```
{sample_key}.opus   # 16 kHz mono Opus 64 kbps audio
{sample_key}.json   # {"text": "...", "duration": 8.42, "source": "..."}
```

Note: The dataset viewer shows metadata only (sample_key, text, duration, shard_name). Audio is stored in WebDataset tar files — see Usage below to download and load audio.
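The per-sample pairing can be inspected with the standard library alone. Below is a minimal sketch that writes a tiny in-memory shard with placeholder bytes (real shards store actual Opus audio) and reads it back, grouping members by sample key; the `sample_000` key and its contents are illustrative only:

```python
import io
import json
import tarfile

def add_member(tar, name, payload):
    """Append a regular file with the given bytes to an open tar archive."""
    info = tarfile.TarInfo(name=name)
    info.size = len(payload)
    tar.addfile(info, io.BytesIO(payload))

# Write a tiny shard: one sample, i.e. one .opus/.json pair sharing a key.
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w") as tar:
    add_member(tar, "sample_000.opus", b"\x00\x01")  # placeholder, not real Opus
    add_member(tar, "sample_000.json",
               json.dumps({"text": "hello", "duration": 1.2, "source": "demo"}).encode())

# Read it back, grouping members by sample key (the filename stem).
buf.seek(0)
samples = {}
with tarfile.open(fileobj=buf, mode="r") as tar:
    for member in tar.getmembers():
        key, ext = member.name.rsplit(".", 1)
        samples.setdefault(key, {})[ext] = tar.extractfile(member).read()

meta = json.loads(samples["sample_000"]["json"])
print(meta["text"], meta["duration"])
```

This key-based grouping is exactly what WebDataset loaders do internally, which is why the Usage examples below can address samples by `__key__`.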

## Splits

Each dataset config has two metadata splits:

- **pool**: all samples (`sample_key`, `text`, `duration`, `shard_name`)
- **core**: quality-filtered subset (Raon-OpenTTS-Core), retaining ~85% of the data

## Raon-OpenTTS-Core Filtering

Raon-OpenTTS-Core is constructed by applying three model-based quality filters and removing the bottom 15% of samples by combined score:

  1. WER-based: Transcribe each segment with Whisper-small ASR and compute WER against the existing text annotation. Samples with excessively high WER (> 0.35) indicate severe transcription mismatches.
  2. DNSMOS-based: Estimate perceptual speech quality using DNSMOS. Samples below 2.24 indicate strong background noise or distortion.
  3. VAD-based: Estimate speech activity ratio (SAR) using Silero VAD. Samples with SAR below 0.79 are dominated by silence, music, or non-speech audio.
  4. Combined: Compute an absolute rank for each segment along each criterion (DNSMOS, WER, SAR) and average the ranks into a single combined score. Segments falling below the 15th percentile are discarded.

This combined filtering achieves the best overall TTS performance across diverse evaluation benchmarks (see paper, Figure 3).
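The combined step (criterion 4) can be sketched on toy data: rank segments per criterion so that rank 0 is the worst, average the ranks, and drop everything below the 15th percentile. The scores below are randomly generated stand-ins, and tie-breaking details may differ from the production pipeline:

```python
import numpy as np

# Toy per-segment scores (hypothetical values; real scores come from
# Whisper-small WER, DNSMOS, and Silero VAD as described above).
rng = np.random.default_rng(0)
wer = rng.uniform(0.0, 0.5, size=10)     # lower is better
dnsmos = rng.uniform(2.0, 3.5, size=10)  # higher is better
sar = rng.uniform(0.6, 1.0, size=10)     # higher is better

def ranks(x, higher_is_better):
    """Rank segments per criterion so that rank 0 is the worst segment."""
    order = np.argsort(x if higher_is_better else -x)
    r = np.empty_like(order)
    r[order] = np.arange(len(x))
    return r

# Average the three per-criterion ranks into one combined score,
# then discard segments below the 15th percentile.
combined = (ranks(dnsmos, True) + ranks(wer, False) + ranks(sar, True)) / 3.0
keep = combined >= np.percentile(combined, 15)
print(f"kept {int(keep.sum())} of {len(keep)} segments")
```

Rank averaging makes the three criteria comparable without tuning per-metric scales, which is why a single percentile cut suffices for the combined filter.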

## Available Datasets

| Dataset | Source | Size (h) | Avg. Dur. (s) | Segments (M) | Tars | License | DNSMOS | WER | SAR |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Raon-YouTube-Commons | YouTube-Commons | 335k | 8.5 | 141.70 | 1,017 | CC BY 4.0 | 2.74 | 0.30 | 0.90 |
| Emilia-YODAS2 | Emilia | 92k | 9.2 | 35.97 | 287 | CC BY-NC 4.0 | 2.82 | 0.19 | 0.90 |
| Emilia | Emilia | 47k | 9.3 | 18.14 | 145 | CC BY 4.0 | 3.02 | 0.18 | 0.89 |
| LibriHeavy | LibriHeavy | 42k | 14.2 | 10.77 | 127 | Public Domain | 3.22 | 0.11 | 0.83 |
| HiFiTTS | HiFiTTS2 | 37k | 10.1 | 13.09 | 109 | CC BY 4.0 | 3.20 | 0.11 | 0.84 |
| PeoplesSpeech-Dirty | People's Speech | 28k | 14.2 | 5.48 | 63 | CC BY 4.0 | 2.63 | 0.25 | 0.86 |
| VoxPopuli | VoxPopuli | 17k | 27.8 | 2.24 | 50 | CC0 | 2.82 | 0.36 | 0.83 |
| PeoplesSpeech-Clean | People's Speech | 10k | | 1.50 | 18 | CC BY 4.0 | | | |
| LibriTTS-R | LibriTTS-R | 552 | 5.6 | 0.35 | 2 | CC BY 4.0 | 2.96 | 0.06 | 0.91 |
| SPGISpeech2-Cut | SPGISpeech 2.0 | 889 | 14.4 | 0.22 | 3 | Kensho UA | 2.72 | 0.08 | 0.90 |
| **Total** | | 615k | 9.2 | 239.7 | 1,821 | | 2.83 | 0.24 | 0.89 |

## Raon-YouTube-Commons

A substantial portion of Raon-OpenTTS-Pool (335K hours) is derived from YouTube-Commons. Since the original release provides only YouTube URLs with noisy or unreliable transcriptions, we reconstructed it into a high-quality speech-text dataset through the following pipeline:

  1. Audio collection: Download audio from YouTube URLs in the original dataset
  2. Source separation (UVR-MDX): Suppress background music and non-vocal components
  3. Speaker diarization (PyAnnote 3.1): Estimate speaker boundaries to ensure single-speaker segments
  4. Voice activity detection (Silero VAD): Segment continuous speech regions into clips of 3–30 seconds
  5. Automatic transcription (Whisper-large-v3): Transcribe each segment to obtain aligned speech-text pairs
  6. Standardization: Resample to 16 kHz mono, encode as 64 kbps Opus

The resulting dataset is released as Raon-YouTube-Commons in this repository.
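The final standardization step can be sketched as a plain ffmpeg invocation: resample to 16 kHz mono and encode as 64 kbps Opus. This is a minimal sketch, not the project's actual script; file names and the helper function are illustrative:

```python
import subprocess

# Sketch of the standardization step: resample to 16 kHz mono and encode as
# 64 kbps Opus via ffmpeg. File names are illustrative.
def opus_encode_cmd(src: str, dst: str) -> list[str]:
    return [
        "ffmpeg", "-y",
        "-i", src,
        "-ac", "1",            # mono
        "-ar", "16000",        # 16 kHz (a rate libopus accepts directly)
        "-c:a", "libopus",
        "-b:a", "64k",
        dst,
    ]

cmd = opus_encode_cmd("segment.wav", "segment.opus")
# subprocess.run(cmd, check=True)  # uncomment to run; requires ffmpeg in PATH
print(" ".join(cmd))
```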

## Non-redistributable Datasets

Two additional datasets used in training cannot be included due to license restrictions. Users who have accepted the licenses on Hugging Face can download and convert them automatically using `prepare_nonredist_datasets.py`:

| Dataset | Size (h) | License | Source |
| --- | --- | --- | --- |
| GigaSpeech | 10k | License agreement required | `speechcolab/gigaspeech` |
| SPGISpeech | 5k | Non-commercial (Kensho) | `kensho/spgispeech` |

See Preparing Non-redistributable Datasets for instructions.


## Usage

### 1. Metadata (pool / core split)

```python
from datasets import load_dataset

# Core metadata for a single dataset
meta = load_dataset("KRAFTON/Raon-OpenTTS-Pool", "Raon-YouTube-Commons", split="core")
# Columns: sample_key, text, duration, shard_name
print(meta[0])

# All datasets combined
all_core = load_dataset("KRAFTON/Raon-OpenTTS-Pool", "all", split="core")
```

### 2. Audio (WebDataset, local tars)

Download the tars first:

```python
from huggingface_hub import snapshot_download

local_dir = snapshot_download("KRAFTON/Raon-OpenTTS-Pool", repo_type="dataset",
                              ignore_patterns=["*.parquet"])
```

Then load with WebDataset:

```python
import webdataset as wds
import json, io, soundfile as sf

dataset = (
    wds.WebDataset(f"{local_dir}/LibriTTS-R/lr-{{000000..000001}}.tar")
    .to_tuple("opus", "json")
)
for opus_bytes, json_bytes in dataset:
    meta = json.loads(json_bytes)
    audio, sr = sf.read(io.BytesIO(opus_bytes))
    text = meta["text"]
```

### 3. Core-only training

The audio tars mix pool and core samples. To train on core only, filter by `sample_key`:

```python
import webdataset as wds
from datasets import load_dataset
import json, io, soundfile as sf

# Step 1: load core sample keys from metadata
core_keys = set(
    load_dataset("KRAFTON/Raon-OpenTTS-Pool", "LibriTTS-R", split="core")["sample_key"]
)

# Step 2: stream tars, skip non-core samples
# (local_dir is the snapshot_download directory from step 2 above)
dataset = (
    wds.WebDataset(f"{local_dir}/LibriTTS-R/lr-{{000000..000001}}.tar")
    .select(lambda s: s["__key__"] in core_keys)
    .to_tuple("opus", "json")
)
for opus_bytes, json_bytes in dataset:
    meta = json.loads(json_bytes)
    audio, sr = sf.read(io.BytesIO(opus_bytes))
    text = meta["text"]
    duration = meta["duration"]
```

## Preparing Non-redistributable Datasets

The script `prepare_nonredist_datasets.py` automatically downloads and converts GigaSpeech and SPGISpeech into the same WebDataset tar + parquet format used by Raon-OpenTTS-Pool.

### Prerequisites

1. Accept the dataset license on each Hugging Face dataset page (linked in the table above).

2. Set your Hugging Face token (from an account that has accepted the licenses):

   ```shell
   export HF_TOKEN=hf_your_token_here
   ```

3. Install dependencies:

   ```shell
   pip install "datasets<4.0" soundfile pyarrow numpy tqdm
   ```

   Note: `datasets>=4.0` dropped soundfile audio decoding and requires torchcodec with system FFmpeg libraries. Use `datasets<4.0` (e.g. `datasets==3.5.0`) to avoid this.

4. `ffmpeg` must be on `PATH`.

### GigaSpeech

```shell
# Download and convert the xl subset from the Hugging Face Hub
python prepare_nonredist_datasets.py gigaspeech \
    --output_dir ./GigaSpeech \
    --gigaspeech_subset xl \
    --num_workers 16

# Or from a local HF snapshot (no HF_TOKEN needed)
python prepare_nonredist_datasets.py gigaspeech \
    --source_dir /path/to/gigaspeech_local \
    --output_dir ./GigaSpeech \
    --gigaspeech_subset xl
```

Available subsets: `xs` (10 h), `s` (250 h), `m` (1,000 h), `l` (2,500 h), `xl` (10,000 h)

### SPGISpeech

```shell
# Download and convert the L subset from the Hugging Face Hub
python prepare_nonredist_datasets.py spgispeech \
    --output_dir ./SPGISpeech \
    --spgispeech_subset L \
    --num_workers 16

# Or from a local HF snapshot (no HF_TOKEN needed)
python prepare_nonredist_datasets.py spgispeech \
    --source_dir /path/to/spgispeech_local \
    --output_dir ./SPGISpeech \
    --num_workers 16
```

Available subsets: `L` (full 5,000 h), `M` (1,000 h), `S` (~200 h), `dev`, `test`

### Output

```
<output_dir>/
  {prefix}-000000.tar     # WebDataset shard (~10 GB)
  {prefix}-000001.tar
  ...
  metadata_pool.parquet   # all samples
  metadata_core.parquet   # = pool (no quality filtering without --core_json)
```

By default `metadata_core.parquet` equals `metadata_pool.parquet`, since quality filtering requires an internal index file. If you have `pool_indices_filter_remove_15pct_combined.json` from the Raon-OpenTTS maintainers, pass it with `--core_json` to generate a filtered core split.
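The shape of that filtering step is simple: drop pool rows named by the index file and write the remainder as the core split. The sketch below is hypothetical, since the real JSON schema is internal; it assumes (for illustration only) a plain list of sample keys to remove:

```python
import json
import pandas as pd

# Toy pool metadata with the documented columns.
pool = pd.DataFrame({
    "sample_key": ["a", "b", "c", "d"],
    "text": ["t1", "t2", "t3", "t4"],
    "duration": [1.0, 2.0, 3.0, 4.0],
    "shard_name": ["x-000000.tar"] * 4,
})

# Stand-in for the contents of a --core_json file; the real file's schema is
# internal, so a plain list of keys to drop is assumed here.
remove_keys = set(json.loads('["b"]'))

core = pool[~pool["sample_key"].isin(remove_keys)].reset_index(drop=True)
# The real script would then write core out as metadata_core.parquet.
print(len(core), "of", len(pool), "rows kept")
```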

### Using with RAON-OpenTTS training

Once prepared, pass the output directories as `nonredist_dirs` entries in the training config:

```yaml
datasets:
  nonredist_dirs:
    - /path/to/GigaSpeech
    - /path/to/SPGISpeech
```

## License

This repository contains data from multiple sources, each with its own license. Users must comply with the license of each individual sub-dataset they use.

| Dataset | License | Commercial Use |
| --- | --- | --- |
| Raon-YouTube-Commons | CC BY 4.0 | Yes |
| Emilia | CC BY 4.0 | Yes |
| Emilia-YODAS2 | CC BY-NC 4.0 | No |
| LibriHeavy | Public Domain (LibriVox) | Yes |
| HiFiTTS | CC BY 4.0 | Yes |
| PeoplesSpeech-Clean / Dirty | CC BY 4.0 | Yes |
| VoxPopuli | CC0 | Yes |
| LibriTTS-R | CC BY 4.0 | Yes |
| SPGISpeech2-Cut | Kensho User Agreement | Non-commercial |
| GigaSpeech (non-redist) | License agreement required | See terms |
| SPGISpeech (non-redist) | Kensho User Agreement | Non-commercial |
| Metadata and dataset structure | CC BY 4.0 | Yes |
Note: Emilia-YODAS2 and SPGISpeech2-Cut are licensed under non-commercial terms. If you require fully commercial-use data, load only the other sub-dataset configs (each sub-dataset is its own config) instead of the `all` config.
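For a commercial-use-only setup, the configs to keep can be listed explicitly. The names below mirror this repository's YAML configs, and the commercial/non-commercial split follows the license table above:

```python
# Sub-dataset configs whose licenses permit commercial use per the table
# above (CC BY 4.0, CC0, or Public Domain).
COMMERCIAL_CONFIGS = [
    "Raon-YouTube-Commons",
    "Emilia",
    "LibriHeavy",
    "HiFiTTS",
    "VoxPopuli",
    "PeoplesSpeech-Clean",
    "PeoplesSpeech-Dirty",
    "LibriTTS-R",
]

# Each config can then be loaded on its own, e.g.:
#   from datasets import load_dataset
#   meta = load_dataset("KRAFTON/Raon-OpenTTS-Pool", cfg, split="core")
print(COMMERCIAL_CONFIGS)
```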

## Citation

```bibtex
@article{raon2026opentts,
  title     = {Raon-OpenTTS: Open Models and Data for Robust Text-to-Speech},
  author    = {TBD},
  year      = {2026},
  url       = {https://github.com/krafton-ai/Raon-OpenTTS}
}
```

© 2026 KRAFTON