Datasets: Rename RAON-TTS to Raon-OpenTTS throughout README

README.md (changed)
@@ -6,7 +6,7 @@ language:
 - en
 task_categories:
 - text-to-speech
-pretty_name: RAON-TTS-Pool
+pretty_name: Raon-OpenTTS-Pool
 size_categories:
 - 100M<n<1B
 configs:
@@ -78,7 +78,7 @@ configs:
       path: SPGISpeech2-Cut/metadata_core.parquet
 ---
 
-# RAON-TTS-Pool
+# Raon-OpenTTS-Pool
 
 <div align="center">
   <img class="block dark:hidden" src="assets/Raon-OpenTTS-Gradient-Black.png" alt="RAON-OpenTTS" width="400">
@@ -94,7 +94,7 @@ configs:
   <a href="#license"><img src="https://img.shields.io/badge/License-Mixed%20(see%20below)-lightgrey?style=flat" alt="License"></a>
 </p>
 
-**RAON-TTS-Pool** is a large-scale open English speech corpus for text-to-speech (TTS) training,
+**Raon-OpenTTS-Pool** is a large-scale open English speech corpus for text-to-speech (TTS) training,
 constructed from 8 publicly available speech corpora and a set of web-sourced recordings.
 It is the training data behind [RAON-OpenTTS](https://github.com/krafton-ai/RAON-OpenTTS),
 an open TTS model that performs on par with state-of-the-art closed-data systems.
@@ -111,8 +111,8 @@ with audio standardized to 16 kHz mono Opus 64 kbps for storage efficiency.
 The Raon-YouTube-Commons portion is reconstructed from [YouTube-Commons](https://huggingface.co/datasets/PleIAs/YouTube-Commons)
 through a dedicated preprocessing pipeline (see [below](#raon-youtube-commons)).
 
-With a model-based filtering pipeline applied to RAON-TTS-Pool, we derive
-**RAON-TTS-Core**, a curated high-quality subset of **510.1K hours** and **194.5M** segments.
+With a model-based filtering pipeline applied to Raon-OpenTTS-Pool, we derive
+**Raon-OpenTTS-Core**, a curated high-quality subset of **510.1K hours** and **194.5M** segments.
 
 For more details, see our paper: [Raon-OpenTTS: Open Models and Data for Robust Text-to-Speech](https://github.com/krafton-ai/RAON-OpenTTS)
@@ -133,11 +133,11 @@ Each WebDataset tar shard contains pairs of files per sample:
 Each dataset config has two metadata splits:
 
 - **pool** — all samples (sample_key, text, duration, shard_name)
-- **core** — quality-filtered subset (**RAON-TTS-Core**), retaining ~85% of the data
+- **core** — quality-filtered subset (**Raon-OpenTTS-Core**), retaining ~85% of the data
 
-### RAON-TTS-Core Filtering
+### Raon-OpenTTS-Core Filtering
 
-RAON-TTS-Core is constructed by applying three model-based quality filters and removing the bottom 15% of samples by combined score:
+Raon-OpenTTS-Core is constructed by applying three model-based quality filters and removing the bottom 15% of samples by combined score:
 
 1. **WER-based**: Transcribe each segment with Whisper-small ASR and compute WER against the existing text annotation. Samples with excessively high WER (> 0.35) indicate severe transcription mismatches.
 2. **DNSMOS-based**: Estimate perceptual speech quality using DNSMOS. Samples below 2.24 indicate strong background noise or distortion.
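The core-split construction described in this diff (model-based quality filters, then dropping the bottom 15% of samples by combined score) can be sketched as follows. This is an illustrative helper, not the repo's actual pipeline code; the name `select_core` and the assumption that a higher combined score means better quality are ours.

```python
# Hypothetical sketch: rank samples by a combined quality score and drop
# the worst `remove_frac` fraction. Assumes higher score = better quality.
def select_core(scores: dict, remove_frac: float = 0.15) -> set:
    """Return the sample keys that survive bottom-fraction removal."""
    ranked = sorted(scores, key=scores.get)   # worst-scoring keys first
    n_remove = int(len(ranked) * remove_frac)
    return set(ranked[n_remove:])             # keep the top (1 - remove_frac)

# Toy usage: four samples, the single worst one is dropped at 25%.
scores = {"a": 0.9, "b": 0.2, "c": 0.7, "d": 0.5}
core = select_core(scores, remove_frac=0.25)
```

In the real pipeline the combined score would be derived from the WER, DNSMOS, and other filter outputs listed above; only the rank-and-cut step is shown here.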
@@ -165,7 +165,7 @@ This combined filtering achieves the best overall TTS performance across diverse
 
 ### Raon-YouTube-Commons
 
-A substantial portion of RAON-TTS-Pool (335K hours) is derived from [YouTube-Commons](https://huggingface.co/datasets/PleIAs/YouTube-Commons).
+A substantial portion of Raon-OpenTTS-Pool (335K hours) is derived from [YouTube-Commons](https://huggingface.co/datasets/PleIAs/YouTube-Commons).
 Since the original release provides only YouTube URLs with noisy or unreliable transcriptions,
 we reconstructed it into a high-quality speech-text dataset through the following pipeline:
@@ -201,12 +201,12 @@ See [Preparing Non-redistributable Datasets](#preparing-non-redistributable-datasets)
 from datasets import load_dataset
 
 # Core metadata for a single dataset
-meta = load_dataset("KRAFTON/RAON-TTS-Pool", "Raon-YouTube-Commons", split="core")
+meta = load_dataset("KRAFTON/Raon-OpenTTS-Pool", "Raon-YouTube-Commons", split="core")
 # Columns: sample_key, text, duration, shard_name
 print(meta[0])
 
 # All datasets combined
-all_core = load_dataset("KRAFTON/RAON-TTS-Pool", "all", split="core")
+all_core = load_dataset("KRAFTON/Raon-OpenTTS-Pool", "all", split="core")
 ```
 
 ### 2. Audio (WebDataset, local tars)
@@ -216,7 +216,7 @@ Download tars first:
 ```python
 from huggingface_hub import snapshot_download
 
-local_dir = snapshot_download("KRAFTON/RAON-TTS-Pool", repo_type="dataset",
+local_dir = snapshot_download("KRAFTON/Raon-OpenTTS-Pool", repo_type="dataset",
                               ignore_patterns=["*.parquet"])
 ```
@@ -247,7 +247,7 @@ import json, io, soundfile as sf
 
 # Step 1: load core sample keys from metadata
 core_keys = set(
-    load_dataset("KRAFTON/RAON-TTS-Pool", "LibriTTS-R", split="core")["sample_key"]
+    load_dataset("KRAFTON/Raon-OpenTTS-Pool", "LibriTTS-R", split="core")["sample_key"]
 )
 
 # Step 2: stream tars, skip non-core samples
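The two-step loading pattern shown in this hunk (collect core sample keys from metadata, then skip non-core samples while streaming the tars) can be sketched stand-alone. Here a plain iterable of `(sample_key, opus_bytes, json_bytes)` tuples stands in for the WebDataset stream, and `iter_core` is a hypothetical helper, not part of the repo:

```python
# Sketch of the "skip non-core samples" step. In the real pipeline the
# tuples would come from WebDataset tar shards rather than a list.
def iter_core(samples, core_keys):
    """Yield only samples whose key is in the quality-filtered core set."""
    for sample_key, opus_bytes, json_bytes in samples:
        if sample_key in core_keys:
            yield sample_key, opus_bytes, json_bytes

# Toy usage with two fake samples, one of which is in the core set.
stream = [("s1", b"\x00", b"{}"), ("s2", b"\x01", b"{}")]
kept = list(iter_core(stream, core_keys={"s1"}))
```

Because `core_keys` is a set, the membership test stays O(1) per sample even with hundreds of millions of keys.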
@@ -268,7 +268,7 @@ for opus_bytes, json_bytes in dataset:
 ## Preparing Non-redistributable Datasets
 
 The script `prepare_nonredist_datasets.py` automatically downloads and converts GigaSpeech
-and SPGISpeech into the same WebDataset tar + parquet format used by RAON-TTS-Pool.
+and SPGISpeech into the same WebDataset tar + parquet format used by Raon-OpenTTS-Pool.
 
 ### Prerequisites
@@ -339,7 +339,7 @@ Available subsets: `L` (full ~5000h), `M` (~1000h), `S` (~200h), `dev`, `test`
 
 By default `metadata_core.parquet` equals `metadata_pool.parquet` since quality filtering
 requires an internal index file. If you have `pool_indices_filter_remove_15pct_combined.json`
-from the RAON-TTS maintainers, pass it with `--core_json` to generate a filtered core split.
+from the Raon-OpenTTS maintainers, pass it with `--core_json` to generate a filtered core split.
 
 ### Using with RAON-OpenTTS training
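As a worked example of the WER-based filter in the diff (samples whose WER against the reference text exceeds 0.35 are flagged), here is a minimal word-level WER implementation. The actual pipeline transcribes with Whisper-small and presumably computes WER with an off-the-shelf library; this plain Levenshtein version is only illustrative.

```python
# Word error rate = word-level edit distance / reference length.
def wer(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    prev = list(range(len(hyp) + 1))          # distance ref[:0] vs hyp[:j]
    for i, r in enumerate(ref, 1):
        cur = [i]
        for j, h in enumerate(hyp, 1):
            cur.append(min(prev[j] + 1,            # deletion
                           cur[j - 1] + 1,         # insertion
                           prev[j - 1] + (r != h)))  # substitution
        prev = cur
    return prev[-1] / max(len(ref), 1)

# One substitution over three reference words: WER = 1/3, below the 0.35 cut.
flagged = wer("the cat sat", "the bat sat") > 0.35
```

With the 0.35 threshold, this three-word example with a single substituted word (WER about 0.33) would just survive the filter, while any second error would push it over the line.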