# LibriHeavy TTS (WIP, not released yet)
An improved version of LibriHeavy designed for TTS training. It is built on top of mythicinfinity/libriheavy and focuses on higher-quality audio and text supervision.

The base LibriHeavy is a 50,000-hour ASR corpus with punctuation, casing, and context, and is itself a labeled version of Libri-Light.

Audio files in this dataset are encoded with the Opus codec at 68 kbps to retain quality while reducing size.
## Why This Dataset
- Utilizes higher-fidelity LibriVox source audio.
- Adds corrected training text (`text_corrected`) to reduce text/audio mismatch noise.
- Filters out rows whose speech is truncated at the end of the audio.
- Provides VAD-based trim metadata for cleaner supervision.
- Keeps full untrimmed audio so consumers can apply their own trimming policy.
## Usage (Trim Transform)

```python
import math

from datasets import load_dataset

ds = load_dataset('mythicinfinity/libriheavy-tts', 'dev')

def trim_audio_transform(batch):
    """Trim each waveform to its suggested VAD boundaries."""
    audios = batch['audio']
    starts = batch['audio_trim_start_s']
    ends = batch['audio_trim_end_s']
    out = []
    for audio, start_s, end_s in zip(audios, starts, ends):
        start = float(start_s)
        end = float(end_s)
        sr = int(audio['sampling_rate'])
        arr = audio['array']
        # Convert second-based boundaries to sample indices.
        start_idx = max(0, int(math.floor(start * sr)))
        end_idx = max(0, int(math.ceil(end * sr)))
        out.append({'array': arr[start_idx:end_idx], 'sampling_rate': sr})
    batch['audio'] = out
    return batch

ds = ds.with_transform(trim_audio_transform)
```
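The second-to-sample conversion at the heart of the transform can be exercised on its own. A minimal sketch with a synthetic waveform (the `trim_waveform` helper is illustrative, not part of the dataset):

```python
import math

import numpy as np

def trim_waveform(arr, sr, start_s, end_s):
    # Convert second-based boundaries to sample indices, clamped to the array.
    start = max(0, int(math.floor(start_s * sr)))
    end = min(len(arr), max(0, int(math.ceil(end_s * sr))))
    return arr[start:end]

wave = np.zeros(16000)  # one second of audio at 16 kHz
trimmed = trim_waveform(wave, 16000, 0.25, 0.75)
print(len(trimmed))  # 8000 samples, i.e. half a second
```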
## Which Text Column Should I Use?
- Use `text_corrected` by default for TTS training targets.
- Use `text_original` when you want the original reference text from the base dataset.
- Use `text_transcription` when you prefer transcription-style text.
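In a training pipeline this choice can be made per row; the sketch below prefers `text_corrected` and falls back to the other columns when it is missing or empty (the fallback order and helper name are our own convention, not part of the dataset):

```python
def pick_text(row, prefer='text_corrected'):
    # Fall back to the other text columns if the preferred one is missing/empty.
    for col in (prefer, 'text_original', 'text_transcription'):
        value = row.get(col)
        if value:
            return value
    raise KeyError('no usable text column in row')

row = {'text_corrected': '', 'text_original': 'Hello, world.'}
print(pick_text(row))  # "Hello, world."
```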
## Column Descriptions

- `audio`: Full untrimmed audio waveform for each utterance.
- `text_corrected`: Corrected text intended as the primary training text.
- `text_original`: Original text from the base LibriHeavy dataset.
- `text_transcription`: Transcription text from the base LibriHeavy dataset.
- `audio_trim_start_s`: Suggested start trim boundary (seconds).
- `audio_trim_end_s`: Suggested end trim boundary (seconds).
- `id`: Utterance identifier.
- `audio_duration`: Duration in seconds from the base dataset, when present.
- `speaker_id`: Speaker identifier from the base dataset, when present.
- `librivox_book_id`: LibriVox book identifier from the base dataset, when present.
## Improvement Details
### 1. Source Quality
- Motivation: TTS quality is sensitive to source fidelity and compression artifacts.
- Method: download higher-quality LibriVox source files and extract the audio segments from them. We also retain the original, higher sampling rate. This typically upgrades the source audio from a 64 kbps MP3 to a 128 kbps MP3.
- Expected impact: cleaner acoustic detail for synthesis model learning.
### 2. Transcript Correction
- Motivation: text/audio mismatches introduce noisy supervision, and we observe frequent, sometimes large, mismatches in the base dataset.
- Method: we finetune an 8B LLM to match the content of `text_transcription` to the format of `text_original`. This retains the punctuation and specific word spellings that the transcription model does not provide, while ensuring the text more accurately reflects what is spoken.
- Models: brthor/transcript-correction-loras.
- Expected impact: lower effective text/audio mismatch for training.
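A correction pass of this shape pairs the two text columns in a single prompt. The template and helper below are an illustrative sketch only; the actual prompt format used to train the released LoRAs is not documented here:

```python
def build_correction_prompt(text_transcription: str, text_original: str) -> str:
    # Hypothetical prompt: ask the model to restate the transcription's
    # content in the punctuation/casing style of the original text.
    return (
        'Rewrite the TRANSCRIPTION so it matches what was spoken, '
        'keeping the punctuation and spelling conventions of the ORIGINAL.\n'
        f'ORIGINAL: {text_original}\n'
        f'TRANSCRIPTION: {text_transcription}\n'
        'CORRECTED:'
    )

prompt = build_correction_prompt('hello world', 'Hello, world!')
print(prompt)
```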
### 3. Truncation Detection
- Motivation: truncated endings can teach undesirable truncation bias. We observe some truncation.
- Method: truncation detection is applied and truncated samples are filtered out.
- Model: mythicinfinity/speech-truncation-detection-12M.
- Expected impact: reduced truncation-related artifacts in downstream models.
### 4. VAD-Based Trimming
- Motivation: excess leading/trailing silence is usually undesirable for TTS supervision.
- Method: VAD-derived trim boundaries are provided as metadata columns.
- Expected impact: easier dataset cleanup while preserving full original audio.
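The released boundaries come from a VAD model; purely as an illustration of how such metadata is derived, a crude energy-based stand-in can produce start/end trim points (the frame size and threshold here are arbitrary assumptions, not the actual VAD configuration):

```python
import numpy as np

def energy_trim_bounds(arr, sr, frame=0.02, threshold=1e-3):
    # Mark 20 ms frames whose mean energy exceeds the threshold as speech,
    # then return the first/last speech frame as trim boundaries in seconds.
    hop = int(frame * sr)
    n_frames = len(arr) // hop
    energies = [float(np.mean(arr[i * hop:(i + 1) * hop] ** 2)) for i in range(n_frames)]
    voiced = [i for i, e in enumerate(energies) if e > threshold]
    if not voiced:
        return 0.0, len(arr) / sr
    return voiced[0] * frame, (voiced[-1] + 1) * frame

sr = 16000
silence = np.zeros(sr)  # 1 s of silence
tone = 0.5 * np.sin(2 * np.pi * 440 * np.arange(sr) / sr)  # 1 s of tone
wave = np.concatenate([silence, tone, silence])
start_s, end_s = energy_trim_bounds(wave, sr)
print(start_s, end_s)  # roughly 1.0 and 2.0
```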
## Configs
Each dataset config exposes a single split named train.
The hour and speaker statistics below are those of the original LibriHeavy configs; approximately 3% of rows have since been filtered out by truncation detection.
- `small` (train): 509 hours of speech. 417 speakers averaging 1.22 hours per speaker.
- `medium` (train): 5042 hours of speech. 1531 speakers averaging 3.29 hours per speaker.
- `large` (train): 50794 hours of speech. 6736 speakers averaging 7.54 hours per speaker.
- `dev` (train): 22.3 hours of speech. 141 speakers averaging 0.16 hours per speaker.
- `test_clean` (train): 10.5 hours of speech. 70 speakers averaging 0.15 hours per speaker.
- `test_other` (train): 11.5 hours of speech. 72 speakers averaging 0.16 hours per speaker.
- `test_clean_large` (train): 107.5 hours of speech. 72 speakers averaging 1.49 hours per speaker.
- `test_other_large` (train): 100.3 hours of speech. 73 speakers averaging 1.37 hours per speaker.
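The per-speaker averages follow directly from the stated totals, which makes the table easy to sanity-check:

```python
# Hours of speech and speaker counts for the three main configs, as listed above.
configs = {
    'small': (509, 417),
    'medium': (5042, 1531),
    'large': (50794, 6736),
}
for name, (hours, speakers) in configs.items():
    print(f'{name}: {hours / speakers:.2f} hours per speaker')
```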
## Usage

### Load a Single Config

```python
from datasets import load_dataset

small = load_dataset("mythicinfinity/libriheavy-tts", "small", split="train")
```
Targeting a specific config only downloads files declared for that config, which is a good way to control disk usage.
### Load the Full Dataset (All Configs)
```python
from datasets import concatenate_datasets, load_dataset

ALL_CONFIGS = [
    "small",
    "medium",
    "large",
    "dev",
    "test_clean",
    "test_clean_large",
    "test_other",
    "test_other_large",
]

def load_libriheavy_tts_all_train(configs: list[str] | None = None):
    """Concatenate the train split of every requested config."""
    cfgs = configs or ALL_CONFIGS
    parts = [
        load_dataset("mythicinfinity/libriheavy-tts", cfg, split="train")
        for cfg in cfgs
    ]
    return concatenate_datasets(parts)

full = load_libriheavy_tts_all_train()
```
## Intended Use Cases
- Text-to-Speech (TTS)
- Automatic Speech Recognition (ASR)
## Provenance, License, and Citation
- Base dataset: mythicinfinity/libriheavy
- Base project homepage: https://github.com/k2-fsa/libriheavy
- LibriHeavy paper: Libriheavy: a 50,000 hours ASR corpus with punctuation casing and context
- License: apache-2.0
### Citation

```bibtex
@misc{kang2023libriheavy,
  title={Libriheavy: a 50,000 hours ASR corpus with punctuation casing and context},
  author={Wei Kang and Xiaoyu Yang and Zengwei Yao and Fangjun Kuang and Yifan Yang and Liyong Guo and Long Lin and Daniel Povey},
  year={2023},
  eprint={2309.08105},
  archivePrefix={arXiv},
  primaryClass={eess.AS}
}
```